linux – Is there an upper limit on the number of files and a maximum capacity per directory on Ubuntu?

Question:

Is there an upper limit on the number of files, or a maximum total capacity, per directory on Ubuntu?

Currently, I am building a CMS and saving each article's thumbnail image under a name tied to the article ID, which is what made me worry about the above.
I am wondering whether there will be a problem once hundreds of thousands of articles have been written, each with its own thumbnail.
I am using Ubuntu 14.04 LTS.

Answer:

Assuming no quotas have been set, you will either run out of inodes (the maximum number of files) or hit the capacity limit of the mount point.
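You can check both limits yourself with `df`. A quick check (run here against the current directory; substitute the directory where your CMS stores thumbnails):

```shell
# Free inodes (remaining number of files) on this filesystem:
df -i .
# Free capacity on the same filesystem:
df -h .
```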

Regarding performance: when listing or searching the files in a directory, the more files there are, the higher the load and the worse the performance. If you know the file name and read or write the file directly, no performance degradation will be seen.

I don't know this particular CMS, but if the link between an article and its thumbnail image is managed in a database or similar, so the CMS always knows the file name, I expect no degradation in file read/write performance.
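As a sketch of that idea (the `<article_id>.jpg` naming scheme and the `/var/cms/thumbs` path are my assumptions, not part of the original answer): if the path can be computed from the article ID, the directory never has to be listed or searched.

```shell
# Hypothetical layout: each thumbnail is saved as <article_id>.jpg.
# The path is derived directly from the ID stored in the DB,
# so no directory scan is ever needed.
thumb_path() {
    printf '/var/cms/thumbs/%s.jpg' "$1"
}

thumb=$(thumb_path 12345)
echo "$thumb"    # → /var/cms/thumbs/12345.jpg
```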

If, instead, you search the directory to find an image, CPU load, memory usage, and disk I/O all increase, and response times get longer.

Also, file operations from the shell become troublesome (`ls` takes forever, filename expansion never finishes, commands fail because too many arguments are passed, and so on).

How much performance degrades depends on your environment, so it is a good idea to run performance measurements to determine an appropriate number of files.
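A minimal measurement sketch (the counts and names are placeholders; scale the loop up toward your expected article count). It contrasts the cost of listing the directory with the cost of one direct read by known name:

```shell
# Create a scratch directory with many small files, then compare
# listing the whole directory against reading one file directly.
dir=$(mktemp -d)
for i in $(seq 1 1000); do
    : > "$dir/thumb_$i.jpg"
done
time ls -1 "$dir" | wc -l         # cost grows with the file count
time cat "$dir/thumb_500.jpg"     # direct access by known name stays fast
rm -rf "$dir"
```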

Below are the results of an experiment to see how many files can actually be created.


  • VirtualBox VM
    • 1024 MB memory
    • 6 processors
    • Storage controller: IDE
    • Disk 1: 16.00 GB main.vdi (/dev/sda)
    • Disk 2: 100.00 GB test.vdi (/dev/sdb)
  • CentOS 7.1

After building the 100 GB ext4 file system, 10 inodes were in use and 6,553,590 were free.

Filesystem           Inodes    IUsed     IFree IUse% Mounted on
/dev/sdb1           6553600       10   6553590    1% /var/test

I tried to create 100 million files by splitting a 1,000,000,000-byte file into 10-byte pieces, but it failed with an error.

# dd if=/dev/zero of=zero.dat bs=1000 count=1000000

# split -a 8 -b 10 zero.dat
split: xaaaoiwra: No space left on device
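The same behavior can be reproduced in miniature (a harmless scratch-directory version of the commands above): a 1,000-byte file split into 10-byte pieces yields exactly 100 files.

```shell
tmp=$(mktemp -d) && cd "$tmp"
# 10 blocks of 100 bytes = 1,000 bytes
dd if=/dev/zero of=zero.dat bs=100 count=10 2>/dev/null
# split into 10-byte pieces named xaaaaaaaa, xaaaaaaab, ...
split -a 8 -b 10 zero.dat
ls -1 x* | wc -l    # → 100
```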

The number of files created matched the IFree value shown earlier. Counting them with `ls` took more than an hour, with CPU usage around 30–100% and memory usage around 85%.

# time ls -1 | wc -l

real    68m27.671s
user    1m2.893s
sys     21m7.490s

IUse% shows 100%, so you can see that no more files can be created.

# df -i
Filesystem    Inodes    IUsed   IUse% Mounted on
/dev/sdb1    6553600  6553600    100% /var/test

Disk usage is only 27 GB, so there is still plenty of capacity left.

# df -h
Filesystem    Size  Used  Avail  Use% Mounted on
/dev/sdb1      99G   27G    68G   29% /var/test

There was no noticeable delay when reading or writing a file by specifying its name directly.

# time od -x xaaaaxxxx
00000000 0000 0000 0000 0000 0000

real     0m0.169s
user     0m0.000s
sys      0m0.008s

Shell filename completion and glob (wildcard) expansion take a terribly long time, and commands like rm xa* fail because too many arguments are passed.

I deleted the files as follows.

# ls -1 | egrep -e '^x[a-z]{8}$' | xargs rm
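An alternative worth knowing (my addition, not in the original answer): `find` with `-delete` removes the matches itself, so it avoids both the slow glob expansion and the argument-list limit.

```shell
# Delete every regular file starting with 'x' in the directory,
# without expanding a huge argument list in the shell.
find /var/test -maxdepth 1 -type f -name 'x*' -delete
```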