
Have you ever heard of tmpfs in Linux? It is that small temporary file system residing within memory and enabled by default on most Linux distributions. Ultra-fast, but usually limited in size. Can one grow it easily? Find out, and more!

What Is tmpfs?

Everything inside a computer has a certain native speed of operation. It starts with the CPU (Central Processing Unit) in your computer, which has a set of L1-Lx caches (Level 1 to Level x) that are very small (for example, 16KB) but ultra-fast (and likely ultra-expensive).

After the CPU caches come the main memory banks, which are still much faster (and, byte for byte, more expensive) than hard disks, and so on. As you can see, it's about cost versus size versus speed. The general rule is that cost goes up as speed goes up, and that size comes down to limit the cost.

If you were to store all of your data inside the main memory chips of your computer, which is technically possible and rather easy to do, your work would fly compared to when you're using disk alone, as memory chips are much faster than most hard disk drives.

There are, however, some technical limitations to this. Once you shut down your computer, your files will be gone. An unrecoverable application crash could be enough to force a restart and lose your files. Also, you could never safely shut down your computer again, unless it had some advanced feature (which doesn't exist as far as I know) to maintain your files in the memory chips, similar to a BBU-supported (Battery Backup Unit) cache on a RAID controller.

Note that there is, however, one similar (but not identical) feature that you might already be using: When you suspend your system to RAM (using sleep, suspend, or whatever terminology your operating system employs), some power will continue to be supplied to your memory chips to maintain their current data.

Then, when you resume your system, you will be able to continue where you left off. But shutting down while maintaining memory contents is generally not used with computers. It might potentially be used by smart tablets, although one could argue that such states are not true shutdown states, but rather, low-power states.

Having clarified how, generally speaking, it’s likely not a solid idea to save your files to memory chips, there are some other uses where it could come in handy. For example, when doing testing/quality assurance against programs, you are likely going to be starting and shutting down the program under test many times.

Such files are temporary and of little individual value (unless a bug is found, at which point, the data can be copied back to the main disk), and as such, could be stored in your memory chips. This is what tmpfs is and does: It’s a temporary file system inside your memory. Some of you might immediately object and say, “That’s not true,” and you would be correct. Read on.

Those who objected likely asserted that tmpfs space is not guaranteed to stay in memory, and this is true. You can see tmpfs as a hybrid between a true RAM disk (a disk created in volatile memory) and actual permanent disk storage. In certain cases, the Linux kernel will swap out tmpfs content into the system swap space, which could be on disk. This is done transparently (without user interaction being necessary).

If you want to learn more about setting up a ramdisk instead, see our How to Create a RAM Drive in Linux guide.

Your Current tmpfs Size

Now that we have tmpfs space better defined, let's take a look at the current size of your tmpfs space. You might think of the tmpfs system as a virtual, temporary, volatile drive. You can see the space in use using df -h (file system disk space usage (df) in human-readable format, thanks to the -h option):

df -h | grep -Ei 'shm|size'

Checking the current tmpfs usage using df -h

Here, we use df -h (explained above) combined with a pipe (|) to send the resulting output to grep, which uses extended regex syntax (-E) in a case-insensitive manner (-i, combined here with -E as -Ei) to select the header line (which includes the word Size) and any line that includes the text shm.

The reason that we grep for shm is that almost always, as we can also see in the output here, the tmpfs space is mapped to the file system directory /dev/shm. If the above command doesn’t generate any output, simply execute df -h and review the total results to look for tmpfs space, if any.
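As an alternative to grepping, df can also filter by file system type directly; this sketch assumes the GNU coreutils version of df, which supports the -t option:

```shell
# List only tmpfs mounts, with sizes in human-readable form.
# -t/--type filters by file system type (GNU coreutils df).
df -h -t tmpfs
```

This skips the header-matching trick entirely and shows every tmpfs mount on the system, including the /run and /sys/fs/cgroup mounts discussed below.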

Note that by default, the operating system will also allocate some tmpfs spaces of its own, which might, for example, be mounted at the /sys/fs/cgroup, /run and /run/lock directories. Please do not try to modify these.

As for /dev/shm and other such directories, please note that seeing these folders in the operating system directory tree doesn't mean that the files are actually stored on disk under some /dev/shm directory!

It simply means that tmpfs is mounted there, in line with the standard Linux practice of being able to mount drives (or, in this case, tmpfs) onto any directory in the file system hierarchy. If there were files in such a directory prior to the mounting, they would simply not be visible until the mount point was removed.

On this particular system, the tmpfs space is 32GB, and almost all of that is unused. The little that is in use (166MB) is the directory meta/index table itself, which is invisible to users but in place to be able to store files into the file system structure. By default, the size of the tmpfs is half of the system's physical memory on most Linux distributions.
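If you prefer a more targeted view of just this one mount, findmnt (part of util-linux, available on most distributions) can report it directly; the column selection below is one possible choice:

```shell
# Show the mount target, file system type, size, and mount options
# for the /dev/shm mount only (findmnt is part of util-linux).
findmnt -o TARGET,FSTYPE,SIZE,OPTIONS /dev/shm
```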

This is a fairly large tmpfs and would only be technically justified, to some extent, on a system with, say, 40GB or more of physical memory, although a particular use case might warrant other setups. In general, I recommend keeping the tmpfs space below roughly 70-80% of memory, and significantly lower than that if RAM (Random Access Memory, another way to refer to your system memory) is small to start with.

The reason for this is that you want to leave enough memory available for running other programs, the operating system, and software services.

For example, you wouldn’t want to allocate 80% of memory if your system had 2GB of memory, as this would likely leave way too little for other things to operate correctly or at all. If, on the other hand, you have a hefty 256GB of memory, even 90% of that (230.4GB) would leave a nice 25GB available, which—depending on the use case—might (or might not) be plenty.
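To turn such a percentage into a concrete size= value, you can read the total memory from /proc/meminfo and do a little shell arithmetic. The 70 percent figure below is just an example; adjust it to your situation:

```shell
# Compute N% of physical RAM in megabytes as a candidate tmpfs size.
pct=70                                                 # example percentage
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)  # total RAM in kB
tmpfs_mb=$(( mem_kb * pct / 100 / 1024 ))              # convert to MB
echo "size=${tmpfs_mb}M"
```

The printed value can be dropped straight into the size= mount option discussed in the next section.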

In summary, I recommend that you always tune the size of tmpfs depending on 1) how much space you really need in tmpfs, 2) how much memory is in your system, and 3) how much actual memory other applications are using besides tmpfs (including your operating system, services, etc.). Experience with all of these things helps here.

Enlarging tmpfs on Your System

Now that we know the size of the current tmpfs volume and what size to give it in the future using some of the considerations provided in the last paragraph above, we can take a look at enlarging (or shrinking) our tmpfs space.

Doing so is quite easy. Rather than having the operating system automatically configure the /dev/shm tmpfs space for us, we define it statically in the regular /etc/fstab file, which controls drive mappings at startup. We simply add:

# <file system> <mount point> <type> <options>                                       <dump> <pass>
tmpfs           /dev/shm      tmpfs  defaults,rw,nodev,nofail,noatime,nosuid,size=2G   0      0

to the /etc/fstab file. Do not copy the first commented (#-prefixed) line, as that will already be there. Also, change the 2G (2GB) size to your calculated/estimated requirement for the tmpfs size. It's likely not helpful to set this close to or over the size of available memory, as we explained earlier.
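Before rebooting, it's worth checking the edited file for typos. findmnt --verify (available in util-linux 2.29 and later) parses /etc/fstab and reports problems without actually mounting anything:

```shell
# Dry-run check of /etc/fstab; a non-zero exit status means findmnt
# found issues worth reviewing before you reboot.
findmnt --verify || echo "Review the warnings above before rebooting."
```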

The header markings in the /etc/fstab file clarify the meanings of the fields, and you can find more information in the fstab manual, which can be obtained by typing man fstab. Basically, we're asking the operating system to mount a file system of type tmpfs at the mount point /dev/shm. (Since tmpfs lives in memory, there is no device to specify; the first field simply reads tmpfs.)

We also set a number of options. You'll likely want to keep defaults,rw,nodev,nofail at a minimum (use the default options, allow read/write access, do not interpret device files on this file system, and do not fail the boot if this mount somehow fails).

You could also keep the noatime,nosuid options if you like, or remove them (and their matching commas). noatime skips updating file access times, which makes things a little faster and keeps less information about files on the tmpfs space (ideal for testing/QA setups), while nosuid ignores set-user-ID bits, which is a sensible security precaution. Also, keep the size=xG parameter, changing x to your chosen size. Finally, we have a rather standard 0 0 for dump and pass (see man fstab for more info on these two fields).

Once the change has been made, simply restart your system and execute df -h to verify that your /dev/shm tmpfs space is now at the new size you set. If something went amiss, simply check dmesg (typed at your command prompt) and scan upward for any red error messages (you'll likely have to scroll) to find out what happened. Even if something did go wrong, the system should have started fine anyway, provided that you didn't remove the nofail option.
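If a reboot is inconvenient, you can usually apply the new size to the live mount straight away (run as root); existing files are preserved as long as they still fit in the new size. The 2G value matches the fstab example above; substitute your own:

```shell
# Resize the live /dev/shm mount in place; requires root privileges.
mount -o remount,size=2G /dev/shm || echo "remount failed (are you root?)"
df -h /dev/shm
```

The /etc/fstab entry is still needed so that the new size survives the next reboot.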

For Ubuntu users, there is, however, a possible nofail bug to be aware of. There used to be an alternative nobootwait option instead, although it's unclear whether this is still usable and, if so, on which versions of Ubuntu and its derivatives. As a final alternative to test, provided that systemd is being used, one could consider the x-systemd.device-timeout=10 mount option, where 10 is the number of seconds one is willing to wait during startup.

Wrapping up

In this article, we took a thorough look at tmpfs sizing, keeping in mind the use case and other factors.

We also explained tmpfs in detail, discovered how to find the current size of the machine’s tmpfs file system, and finally looked at how to resize tmpfs.

Enjoy using tmpfs!

Roel Van de Paar
Roel has 25 years of experience in IT & business, 9 years of leading teams, and 5 years in hiring & building teams. He worked for companies like Oracle, Volvo, Sun, Percona, Siemens, Karat, and now MariaDB in various senior, principal, lead, and managerial roles.