howtouselinux

3 Ways to check Too Many Open Files Error in Linux


When the “Too Many Open Files” error message is written to the logs, it indicates that all available file handles for the process have been used (this includes sockets as well).

In the majority of cases, this is the result of file handles being leaked by some part of the application. ulimit is a Unix/Linux command that lets you view and set resource limits for a process, including the maximum number of open files.

Understanding Too Many Open Files Error in Linux

“Too many open files” errors happen when a process tries to open more files than the operating system allows. This number is controlled by the maximum number of file descriptors available to the process.

file-max is the maximum number of file descriptors (FD) enforced at the kernel level; the total across all processes cannot exceed it. The ulimit is enforced at the process level and can be set lower than file-max.

In some scenarios, even though the ulimit has been correctly configured, the system-wide limit may still be lower than the total number of files opened by all processes. In that case, any process that tries to open another file hits the system-wide maximum and fails.

This error is generated when the number of open files for a user or for the system as a whole exceeds the configured limit.

Check Too many open files in SYSTEM Wide settings in Linux

To see the maximum number of open files at the OS level, use the following command:

# cat /proc/sys/fs/file-max

To change the system-wide maximum open files, edit /etc/sysctl.conf as root and add the following to the end of the file:

fs.file-max = 495000

Then issue the following command to activate this change to the live system:

# sysctl -p
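sysctl can also read and write the value directly, without editing /etc/sysctl.conf; note that a `-w` change is lost at reboot, so persistent changes still belong in the file as shown above (the value 495000 just mirrors the example):

```shell
# Read the current kernel-wide maximum
sysctl -n fs.file-max

# Temporary change, lost at reboot (requires root)
sysctl -w fs.file-max=495000
```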

Check Too many open files Per USER Settings in Linux

To see the maximum open files setting for a user, issue the following commands as root:

# su - <user>
$ ulimit -n

The default setting for this is usually 1024. If a specific user needs more, modify /etc/security/limits.conf as root:

user - nofile 2048
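The single dash in that entry sets the soft and hard limits at once; they can also be set separately. A sketch, with an illustrative user name and values:

```
# /etc/security/limits.conf
alice  soft  nofile  4096
alice  hard  nofile  8192
```

The soft limit is what the process starts with; the hard limit is the ceiling up to which the user may raise it with `ulimit -n`. The new values apply at the next login.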

Check Too many open files in systemd service on Linux

If we run a process as a systemd service, we can specify limits within the [Service] section of its unit file, typically located at /etc/systemd/system/<process-name>.service.

Limits are set with resource limit directives.

Specify the recommended ulimit settings, as in the following example:

[Service]
# Other directives omitted
# (file size)
LimitFSIZE=infinity
# (cpu time)
LimitCPU=infinity
# (virtual memory size)
LimitAS=infinity
# (locked-in-memory size)
LimitMEMLOCK=infinity
# (open files)
LimitNOFILE=64000
# (processes/threads)
LimitNPROC=64000
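After editing the unit file, systemd has to reload it and restart the service before the new limits take effect. A sketch, assuming a systemd system and root access; myapp.service is a placeholder name:

```shell
# Reload unit files and restart the service
systemctl daemon-reload
systemctl restart myapp.service

# Confirm the limit from the kernel's point of view: read the
# service's main PID and inspect its limits file under /proc
pid=$(systemctl show -p MainPID --value myapp.service)
grep 'Max open files' /proc/"$pid"/limits
```

Checking /proc/<pid>/limits is more reliable than re-reading the unit file, because it shows what the running process actually received.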

Quick Guide on how to fix Too Many Open Files in Linux

su to the user you want to check; we use the root user as the example below.
su - root

Check max file number limit in Linux

#Check max limit in Linux
cat /proc/sys/fs/file-max

#Check the maximum number of FDs a single process can allocate
sysctl fs.nr_open

#Check allocated file handles system-wide (first field is the number in use)
cat /proc/sys/fs/file-nr

#Check Hard Limit in Linux
ulimit -Hn

#Check Soft Limits in Linux
ulimit -Sn

#Check current number of open files being used by root
lsof -u root | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'
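If lsof is slow or not installed, the same count can be taken straight from /proc. pgrep selects root's processes; some /proc entries may disappear mid-scan, hence the silenced errors (run as root to be able to read them all):

```shell
# Count open file descriptors across all of root's processes via /proc
for pid in $(pgrep -u root); do
  ls /proc/"$pid"/fd 2>/dev/null
done | wc -l
```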

#To increase roots limits
add or modify the lines below in the limits.conf file
vi /etc/security/limits.conf

root soft nofile 2048
root hard nofile 4096

#To increase the system-wide limits
vi /etc/sysctl.conf
Add the following:
fs.file-max = 3213254

#To make the changes take effect
Run:
sysctl -p

Open a new shell and verify that the changes have taken effect;
the new values should be shown by running

ulimit -Hn
ulimit -Sn

Because root's limits were changed, the application must be restarted from a new shell window, or the system rebooted, for the changes to take effect.
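If a restart is not possible right away, util-linux's prlimit can change the limit of an already-running process in place. A sketch, assuming root access; PID 1234 is a placeholder:

```shell
# Raise the open-files limit of a running process to soft 4096 / hard 8192
prlimit --pid 1234 --nofile=4096:8192

# Verify the new values
prlimit --pid 1234 --nofile
```

This only affects the running process; to survive a restart, the change must still be made in limits.conf or the systemd unit file as described above.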

Is there a limit to the number of open files a user can have in Linux?

There is no single answer, as it varies with the system and its configuration. By default, the soft limit for a process is typically 1024 file descriptors, with the hard limit often set higher (for example, 4096). The limit can be raised if necessary, but it is best to stay within a sensible range whenever possible.

David Cao

Hey there! I am David, a Cloud & DevOps enthusiast with 18 years of experience as a Linux engineer. I work with AWS, Git & GitHub, Linux, Python, Ansible, and Bash. I am a technical blogger and software engineer, and I enjoy sharing what I learn and contributing to open source.