3 Ways to Check the Too Many Open Files Error


When the “Too Many Open Files” error message is written to the logs, it means the process has used up all of its available file handles (sockets count as well).

In the majority of cases this is the result of file handles being leaked by some part of the application. ulimit is a Unix/Linux command that lets you view and set per-process resource limits.

Understanding Too Many Open Files Error

Too many open files errors happen when a process needs to open more files than the operating system allows. This number is controlled by the maximum number of file descriptors the process has.

file-max is the maximum number of file descriptors (FD) enforced at the kernel level; the combined open files of all processes cannot surpass it. The ulimit is enforced at the process level and can be lower than file-max.
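
The two levels can be compared directly from a shell:

```shell
# Kernel-wide ceiling vs. this shell's own per-process limit
echo "system-wide file-max  : $(cat /proc/sys/fs/file-max)"
echo "per-process soft limit: $(ulimit -n)"
```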

In some scenarios, even though the ulimit has been correctly configured, the total number of open files allowed for the entire system may be set to a value lower than the number of files opened by all processes combined.

If a process then tries to open a file, it will hit the maximum number of files allowed for the system.

In short, this error is generated when the open file limit for a user or for the system as a whole is exceeded.

How to fix Too Many Open Files in system-wide settings

To see the maximum number of open files at the OS level, use the following command:

# cat /proc/sys/fs/file-max

To change the system-wide maximum open files, as root edit /etc/sysctl.conf and add the following line to the end of the file:

fs.file-max = 495000

Then run the following command to apply the change to the running system:

# sysctl -p

How to fix Too Many Open Files in per-user settings

To see the maximum open files setting for a user, as root issue the following commands:

# su - <user>
$ ulimit -n

The default setting is usually 1024. If a specific user needs more, then as root modify it in the /etc/security/limits.conf file:

user - nofile 2048
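
limits.conf distinguishes soft and hard limits: a user can raise their own soft limit at runtime, but only up to the hard limit. A quick sketch:

```shell
# A user may raise the soft limit up to (but not beyond) the hard limit
bash -c '
  hard=$(ulimit -Hn)
  ulimit -Sn "$hard" && echo "soft limit raised to $(ulimit -Sn)"
'
```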


How to fix Too Many Open Files in a systemd service

If a process runs as a systemd service, we can specify limits within the [Service] section of its unit file, typically located at /etc/systemd/system/<process-name>.service.

Limits are set using the resource limit directives (LimitNOFILE, LimitNPROC, and so on).

Specify the recommended ulimit settings, as in the following example (the values shown are illustrative; adjust them to your workload):

[Service]
# Other directives omitted
# (file size)
LimitFSIZE=infinity
# (cpu time)
LimitCPU=infinity
# (virtual memory size)
LimitAS=infinity
# (locked-in-memory size)
LimitMEMLOCK=infinity
# (open files)
LimitNOFILE=64000
# (processes/threads)
LimitNPROC=64000

After editing the unit file, run systemctl daemon-reload and restart the service for the new limits to take effect.

Quick guide: how to fix the Too Many Open Files error

su to the user we want to check; we use the root user as an example below.
su - root

Check max file number limit in Linux

#Check max limit in Linux
cat /proc/sys/fs/file-max

#Check the maximum number of FDs a single process may open (kernel ceiling)
sysctl fs.nr_open

#Check allocated, unused, and maximum file handles system-wide
sysctl fs.file-nr

#Check Hard Limit in Linux
ulimit -Hn

#Check Soft Limits in Linux
ulimit -Sn

#Check current number of open files being used by root
lsof -u root | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'
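
If lsof is unavailable, a per-process count can be read straight from /proc, where each open descriptor appears as an entry under fd/:

```shell
# Each open descriptor of a process appears under /proc/<pid>/fd;
# $$ is the current shell's PID
ls /proc/$$/fd | wc -l
```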

#To increase root's limits,
#add/modify the lines below in the limits.conf file
vi /etc/security/limits.conf

root soft nofile 2048
root hard nofile 4096

#To increase the system-wide limits
vi /etc/sysctl.conf
Add the following line:
fs.file-max = 3213254

#To make the changes take effect
sysctl -p

Open a new shell and verify that the changes have taken effect; the new values should be shown by running:

ulimit -Hn
ulimit -Sn

As root's limits were changed, restart the application from a new shell window, or restart the system, for the changes to take effect.
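
To confirm what an already-running process actually has (rather than what a new shell would get), read its limits file under /proc; /proc/self/limits shows the current process:

```shell
# Effective open-files limit of the current process; substitute a real
# PID (e.g. /proc/1234/limits) to inspect another process
grep "Max open files" /proc/self/limits
```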

