Monitoring memory consumption is a fundamental task for any PostgreSQL administrator. Unlike simple applications, PostgreSQL uses a sophisticated memory architecture that splits resources between a large shared pool and smaller, session-local areas for specific tasks like sorting and joining. Understanding how these segments are utilized is the key to preventing “Out of Memory” (OOM) errors and ensuring that your queries aren’t bottlenecked by slow disk I/O.
This guide provides practical SQL queries to verify how much memory your database is using and, more importantly, how efficiently it is using it. By analyzing cache hit ratios and temporary file generation, you can determine if your current settings require a strategic adjustment to maintain peak performance.
Key Takeaways for Memory Auditing
- Shared Buffers → The primary cache for data blocks; a healthy production system usually aims for a 90%+ hit ratio.
- Work Memory → Allocated per query operation for sorting and hashing; if this is too low, Postgres will “spill” data to slow disk files.
- Double Buffering → Postgres relies on both its own buffers and the OS page cache to keep data accessible in RAM.
- OOM Killer → A Linux kernel mechanism that may terminate your database if the total system memory is exhausted by over-aggressive configuration.
- Memory Contexts → Granular structures used to track allocations within a backend process; these can be dumped to logs for deep debugging.
Step 1: Auditing Core Memory Configuration
Before making changes, you should always audit your current “limits” to ensure you aren’t accidentally over-committing the physical RAM available on your server. The pg_settings view provides the logical limits currently enforced by the cluster.
SQL Command:
SELECT name, setting, unit, short_desc
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'max_connections', 'temp_buffers');
Understanding the Parameters:
- shared_buffers: The amount of memory reserved for shared data caching.
- work_mem: The per-operation limit for internal sorts and hash tables.
- maintenance_work_mem: Memory for maintenance tasks like VACUUM or building indexes.
- max_connections: The maximum number of concurrent sessions, each consuming a small amount of overhead.
For a deeper look at how to tune these values, see Essential PostgreSQL configuration parameters for better performance.
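Because pg_settings reports these values in mixed units (blocks of 8 kB, kB, plain integers), comparing them directly is error-prone. As a minimal sketch, the built-in current_setting() and pg_size_bytes() functions can normalize them to bytes:

```sql
-- Normalize key memory settings to bytes for easy comparison
SELECT name,
       current_setting(name)               AS raw_value,
       pg_size_bytes(current_setting(name)) AS bytes
FROM (VALUES ('shared_buffers'),
             ('work_mem'),
             ('maintenance_work_mem')) AS t(name);
```

Summing these byte values against the server's physical RAM makes over-commitment easy to spot.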
Step 2: Analyzing Cache Efficiency (Shared Buffer Hit Ratio)
The most important indicator that you need more memory for your shared_buffers is the hit ratio. This metric tells you how often Postgres finds data in RAM versus having to read it from the disk.
SQL Command:
SELECT
datname,
round(100.0 * blks_hit / (blks_hit + blks_read), 2) AS hit_ratio
FROM pg_stat_database
WHERE blks_hit + blks_read > 0;
Judging the Result: For most OLTP (Online Transaction Processing) workloads, a hit ratio below 90–95% suggests your working set is larger than your cache. In this case, increasing shared_buffers to roughly 25–40% of system RAM is recommended.
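A database-wide ratio can hide problems in individual tables. As a follow-up sketch, the pg_statio_user_tables view breaks the same hit/read counters down per table, so you can find the specific relations that are forcing disk reads:

```sql
-- Per-table cache efficiency; tables with many heap_blks_read are cache misses
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(100.0 * heap_blks_hit
             / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```

The nullif() guard avoids division by zero for tables that have never been read.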
Step 3: Monitoring Temporary File Spillage
If work_mem is too small, queries involving ORDER BY, DISTINCT, or hash joins will run out of RAM and create temporary files on disk, which is significantly slower.
See also: Mastering the Linux Command Line — Your Complete Free Training Guide
SQL Command:
SELECT
datname,
temp_files AS file_count,
pg_size_pretty(temp_bytes) AS total_size
FROM pg_stat_database;
Judging the Result: If you see frequent increases in temp_files and total_size, your work_mem is insufficient. You can confirm this by running EXPLAIN ANALYZE on a slow query and looking for a line such as "Sort Method: external merge Disk: …" in the plan output. To identify specific tables causing issues, refer to Finding your tables in PostgreSQL.
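To attribute temp-file spillage to specific statements rather than whole databases, the optional pg_stat_statements extension (it must be listed in shared_preload_libraries and created with CREATE EXTENSION) tracks temporary blocks per query. A hedged sketch, assuming the default 8 kB block size:

```sql
-- Top statements by temporary data written to disk (requires pg_stat_statements)
SELECT left(query, 60)        AS query_start,
       temp_blks_written,
       pg_size_pretty(temp_blks_written * 8192) AS temp_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 5;
```

These are exactly the queries that would benefit from a session-level SET work_mem increase.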
Step 4: Checking Background Writer Efficiency
The pg_stat_bgwriter view allows you to observe the pressure on the buffer pool. If the background writer is constantly hitting its limit, the system might be struggling to manage memory pages.
SQL Command:
SELECT
buffers_clean,
maxwritten_clean,
buffers_alloc
FROM pg_stat_bgwriter;
Judging the Result: A high maxwritten_clean count indicates that the writer stopped because it reached its scan limit. This often means the shared buffer pool is too small to handle the current rate of data modification.
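Checkpoint behavior is a related pressure signal. On PostgreSQL 16 and earlier the checkpoint counters live in the same pg_stat_bgwriter view (in PostgreSQL 17 they moved to pg_stat_checkpointer as num_timed and num_requested); a sketch for the older layout:

```sql
-- A high share of requested (non-timed) checkpoints suggests WAL pressure
-- PostgreSQL 16 and earlier; see pg_stat_checkpointer on 17+
SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_req
             / nullif(checkpoints_timed + checkpoints_req, 0), 1) AS pct_requested
FROM pg_stat_bgwriter;
```

Frequent requested checkpoints often accompany an undersized buffer pool and heavy write load.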
Step 5: Granular Backend Memory Diagnosis
If a specific process is consuming excessive RAM, you can force it to dump its internal memory contexts into the server log for analysis.
Command Steps:
- Find the target PID from pg_stat_activity:
  SELECT pid, query FROM pg_stat_activity WHERE state = 'active';
- Log the memory contexts (requires superuser rights):
  SELECT pg_log_backend_memory_contexts(PID);
Judging the Result: Check the server log for the “Grand total”. If “Used bytes” is near your system limits, that specific query is likely causing memory pressure or may have a memory leak. For more on debugging these scenarios, check Troubleshooting slow queries in PostgreSQL: A step-by-step guide.
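On PostgreSQL 14 and later you can also inspect your own session's memory contexts directly, without superuser rights or log access, via the pg_backend_memory_contexts view; a minimal sketch:

```sql
-- Largest memory contexts of the current backend
SELECT name, used_bytes, total_bytes
FROM pg_backend_memory_contexts
ORDER BY used_bytes DESC
LIMIT 10;
```

This is handy for interactive debugging: run a suspect query in the same session, then re-run this view to see which contexts grew.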
Step-by-Step Process to Evaluate Memory Needs
- Run the Hit Ratio Query: Start with Step 2. If the ratio is below 90%, your shared_buffers needs to be increased.
- Audit Temporary Files: Check Step 3. If spillage is high, increase work_mem for better performance.
- Calculate Peak Consumption: Use the formula shared_buffers + (max_connections * work_mem * 2) to estimate the worst-case scenario and avoid the Linux OOM Killer.
- Analyze Maintenance Tasks: If indexes are built slowly, consider increasing maintenance_work_mem.
- Review Partitioning Strategy: Large numbers of partitions can spike memory usage because metadata for every partition must be loaded into local backend memory.
- Apply and Monitor: Update postgresql.conf, reload settings via SELECT pg_reload_conf();, and re-check hit ratios after 24 hours.
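The peak-consumption formula above can be evaluated directly against the running cluster. A sketch (this is a rough upper bound; actual per-query usage depends on plan shape, and complex queries can use several work_mem allocations at once):

```sql
-- Worst-case estimate: shared_buffers + max_connections * work_mem * 2
SELECT pg_size_pretty(
         pg_size_bytes(current_setting('shared_buffers'))
         + current_setting('max_connections')::bigint
           * pg_size_bytes(current_setting('work_mem')) * 2
       ) AS worst_case_estimate;
```

If this estimate approaches physical RAM, lower the limits or put a connection pooler in front of the database.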
Summary of Memory Adjustment Indicators
| Observation | Indicator/View | Recommended Action |
|---|---|---|
| Hit Ratio < 90% | pg_stat_database (blks_hit vs blks_read) | Increase shared_buffers |
| Frequent Temp Files | pg_stat_database.temp_bytes | Increase work_mem |
| Slow Index Creation | pg_stat_progress_create_index | Increase maintenance_work_mem |
| Too many partitions | pg_class (relkind = 'p') | Consolidate partitions to save local RAM |
| System OOM Killer triggered | Linux dmesg logs | Reduce memory limits or use a connection pool |
FAQs
Why does ps or top show Postgres using more RAM than shared_buffers? Postgres maps the entire shared_buffers pool into every process’s virtual address space. The OS reports this full size for every process, even though it only exists once in physical RAM.
How high can I set shared_buffers? For dedicated servers, 25–40% of RAM is the sweet spot. Exceeding 40% is usually counterproductive because it starves the OS page cache.
Can I adjust memory for a specific query without a restart? Yes. You can use SET work_mem = '64MB'; within a specific session to increase resources for a single heavy report.