Indexes are the primary tools used to enhance database performance, allowing the server to retrieve specific rows much faster than scanning an entire table.
However, simply creating an index does not guarantee that PostgreSQL will use it. Many developers are puzzled when a query performs a slow sequential scan despite a perfectly valid index being available on the target column.
The PostgreSQL query planner makes cost-based decisions, choosing the path it estimates will be the most efficient. While an index walk is often faster, it adds its own overhead, and there are several technical scenarios where the database engine intentionally ignores your index in favor of a sequential scan. This guide explains the core reasons why your indexes might be bypassed and how to diagnose these issues using built-in tools.
Key Takeaways for Index Optimization
- Query Planner → The engine that evaluates multiple execution strategies and selects the one with the lowest estimated cost.
- ANALYZE command → A vital maintenance task that updates the statistics (such as row counts and value distribution) the planner uses to make decisions.
- Table Selectivity → If a query returns a large fraction of the table (typically more than a few percent), a sequential scan is often faster than random-access index lookups.
- Operator Classes → Indexes are only usable if the operator in your WHERE clause (like `<` or `LIKE`) matches the algorithm supported by the index type.
- Functional Limitations → Standard indexes cannot be used if the column is wrapped in a function or expression unless a specific “index on expression” has been created.
Method 1: Dealing with Tiny Tables and Disk Pages
PostgreSQL is highly aware of the physical layout of your data on disk. If a table is very small—specifically if it fits within a single disk page (typically 8kB)—the planner will almost always prefer a sequential scan.
Since the system has to read at least one disk page to process the query anyway, there is no performance benefit to reading an additional index page first. This is why you may see indexes ignored in development environments with toy-sized datasets, even if those same indexes are critical in production.
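You can observe this behavior directly. The sketch below is a hypothetical demo (the table name `tiny_lookup` and its contents are invented for illustration): fifty short rows fit comfortably in a single 8kB page, so the planner will typically report a Seq Scan even though the primary key provides a valid index.

```sql
-- Hypothetical demo table, small enough to fit in one 8kB disk page
CREATE TABLE tiny_lookup (id int PRIMARY KEY, label text);
INSERT INTO tiny_lookup
SELECT g, 'item ' || g FROM generate_series(1, 50) AS g;
ANALYZE tiny_lookup;

-- Despite the primary-key index, expect a Seq Scan in the plan
EXPLAIN SELECT * FROM tiny_lookup WHERE id = 7;
```

Load the same table with millions of rows and the plan flips to an Index Scan, which is why behavior seen on toy datasets often does not carry over to production.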
Method 2: Correcting Outdated or Missing Statistics
The most common “invisible” cause for ignored indexes is stale statistics. The planner relies on the pg_statistic system catalog to estimate how many rows a query will return. If these figures are inaccurate because the table was recently loaded with data or modified, the planner may assume an index scan is too expensive.
How to Fix It: Run the ANALYZE command on your table. This performs a random sampling of rows to update the planner’s internal distribution maps.
ANALYZE your_table_name;
For more information on ensuring your tables are correctly configured, see How to create schemas and tables in PostgreSQL.
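If you suspect stale statistics, you can check when a table was last analyzed before running the command. The query below uses the standard `pg_stat_user_tables` view; `your_table_name` is a placeholder.

```sql
-- Check when the planner's statistics were last refreshed
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'your_table_name';
```

If both timestamps are NULL or predate a large data load, running ANALYZE manually is the quickest fix; autovacuum will normally keep these up to date afterward.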
Method 3: Matching Operators and Collations
Every index type (B-tree, GIN, GiST, etc.) only supports specific operators. For instance, a B-tree index is perfect for equality (=) and range (<, >) queries, but it cannot handle a LIKE search that starts with a wildcard (e.g., WHERE col LIKE '%term').
Furthermore, if your database uses a non-C locale, a standard B-tree index may not support LIKE at all unless you explicitly use a special operator class like varchar_pattern_ops. If your query uses a different collation than the index, the planner will also be forced to fall back to a sequential scan.
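A sketch of the operator-class fix, assuming a hypothetical `users` table with a text `name` column. The `varchar_pattern_ops` (or `text_pattern_ops` for a `text` column) operator class makes anchored prefix searches indexable in non-C locales; a leading wildcard still defeats any B-tree.

```sql
-- Pattern-matching operator class for a non-C locale database
CREATE INDEX users_name_pattern_idx ON users (name varchar_pattern_ops);

-- Indexable: the pattern is anchored at the start of the string
EXPLAIN SELECT * FROM users WHERE name LIKE 'ann%';

-- Not indexable by any B-tree: the leading wildcard forces a Seq Scan
EXPLAIN SELECT * FROM users WHERE name LIKE '%ann';
```

Note that a pattern-ops index only serves `LIKE` and regex comparisons; keep the ordinary B-tree index as well if you also need `<`, `>`, or `ORDER BY` on that column.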
Step-by-Step Process to Diagnose Index Usage
- Run EXPLAIN: Use the EXPLAIN command to see the execution plan. Look for “Seq Scan” where you expect an “Index Scan”.
- Use EXPLAIN ANALYZE: Run EXPLAIN ANALYZE to get the actual runtimes and row counts. This helps determine if the planner’s estimates match reality.
- Update Table Stats: Execute ANALYZE to ensure the planner is working with the most recent data distribution figures.
- Check Selectivity: Evaluate if your query is retrieving too much data. If you are selecting 7,000 out of 10,000 rows, an index is unlikely to be used.
- Audit the WHERE Clause: Ensure you aren’t using functions on the indexed column (e.g., WHERE lower(name) = 'bob') without an expression index.
- Force for Testing: Temporarily disable sequential scans to see if the planner can use the index and why it thinks it is more expensive.
SET enable_seqscan = off;
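A safer way to run this test is to scope the setting to a single transaction with SET LOCAL, so a forgotten session setting cannot affect later queries. The table and column names below are placeholders.

```sql
-- Hypothetical diagnostic session: disable seq scans for this transaction only
BEGIN;
SET LOCAL enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM your_table_name WHERE some_column = 'value';
ROLLBACK;  -- the setting disappears with the transaction
```

If the plan now shows an Index Scan, compare its estimated cost against the original sequential scan: the gap tells you how expensive the planner believes the index path is, which usually points at selectivity or statistics problems.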
Summary Table: Why Indexes are Ignored
| Reason | Technical Details | Solution |
|---|---|---|
| Small Dataset | Table fits in one or very few disk pages. | No action needed; normal behavior. |
| Low Selectivity | Query retrieves a high percentage of total rows. | Use LIMIT or more restrictive filters. |
| Stale Stats | pg_statistic is outdated after large writes. | Run ANALYZE. |
| Function Use | Column is wrapped in a function in the WHERE clause. | Create an Index on Expression. |
| Operator Conflict | Index doesn’t support the specific comparison operator. | Use correct Operator Class (e.g., text_pattern_ops). |
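The “Function Use” fix from the table above can be sketched as follows, again assuming a hypothetical `users` table. An expression index stores the precomputed result of the function, so queries filtering on that exact expression can use it.

```sql
-- A plain index on name cannot serve WHERE lower(name) = 'bob';
-- an index on the expression itself can.
CREATE INDEX users_lower_name_idx ON users (lower(name));
ANALYZE users;

EXPLAIN SELECT * FROM users WHERE lower(name) = 'bob';
```

The expression in the query must match the indexed expression for the planner to consider it.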
FAQs
Why does \di show my index exists, but Postgres still scans the whole table? The \di meta-command only proves the index is there; it doesn’t mean the index is efficient for your specific query. The planner may decide that the random I/O an index scan requires costs more than simply reading the table sequentially, even when it can mitigate that cost with a Bitmap Heap Scan.
Can I force PostgreSQL to use an index? PostgreSQL does not have “index hints” like some other databases. You can discourage other plan types by setting enable_seqscan = off, but the planner will still choose the cheapest remaining option.
How do I check if my indexes are being used over time? You can query the pg_stat_user_indexes system view. Look at the idx_scan column to see how many times an index has been utilized since the statistics were last reset.
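For example, the following query against `pg_stat_user_indexes` surfaces indexes that have never been used since the last statistics reset, which are candidates for removal:

```sql
-- Indexes with zero scans since pg_stat_reset() (or server start)
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
```

Be cautious before dropping anything: counters reset on `pg_stat_reset()`, and an index may exist to enforce a unique constraint rather than to speed up reads.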