
PostgreSQL: Bad query performance caused by unused index?

We have a query that returns all jobs with changes inside a certain time period. Depending on the selected period, performance goes from under 100 milliseconds for a day to about 7 seconds for a week.

I found that if the time period is small enough, the index is used and the query is fast. If the period gets too big, the index is not used and the query becomes slow.
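To confirm which plan each variant gets, the queries can be prefixed with EXPLAIN. A minimal sketch (substitute your own query text for the placeholder):

```sql
-- ANALYZE actually executes the query and reports real row counts
-- and timings; BUFFERS adds buffer-access statistics.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- your fast or slow query here
```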

The server runs PostgreSQL 9.2.

What causes this, and how can I fix it?

Create script:

Fast query:

Slow query:

Thanks in advance.


Answer

The slow query processes far more data (100000 vs. 2500 rows from "Timestamp"), so it is not surprising that it is slower.

You can force PostgreSQL to use a nested loop join with the slow query as well:
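One way to do that is with session-level planner settings; a sketch (these are debugging aids, not a fix, so reset them afterwards):

```sql
SET enable_hashjoin = off;   -- discourage the hash join
SET enable_mergejoin = off;  -- also rule out a merge join
-- ... run the slow query with EXPLAIN ANALYZE and note the timing ...
RESET enable_hashjoin;
RESET enable_mergejoin;
```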

Try that and see whether PostgreSQL was right and the nested loop join is really slower.

I suspect that PostgreSQL is doing the right thing here, and the best way for you to improve the performance would be to increase work_mem.
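work_mem can be raised per session for testing; the value below is only an illustrative guess, so tune it to your available RAM and connection count:

```sql
-- Give the hash join more memory to work with (session-local).
SET work_mem = '256MB';
-- Re-run the slow query; if it helps, consider setting work_mem
-- in postgresql.conf or per role/database instead.
```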

If you are willing to add another index and to VACUUM "Changes" often enough, you could get even better performance with an index-only scan:

On older versions of PostgreSQL that would be
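The exact index definition depends on your schema, which is not shown here; as a hedged sketch, assuming the query filters on "idTimestamp" and only reads "idJob" from "Changes", a covering index could look like this:

```sql
-- PostgreSQL 11 and later: a covering index with INCLUDE
CREATE INDEX "index_Changes_covering"
    ON "Changes" ("idTimestamp") INCLUDE ("idJob");

-- Before PostgreSQL 11 (e.g. the 9.2 server here), put the extra
-- column into the index key instead:
CREATE INDEX "index_Changes_covering"
    ON "Changes" ("idTimestamp", "idJob");
```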

Then you can drop the now-unnecessary index "index_Changes_idTimestamp".

By the way, you are making your life unnecessarily hard by using camel case and quoted identifiers.
