I am running a Postgres query with a CASE expression in a join condition. The query takes a long time to run. Is there a better way to optimize this query? Code snippet:
Answer: For a proper answer, attach the full query, the table structure (with indexes), and the execution plan. The original CASE is quite complicated, but it’s hard to say whether it’s responsible for
Tag: postgresql-performance
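Without the full query, only a general pattern can be suggested. Below is a minimal sketch, with made-up table and column names, of how a CASE in a join condition is commonly rewritten into plain boolean conditions or a UNION ALL so the planner can use ordinary index scans:

    -- Hypothetical schema: a(id, type, x_key, y_key), b(key, val).
    -- A join condition like
    --     JOIN b ON b.key = CASE WHEN a.type = 'x' THEN a.x_key ELSE a.y_key END
    -- can usually be expressed without CASE:
    SELECT a.id, b.val
    FROM   a
    JOIN   b ON (a.type =  'x' AND b.key = a.x_key)
             OR (a.type <> 'x' AND b.key = a.y_key);

    -- Or, often faster, as a UNION ALL of two simple equi-joins:
    SELECT a.id, b.val FROM a JOIN b ON b.key = a.x_key WHERE a.type =  'x'
    UNION ALL
    SELECT a.id, b.val FROM a JOIN b ON b.key = a.y_key WHERE a.type <> 'x';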
Performance impact of view on aggregate function vs result set limiting
The problem: Using PostgreSQL 13, I ran into a performance issue selecting the highest id from a view that joins two tables, depending on the select statement I execute. Here’s a sample setup: What I found out: I’m executing two statements which result in completely different execution plans and runtimes. The following statement executes in less than 100 ms. As far
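For reference, the two forms of “highest id” query being compared typically look like the sketch below (view and column names are assumptions, since the sample setup is not reproduced here); on a view over a join, the two forms can get very different plans:

    -- Aggregate form: max() may not be pushed down through the view's join.
    SELECT max(id) FROM the_view;

    -- LIMIT form: can usually walk an index on id backwards and stop at one row.
    SELECT id FROM the_view ORDER BY id DESC LIMIT 1;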
Lookups in a single PostgreSQL table suddenly extremely slow after large update
I have a messages table with a few million records in it. My Rails app includes a query on most pages to count the number of unread messages to show the user. This query – and all queries on the messages table – was unchanged and working fine until yesterday. Yesterday, I created a new messages column and ran
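A sketch of the usual first remedies after a large UPDATE, assuming columns user_id and read (the real schema isn’t shown): refresh statistics and reclaim dead rows, and keep the unread count cheap with a partial index:

    -- Large updates leave dead row versions and stale planner statistics behind.
    VACUUM (ANALYZE) messages;

    -- Partial index so the per-page unread count stays cheap (assumed columns):
    CREATE INDEX IF NOT EXISTS messages_unread_by_user_idx
        ON messages (user_id)
        WHERE read = false;

    -- The count query it serves:
    SELECT count(*) FROM messages WHERE user_id = $1 AND read = false;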
Optimizing GROUP BY + COUNT DISTINCT on unnested jsonb column
I am trying to optimize a query in Postgres, without success. Here is my table: I have indexes on the id and meta columns: There are 62k rows in this table. The query I’m trying to optimize is this one: In this query, meta is a dict like this one: I want to get the full list of key / value
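A sketch of the usual shape of such a query, with assumed names (tbl, id, meta): unnest each key/value pair with jsonb_each_text() and aggregate over the pairs:

    SELECT kv.key,
           kv.value,
           count(DISTINCT t.id) AS rows_with_pair
    FROM   tbl t
    CROSS  JOIN LATERAL jsonb_each_text(t.meta) AS kv(key, value)
    GROUP  BY kv.key, kv.value
    ORDER  BY kv.key, kv.value;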
SELECT FOR UPDATE becomes slow with time
We have a table with 1B entries, and 4 processes work on it simultaneously. They claim rows with their session IDs, 1000 rows at a time, and then update the table after 10,000 rows….
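One common fix for this claim-and-update pattern (a sketch, with an assumed jobs table and columns) is FOR UPDATE SKIP LOCKED, so the four workers never queue behind each other’s row locks:

    WITH claimed AS (
        SELECT id
        FROM   jobs
        WHERE  session_id IS NULL
        ORDER  BY id
        LIMIT  1000
        FOR UPDATE SKIP LOCKED       -- don't wait on rows another worker holds
    )
    UPDATE jobs j
    SET    session_id = $1
    FROM   claimed c
    WHERE  j.id = c.id;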
How can I efficiently paginate the results of a complex SQL query?
I have a fairly complex SQL query which first fetches some data into a CTE and then performs several self-joins on the CTE in order to compute a value. Here’s an abbreviated example, with some complexities of our application simplified: The query is auto-generated and can scale to a complex computation over the values of potentially tens of devices. For
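One approach worth sketching here (table and column names are invented; the real CTE is not shown): materialize the expensive result once, then page through the stored rows with keyset pagination instead of OFFSET:

    -- Compute the expensive result once per report/session.
    CREATE TEMPORARY TABLE report AS
    SELECT device_id, computed_value
    FROM   expensive_query;               -- placeholder for the real CTE query

    CREATE INDEX ON report (computed_value, device_id);

    -- Page N+1: seek past the last row of page N instead of using OFFSET.
    SELECT device_id, computed_value
    FROM   report
    WHERE  (computed_value, device_id) > ($1, $2)
    ORDER  BY computed_value, device_id
    LIMIT  50;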
How to create an index on records for the last 90 days in Postgres (making now() immutable)
I have a case where, due to speed issues, I only want to create the index on records for the last 90 days. When I try to create an index like this: create index if not exists …
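The error comes from the fact that a partial-index predicate may only use immutable expressions, and now() is merely stable. A common workaround, sketched below with assumed names and dates, is a literal cutoff that gets recreated on a schedule; queries then repeat that literal so the planner can prove the index applies:

    -- Recreate periodically (e.g. monthly) with a newer cutoff:
    CREATE INDEX IF NOT EXISTS events_recent_idx
        ON events (created_at)
        WHERE created_at > DATE '2024-01-01';

    -- Queries repeat the index's literal alongside the condition you actually want:
    SELECT *
    FROM   events
    WHERE  created_at > DATE '2024-01-01'            -- lets the planner pick the partial index
    AND    created_at > now() - interval '90 days';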
Window functions filter through current row
This is a follow-up to this question, where my query was improved to use window functions instead of aggregates inside a LATERAL join. While the query is now much faster, I’ve found that the results are not correct. I need to perform computations on x-year trailing time frames. For example, price_to_maximum_earnings is computed per row by getting max(earnings) over
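For context, a trailing time frame of this kind is usually expressed with a RANGE frame (PostgreSQL 11+); a sketch with assumed column names:

    SELECT ticker,
           quote_date,
           price / max(earnings) OVER (
               PARTITION BY ticker
               ORDER BY quote_date
               RANGE BETWEEN interval '1 year' PRECEDING AND CURRENT ROW
           ) AS price_to_maximum_earnings
    FROM   quotes;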
Storing ‘Rank’ for Contests in Postgres
I’m trying to determine if there is a “low cost” optimization for the following query. We’ve implemented a system whereby ‘tickets’ earn ‘points’ and thus can be ranked. In order to support analytical queries, we store the rank of every ticket (tickets can be tied) with the ticket. I’ve found that, at scale, updating this rank is very slow.
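A sketch of the bulk rank refresh, assuming a tickets(id, points, rank) table: a single UPDATE … FROM over a window function is normally far cheaper than recomputing ranks row by row, and rank() keeps tied tickets tied:

    UPDATE tickets t
    SET    rank = r.new_rank
    FROM  (
        SELECT id, rank() OVER (ORDER BY points DESC) AS new_rank
        FROM   tickets
    ) r
    WHERE  t.id = r.id
    AND    t.rank IS DISTINCT FROM r.new_rank;   -- skip rows whose rank is unchanged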
Optimize performance for queries on recent rows of a large table
I have a large table:

    CREATE TABLE "orders" (
        "id"        serial NOT NULL,
        "person_id" int4,
        "created"   int4,
        CONSTRAINT "orders_pkey" PRIMARY KEY ("id")
    );

90% of all requests are about orders from the …
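Given that most requests touch only recent orders, a partial multicolumn index is the usual answer; a sketch follows (the cutoff value is illustrative and would be moved forward each time the index is recreated):

    CREATE INDEX orders_recent_idx
        ON orders (person_id, created)
        WHERE created > 1704067200;      -- illustrative unix-epoch cutoff

    -- A typical query that can use it, as long as it repeats a cutoff at least
    -- as new as the index predicate:
    SELECT *
    FROM   orders
    WHERE  person_id = $1
    AND    created > 1704067200;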