I want to get the conversion rate in PostgreSQL. My data looks like this:

    id  count  type  converted
    1   30     A     true
    2   20     A     false
    3   13     B     false
    4   7      B     true

As a first step, I would like to get the sum of the count field for each type. I tried different variations of SUM()
Tag: postgresql
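A minimal sketch of that first step, assuming the table is named conversions (a hypothetical name) with the columns shown above; the second query extends it to a per-type conversion rate:

    -- Sum of count per type (table name "conversions" is assumed)
    SELECT type,
           SUM(count) AS total_count
    FROM conversions
    GROUP BY type;

    -- Conversion rate per type: converted counts divided by total counts
    SELECT type,
           SUM(count) FILTER (WHERE converted) * 1.0 / SUM(count) AS conversion_rate
    FROM conversions
    GROUP BY type;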
Select distinct very slow
I have a table where I store rows with external ids. Quite often I need to select the latest timestamp for given external ids, and it is now a bottleneck for my app. Query: Explain: What could I do to make this query faster? Or should I use a completely different query? UPDATE: Added a new query plan as asked by @jahrl. It looks
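One common pattern for "latest row per key", sketched under the assumption of a hypothetical table external_events(external_id, created_at): a composite index ordered to match the query, plus DISTINCT ON.

    -- Composite index so the latest row per external_id can be read directly
    CREATE INDEX IF NOT EXISTS idx_external_events_extid_created
        ON external_events (external_id, created_at DESC);

    -- Latest timestamp for each requested external id
    SELECT DISTINCT ON (external_id) external_id, created_at
    FROM external_events
    WHERE external_id IN (42, 43, 44)
    ORDER BY external_id, created_at DESC;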
Handle pg_error on generated columns
I have a table that holds some PostGIS-related data. These data are generated automatically on INSERT or UPDATE. Sometimes the data provided in the polygon column does not fit the generation function and causes an error. I want to handle this error and set a default value when it fails. Last resort options: creating Postgres functions
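Generated columns themselves offer no error handling, so one workaround is to compute the value in a BEFORE trigger and catch the failure there. A sketch, assuming a hypothetical table areas with a polygon column geom and a derived column area:

    -- Hypothetical trigger: compute the derived value, fall back to a default on error
    CREATE OR REPLACE FUNCTION set_area_safe() RETURNS trigger AS $$
    BEGIN
        BEGIN
            NEW.area := ST_Area(NEW.geom);   -- the generation logic that may fail
        EXCEPTION WHEN OTHERS THEN
            NEW.area := 0;                   -- default value when generation fails
        END;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_set_area_safe
        BEFORE INSERT OR UPDATE ON areas
        FOR EACH ROW EXECUTE FUNCTION set_area_safe();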
PostgreSQL: combine 2 SELECTs from 2 different tables
Hello, I need to run a query with two SELECT statements: one with an AVG calculation grouped by name, and another that looks up that name in a second table and fetches some columns, then a JOIN to merge both so that only a single "name" column is left. The first table is this: The second table is like
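A sketch of the usual shape for this, with hypothetical table and column names (orders holding the prices to average, customers holding the extra columns, both joined on name):

    -- Average per name from the first table, joined to the second on name
    SELECT c.name,
           o.avg_price
    FROM (
        SELECT name, AVG(price) AS avg_price
        FROM orders
        GROUP BY name
    ) AS o
    JOIN customers AS c ON c.name = o.name;   -- add any other customers columns here

Because name is selected only once from the joined result, the output keeps a single "name" column.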
sql query with avg price per day and group by day
I have this table and this query: I need to get only one price, with AVG I think, grouped by day, something like this. Thanks for your help
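A minimal sketch, assuming a hypothetical table prices with a timestamp column created_at and a numeric column price:

    -- One averaged price per calendar day
    SELECT date_trunc('day', created_at)::date AS day,
           AVG(price) AS avg_price
    FROM prices
    GROUP BY 1
    ORDER BY 1;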
How do I parameterize the table & column in a Postgres custom function, selecting the PK if the value exists, otherwise inserting it and returning the PK anyway?
Trying to do what I specified in the title: I already have the upsert functionality working; however, when I try to parameterize it, I'm out of my depth and can't debug it. My query: Now when I try to use the function, I get: What puzzles me about this is that the same syntax works fine in an unparameterized
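Identifiers (table and column names) cannot be bound as ordinary parameters; the usual approach is dynamic SQL built with format() and %I, with the value passed through USING. A sketch, assuming the primary key column is named id and the value is text (both assumptions):

    -- Sketch: dynamic SQL with %I for identifiers and USING for the value
    CREATE OR REPLACE FUNCTION get_or_create_id(p_table text, p_column text, p_value text)
    RETURNS integer AS $$
    DECLARE
        v_id integer;
    BEGIN
        EXECUTE format('SELECT id FROM %I WHERE %I = $1', p_table, p_column)
            INTO v_id
            USING p_value;

        IF v_id IS NULL THEN
            EXECUTE format('INSERT INTO %I (%I) VALUES ($1) RETURNING id', p_table, p_column)
                INTO v_id
                USING p_value;
        END IF;

        RETURN v_id;
    END;
    $$ LANGUAGE plpgsql;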
Postgres function to return number of rows deleted per schema
I’m trying to adapt a Postgres stored procedure into a function in order to provide some feedback to the caller. The procedure conditionally deletes rows in specific schemas, and I want the function to do the same, but also return the number of rows that were deleted for each schema. The original stored procedure is: My current transform into a
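One way to report per-schema counts is a function that RETURNS TABLE and uses GET DIAGNOSTICS after each DELETE. A sketch, with a hypothetical schema list and a hypothetical table name events in each schema:

    -- Sketch: delete per schema and report the row count for each
    CREATE OR REPLACE FUNCTION purge_old_rows(p_cutoff timestamptz)
    RETURNS TABLE (schema_name text, rows_deleted bigint) AS $$
    DECLARE
        s text;
    BEGIN
        FOREACH s IN ARRAY ARRAY['schema_a', 'schema_b'] LOOP   -- hypothetical schema list
            EXECUTE format('DELETE FROM %I.events WHERE created_at < $1', s)
                USING p_cutoff;
            GET DIAGNOSTICS rows_deleted = ROW_COUNT;            -- rows affected by the DELETE
            schema_name := s;
            RETURN NEXT;                                          -- emit one result row per schema
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;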
Ensuring no dupe ids in query return
So for the following schema: You can see that there are 3 activities and each contact has two. What I am searching for is the number of activities per account in the previous two months. So I have the following query. This returns: However, this is incorrect. There are only 3 activities; it's just that each contact sees activity 1.
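When the join through contacts makes the same activity appear once per contact, the usual fix is to count distinct activity ids. A sketch with hypothetical table names accounts, contacts, and activities:

    -- Count each activity once per account, even if several contacts see it
    SELECT acc.id AS account_id,
           COUNT(DISTINCT act.id) AS activity_count
    FROM accounts    acc
    JOIN contacts    con ON con.account_id = acc.id
    JOIN activities  act ON act.contact_id = con.id
    WHERE act.created_at >= date_trunc('month', now()) - interval '2 months'
      AND act.created_at <  date_trunc('month', now())
    GROUP BY acc.id;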
SQL query to get values by the calendar month
I have the following schema: and I want to get a return object that has the industry id along with the number of items, but based on the calendar month for the previous two months, not including the current one. So I want the number of items in October and the number in November. What I have, which is not working
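A sketch of one way to bucket by calendar month while excluding the current month, assuming a hypothetical table items with columns industry_id and created_at:

    -- Items per industry per calendar month, previous two full months only
    SELECT i.industry_id,
           date_trunc('month', i.created_at) AS month,
           COUNT(*) AS item_count
    FROM items i
    WHERE i.created_at >= date_trunc('month', now()) - interval '2 months'
      AND i.created_at <  date_trunc('month', now())
    GROUP BY i.industry_id, date_trunc('month', i.created_at)
    ORDER BY month;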
Optimize nested inner joins
The following TypeORM-generated SQL query takes over 11 seconds to complete: Given the following database indexes: It feels like some of the left joins could be indexed, but I am unsure how to do it properly. Besides indexing, is there anything I could do from TypeORM (or elsewhere) to really speed up the request? Here is
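Without the actual plan, only a general sketch is possible: a common first step is adding indexes on the foreign-key columns the joins use and on any filtered columns, then comparing plans. Table and column names below are purely hypothetical:

    -- Hypothetical examples: index the columns the joins and filters actually use
    CREATE INDEX IF NOT EXISTS idx_order_item_order_id ON order_item (order_id);
    CREATE INDEX IF NOT EXISTS idx_orders_customer_id  ON orders (customer_id);
    -- Re-run the query under EXPLAIN (ANALYZE, BUFFERS) to confirm the new indexes are used.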