I have a use case where I need to set a value when any row in a group meets a condition in a PostgreSQL GROUP BY query. An example follows. The table contains the following data. I want to run a SQL query that groups by id. If any rows with the same id have type 'A', then
Tag: postgresql
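One common way to do this in PostgreSQL, sketched under the assumption of a table t(id, type) since the excerpt is truncated, is to aggregate with bool_or, which becomes true for a group as soon as any row satisfies the condition:

```sql
-- Hypothetical table t(id int, type text); the names are not from the question.
SELECT
    id,
    bool_or(type = 'A') AS has_type_a,     -- true if any row in the group has type 'A'
    CASE WHEN bool_or(type = 'A') THEN 'A' ELSE min(type) END AS resolved_type
FROM t
GROUP BY id;
```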
Aggregation level is off (Postgresql)
I have order data for 2 customers and their orders, and I am trying to calculate the sum of the price for every customer for that specific order, only for product N. Table: This is my query: For some reason I do not understand, it gives me several rows for the same customer. I am trying to get only
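Extra rows per customer usually mean the query groups by (or selects) a column that varies within the order, such as the product or a line-item id. A minimal sketch, assuming a table orders(customer_id, order_id, product, price) since the excerpt omits the schema:

```sql
-- Group only by the keys you want one row per, and restrict the sum
-- to product 'N' with an aggregate FILTER clause.
SELECT
    customer_id,
    order_id,
    sum(price) FILTER (WHERE product = 'N') AS total_price_for_n
FROM orders
GROUP BY customer_id, order_id;
```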
Replace NULL values per partition
I want to fill NULL values in the device column for each session_id with an associated non-NULL value. How can I achieve that? Here is the sample data: +------------+-------+---------+ | session_id | step …
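Assuming each session has at most one distinct non-NULL device (the sample is truncated, so this is an assumption), a window aggregate over the partition can copy that value onto the NULL rows:

```sql
-- Hypothetical table events(session_id, step, device).
SELECT
    session_id,
    step,
    COALESCE(device, max(device) OVER (PARTITION BY session_id)) AS device
FROM events;
```

If the NULLs should instead be filled from the nearest preceding non-NULL row, a gaps-and-islands style query is needed rather than a plain partition-wide max.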
Lookups on a single PostgreSQL table suddenly extremely slow after large update
I have a messages table with a few million records in it. My Rails app includes a query on most pages to count the number of unread messages to show the user. This query, and all queries on the messages table, is unchanged and was working fine until yesterday. Yesterday, I created a new messages column and ran
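A frequent cause of this pattern (fast before, slow right after adding a column and mass-updating) is stale planner statistics plus table bloat from the rewritten rows. A hedged sketch of the usual first checks; the messages table name comes from the question, but the WHERE columns below are assumptions:

```sql
-- Refresh planner statistics after the mass update.
ANALYZE messages;

-- Reclaim the dead tuples left behind by updating every row.
VACUUM (ANALYZE, VERBOSE) messages;

-- Verify the unread-count query still hits an index.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM messages
WHERE recipient_id = 123 AND read_at IS NULL;   -- columns are assumptions
```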
Postgres stored procedure(function) confusion
I’m pretty new to Postgres and SQL as a whole and could use a bit of help with a function here. I have this table: What I need is a function that takes a plate, a start date, and an end date as input and throws an error if the table contains any row with the same plate where the rental period
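A possible shape for such a function, sketched with an assumed table rentals(plate, rental_start, rental_end) since the excerpt cuts off before the schema: check for an overlapping period with OVERLAPS and raise an exception if one exists.

```sql
CREATE OR REPLACE FUNCTION check_rental(p_plate text, p_start date, p_end date)
RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    IF EXISTS (
        SELECT 1
        FROM rentals r                       -- table and columns are assumptions
        WHERE r.plate = p_plate
          AND (r.rental_start, r.rental_end) OVERLAPS (p_start, p_end)
    ) THEN
        RAISE EXCEPTION 'plate % is already rented between % and %',
            p_plate, p_start, p_end;
    END IF;
END;
$$;
```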
How do I use a prior query’s result when subtracting an interval in Postgres?
I have the following code: How do I use example_var in the interval? I'd like to do something like - interval CONCAT(example_var, 'day') so that I could change what example_var is equal to and therefore change the length of the interval, but that isn't working. Answer: If you want to create an interval from a constant value, you can use
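Since the answer excerpt is cut off, here is a minimal sketch of the two usual options, shown in a PL/pgSQL block because example_var suggests a variable: multiply a unit interval by the variable, or build the interval with make_interval. The variable's value and the surrounding block are assumptions.

```sql
DO $$
DECLARE
    example_var int := 7;          -- value is an assumption
    cutoff timestamptz;
BEGIN
    cutoff := now() - example_var * interval '1 day';        -- multiply a unit interval
    cutoff := now() - make_interval(days => example_var);    -- or build it explicitly
    RAISE NOTICE 'cutoff = %', cutoff;
END;
$$;
```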
Postgresql – access/use joined subquery result in another joined subquery
I have a database with tables for equipment we service (table e, field e_id), contracts on the equipment (table c, fields c_id, e_id, c_start, c_end), and maintenance we have performed in the past (table m,…
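The excerpt stops before the query, but the usual tool for letting one joined subquery use the output of another is a LATERAL join. A sketch over the tables named above (e, c, m); the maintenance columns and the "latest contract" logic are assumptions:

```sql
SELECT e.e_id, cur.c_start, cur.c_end, maint.visits
FROM e
LEFT JOIN LATERAL (
    SELECT c.c_start, c.c_end
    FROM c
    WHERE c.e_id = e.e_id
    ORDER BY c.c_end DESC
    LIMIT 1
) cur ON true
LEFT JOIN LATERAL (                     -- may reference cur.* thanks to LATERAL
    SELECT count(*) AS visits
    FROM m
    WHERE m.e_id = e.e_id
      AND m.m_date BETWEEN cur.c_start AND cur.c_end   -- m_date is an assumption
) maint ON true;
```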
PostgreSQL: deadlock without a transaction
I have a route (in a Node.js app) that inserts and updates some data in a PostgreSQL database (version 13). In pseudo-code, here are all the queries that are run in sequential order: On some instances of the app that do not have much traffic and that write to their own table, I get many deadlocks. I don't understand why, since there is
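Even single statements outside an explicit transaction can deadlock when two requests update the same set of rows in different orders. A common mitigation, sketched with a hypothetical items table, is to acquire the row locks in a deterministic order before updating:

```sql
BEGIN;
-- Lock the rows in ascending id order so concurrent requests queue up
-- instead of waiting on each other in a cycle.
SELECT id FROM items
WHERE id IN (42, 7, 19)
ORDER BY id
FOR UPDATE;

UPDATE items SET counter = counter + 1 WHERE id IN (42, 7, 19);
COMMIT;
```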
Assuming there is enough disc space, can I create an index in a live production database without risk of downtime?
In a PostgreSQL database, assuming there is enough disc space, can I create an index in a live production database without risk of downtime? In other words, are there locks, possible crashes, data loss, or anything else to worry about when creating an index? To be more precise, it's an index on a JSONB sub-property in a 1Gb
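The usual answer is CREATE INDEX CONCURRENTLY, which builds the index without blocking writes, at the cost of a slower build and the possibility of being left with an INVALID index to drop and retry; it cannot run inside a transaction block. A sketch with assumed table and column names:

```sql
-- Expression index on a JSONB sub-property; docs, payload and customer_id are assumptions.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_docs_payload_customer
    ON docs ((payload ->> 'customer_id'));
```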
SQL query with multiple OR conditions inside AND returns null or empty
I'm trying to get the members who match my filter criteria. I am passing multiple OR conditions inside round brackets, and each bracketed group is ANDed with the next; however, the query does not work and returns an empty table, but whenever I run the query with INTERSECT
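Since the query itself is cut off, the sketch below only illustrates the usual pitfalls with hypothetical columns: each OR group must be fully parenthesised before being ANDed with the next, and rows where a compared column is NULL are silently excluded unless matched explicitly.

```sql
SELECT *
FROM members m                                   -- table and columns are assumptions
WHERE (m.status = 'active' OR m.status = 'trial')
  AND (m.country = 'US' OR m.country IS NULL)    -- NULLs need an explicit check
  AND m.plan IN ('basic', 'pro');
```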