Given the table below in Postgres:

id | some_col
---+---------
 1 | a
 1 | b
 2 | a
 3 | a

I want output of each id together with true (if at least one row with that id is present in the table) or false (if no rows with that id are found in the table). For example, for where id in (1, 2, 3, 4, 5):
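One common shape for this is a list of candidate ids probed with EXISTS. The question's table and column names (t is an assumed name; the question didn't give one) are used below; the sketch runs on SQLite via Python for illustration, where the Postgres true/false boolean renders as 1/0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, some_col TEXT);
INSERT INTO t VALUES (1,'a'),(1,'b'),(2,'a'),(3,'a');
""")

# One EXISTS probe per requested id; ids absent from t come back as false (0).
rows = conn.execute("""
WITH wanted(id) AS (VALUES (1),(2),(3),(4),(5))
SELECT wanted.id,
       EXISTS (SELECT 1 FROM t WHERE t.id = wanted.id) AS present
FROM wanted
ORDER BY wanted.id
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 1), (4, 0), (5, 0)]
```

In Postgres the same query works unchanged, with `present` typed as a proper boolean; an index on t(id) lets each EXISTS probe stop at the first match.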
Tag: query-optimization
SELECT DISTINCT very slow
I have a table where I store rows with external ids. Quite often I need to select the latest timestamp for given external ids, and it is now a bottleneck for my app. Query: Explain: What could I do to make this query faster? Or should I use a completely different query? UPDATE: Added a new query plan as asked by @jahrl. It looks
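Without the original query, one hedged sketch of the "latest timestamp per external id" pattern: a composite index on (external_id, timestamp) lets each group's maximum be read straight off the index instead of scanning and deduplicating. The table and column names below are assumptions, demonstrated on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (external_id INTEGER, ts TEXT);
-- Composite index: each group's MAX(ts) is answered from the index alone.
CREATE INDEX idx_events_ext_ts ON events (external_id, ts DESC);
INSERT INTO events VALUES
  (10, '2024-01-01'), (10, '2024-03-05'),
  (20, '2024-02-02'), (30, '2024-01-15');
""")

rows = conn.execute("""
SELECT external_id, MAX(ts) AS latest
FROM events
WHERE external_id IN (10, 20)
GROUP BY external_id
ORDER BY external_id
""").fetchall()
print(rows)  # [(10, '2024-03-05'), (20, '2024-02-02')]
```

In Postgres specifically, `SELECT DISTINCT ON (external_id) ... ORDER BY external_id, ts DESC` with the same index is the idiomatic alternative to DISTINCT over the whole row.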
How to optimize datetime comparisons in mysql in where clause
CONTEXT I have a large table full of “documents” that are updated by outside sources. When I notice the updates are more recent than my last touchpoint, I need to address these documents. I’m having some serious performance issues, though. EXAMPLE CODE gets me back 212,494,397 documents in 1 min 15.24 sec, which is approximately the actual query gets me
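The usual culprit in slow datetime WHERE clauses is a non-sargable predicate: wrapping the indexed column in a function (e.g. `DATE(updated_at)`) forces a full scan, whereas comparing the raw column to a precomputed boundary lets the index do range pruning. A minimal sketch, assuming a documents table with an updated_at column (names not given in the question), shown on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE documents (id INTEGER PRIMARY KEY, updated_at TEXT);
CREATE INDEX idx_docs_updated ON documents (updated_at);
INSERT INTO documents VALUES
  (1, '2024-01-01 09:00:00'),
  (2, '2024-06-01 09:00:00'),
  (3, '2024-06-15 09:00:00');
""")

# Non-sargable (defeats the index):  WHERE DATE(updated_at) >= '2024-06-01'
# Sargable: compare the raw column to a constant boundary instead.
rows = conn.execute("""
SELECT id FROM documents
WHERE updated_at >= '2024-06-01 00:00:00'
ORDER BY id
""").fetchall()
print(rows)  # [(2,), (3,)]
```

The same principle applies in MySQL: keep the column bare on one side of the comparison and move all arithmetic to the constant side.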
Calculating totals and percentages for each row, in a time boxed window, for a relation
OK, so I’ve got two tables: jobs and job runs. I’m using Postgres. I want to look at two periods: 7 days ago until now, and 14 days ago to 7 days ago. For each job, I want the total number of runs and the percentage of successful and unsuccessful runs in each period. I’ve cooked up this
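This kind of "per-group stats over two windows" usually reduces to one pass with conditional aggregation, rather than two separate queries joined together. The schema below (job_runs with status and started_at) and the fixed reference date of 2024-06-14 are assumptions for reproducibility; in Postgres the CASE expressions can be written more tersely with FILTER. Sketched on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE job_runs (job_id INTEGER, status TEXT, started_at TEXT);
INSERT INTO job_runs VALUES
  (1, 'success', '2024-06-10'),  -- within last 7 days of ref date 2024-06-14
  (1, 'failure', '2024-06-12'),
  (1, 'success', '2024-06-03');  -- 8-14 days before the ref date
""")

# One scan; each CASE counts only the rows belonging to its period.
rows = conn.execute("""
SELECT job_id,
       COUNT(CASE WHEN started_at >= '2024-06-07' THEN 1 END)           AS runs_p1,
       100.0 * COUNT(CASE WHEN started_at >= '2024-06-07'
                           AND status = 'success' THEN 1 END)
             / COUNT(CASE WHEN started_at >= '2024-06-07' THEN 1 END)   AS success_pct_p1,
       COUNT(CASE WHEN started_at <  '2024-06-07' THEN 1 END)           AS runs_p2
FROM job_runs
GROUP BY job_id
""").fetchall()
print(rows)  # [(1, 2, 50.0, 1)]
```

In production the '2024-06-07' boundary would be `now() - interval '7 days'`; it is hard-coded here only so the example is deterministic.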
How to improve performance of multi-database queries in SQL Server where one database is synchronized and the other is not
I have two databases. One, which I’ll call a, is a read-only synchronized database that is part of an availability group; the other is a plain ol’ database, which I’ll call b, on the same server as the synchronized database. I need to write views in b that read from a, but they perform very poorly in this environment. For example,
Slow Querying DB
I am currently optimising a system with many connected tables. The part I am working on right now is displaying the orders table. The problem is that this table also has many relations (around 10) that I am querying, and querying that many relations is the slow part. I have been using Eloquent + with() methods for eager
Optimizing SQL query – finding a group within a group
I have a working query and am looking for ideas to optimize it. Query explanation: within each ID group (visitor_id), look for the row where c_id != 0; from that row on, show all consecutive rows within that ID group. Answer You have a common subexpression, so it can be moved to a CTE and run just once, like but the
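The "everything from the first matching row onward, per group" shape can often be expressed with a single windowed running count instead of a correlated lookup: once the running count of non-zero c_id rows reaches 1, every subsequent row in that partition qualifies. A sketch with assumed table/column names (the question only gives visitor_id and c_id; seq stands in for whatever defines row order), on SQLite via Python (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (visitor_id INTEGER, seq INTEGER, c_id INTEGER);
INSERT INTO visits VALUES
  (1, 1, 0), (1, 2, 7), (1, 3, 0), (1, 4, 0),
  (2, 1, 0), (2, 2, 0);
""")

# Running count of c_id != 0 per visitor; rows at or after the first hit
# have hits >= 1. The CTE is computed once, not once per outer row.
rows = conn.execute("""
WITH flagged AS (
  SELECT visitor_id, seq, c_id,
         SUM(c_id <> 0) OVER (PARTITION BY visitor_id ORDER BY seq) AS hits
  FROM visits
)
SELECT visitor_id, seq, c_id
FROM flagged
WHERE hits >= 1
ORDER BY visitor_id, seq
""").fetchall()
print(rows)  # [(1, 2, 7), (1, 3, 0), (1, 4, 0)]
```

Visitor 2 never has a non-zero c_id, so none of its rows appear, matching the question's intent.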
How can I replace this correlated subquery within a function call?
Given the following tables buckets points And the following query Output How can I remove the correlated subquery to improve the performance? Currently ~280,000 points × ~650 buckets ≈ ~180,000,000 loops = very slow! Basically I want to remove the correlated subquery and apply the width_bucket function only once per unique metric_id in buckets, so that the performance is improved
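The general decorrelation move is: compute the expensive per-key expression once in a derived table keyed by metric_id, then join it to points, so the work scales with the number of metrics rather than points × buckets. width_bucket itself is Postgres-only, so the sketch below uses a stand-in aggregate over assumed bucket bounds to show the join shape, on SQLite via Python; in Postgres the derived table (or a LATERAL join) would wrap the width_bucket call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE buckets (metric_id INTEGER, low REAL, high REAL);
CREATE TABLE points  (metric_id INTEGER, value REAL);
INSERT INTO buckets VALUES (1, 0.0, 10.0), (2, 0.0, 100.0);
INSERT INTO points  VALUES (1, 2.5), (1, 7.5), (2, 50.0);
""")

# The per-metric work runs once per metric_id in the derived table,
# instead of once per point as a correlated subquery would.
rows = conn.execute("""
SELECT p.metric_id, p.value, b.low, b.high
FROM points p
JOIN (SELECT metric_id, MIN(low) AS low, MAX(high) AS high
      FROM buckets GROUP BY metric_id) b
  ON b.metric_id = p.metric_id
ORDER BY p.metric_id, p.value
""").fetchall()
print(rows)  # [(1, 2.5, 0.0, 10.0), (1, 7.5, 0.0, 10.0), (2, 50.0, 0.0, 100.0)]
```

With ~650 distinct metrics, the bucketing expression then evaluates ~650 times instead of ~180,000,000.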
MariaDB Created view takes too long
I have a problem: I have a table with 6 million records in it. Every record has a dateTime column, and for my code I need the most recent 16 records in ascending order. This took too long to query directly out of the original table, so I created a view using the following query: This means that the view
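A view isn't usually needed for this: with an index on dateTime, an inner query can take the newest 16 rows via ORDER BY ... DESC LIMIT 16 (an index-backed operation that never touches the other millions of rows), and an outer query flips them back to ascending order. A sketch with an assumed records table, on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, dateTime TEXT);
CREATE INDEX idx_records_dt ON records (dateTime);
""")
conn.executemany("INSERT INTO records (dateTime) VALUES (?)",
                 [(f"2024-01-{d:02d}",) for d in range(1, 31)])

# Inner query: newest 16 straight off the index. Outer query: re-sort ASC.
rows = conn.execute("""
SELECT dateTime FROM (
  SELECT dateTime FROM records ORDER BY dateTime DESC LIMIT 16
) ORDER BY dateTime ASC
""").fetchall()
print(rows[0], rows[-1])  # ('2024-01-15',) ('2024-01-30',)
```

The same nested shape works verbatim in MariaDB; the key requirement is the index on dateTime so the DESC LIMIT scan is cheap.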
How to improve a mysql COUNT query for speed?
How can I improve this query for speed? At the moment it’s taking a couple of seconds just to load the PHP file where the query is, without even querying anything. I have an index on skillsTrends, jobtitle, and industry. Collation: utf8mb4_unicode_ci. Number of records < 1,000,000. Answer Try this covering index. It should help the performance of your query. And,
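The answer's "covering index" idea: three separate single-column indexes let MySQL use at most one of them, whereas one composite index over every column the query touches lets the COUNT be answered from the index alone, without reading table rows. The table name and column order below are assumptions (the most selective equality columns should lead), shown on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (skillsTrends TEXT, jobtitle TEXT, industry TEXT);
-- One composite index covering all referenced columns: the count is
-- answered from the index alone, never touching the table rows.
CREATE INDEX idx_cover ON jobs (industry, jobtitle, skillsTrends);
INSERT INTO jobs VALUES
  ('python', 'dev', 'tech'), ('sql', 'dba', 'tech'), ('sales', 'rep', 'retail');
""")

(n,) = conn.execute("""
SELECT COUNT(*) FROM jobs
WHERE industry = 'tech' AND jobtitle = 'dev'
""").fetchone()
print(n)  # 1
```

In MySQL, EXPLAIN showing "Using index" confirms the query is served entirely from the covering index.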