Tag: query-optimization

Sample Example – SELECT * FROM table_name FORCE INDEX (index_list) WHERE condition; Without using FORCE INDEX, MySQL's query optimiser decides
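As a minimal sketch of that syntax, assuming a hypothetical orders table and an index named idx_customer (both names are illustrative, not from the question):

    -- Hypothetical schema, for illustration only
    CREATE INDEX idx_customer ON orders (customer_id);

    -- Left alone, the optimiser picks the access path itself
    SELECT * FROM orders WHERE customer_id = 42;

    -- FORCE INDEX overrides that choice and makes it use idx_customer
    SELECT * FROM orders FORCE INDEX (idx_customer) WHERE customer_id = 42;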
MySQL – Performance issue while joining most recent
I have two tables, markets (27 records) and histories (~1.75M records, ~67K per market). I need to get every market with its most recent histories record. The solutions I tried work but are incredibly slow. Tables DDL. What I tried: 1 – Uncorrelated subquery. I started with this solution since I have used it before; it takes ~7.5s. EXPLAIN result:
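The excerpt omits the DDL, but the uncorrelated-subquery pattern it describes usually looks something like this sketch, assuming hypothetical id, market_id, and created_at columns:

    SELECT m.*, h.*
    FROM markets m
    JOIN histories h
      ON h.market_id = m.id
    JOIN (
        -- Latest history timestamp per market, computed once
        SELECT market_id, MAX(created_at) AS max_created
        FROM histories
        GROUP BY market_id
    ) latest
      ON latest.market_id = h.market_id
     AND latest.max_created = h.created_at;

With ~67K rows per market, a composite index on histories (market_id, created_at) is usually what keeps both the grouped subquery and the final join fast.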
SQL Server: Clustered index considerably slower than equivalent non-clustered index
The Setup. What I am about to describe is run on the following hardware: Disk: 6x 2TB HDD in RAID5 (with 1 redundant drive); CPU: Intel Xeon E5-2640 @ 2.4 GHz, 6 cores; RAM: 64 GB; SQL Server version: SQL Server 2016 Developer. Both SQL Server Management Studio (SSMS) and the SQL Server instance are running on this server. So
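For reference, the two index types being compared look like this in T-SQL; the table here is hypothetical:

    -- Hypothetical table, for illustration
    CREATE TABLE dbo.Events (
        EventId    INT       NOT NULL,
        OccurredAt DATETIME2 NOT NULL
    );

    -- Clustered index: physically orders the table's data pages by the key
    CREATE CLUSTERED INDEX CIX_Events_OccurredAt
        ON dbo.Events (OccurredAt);

    -- Equivalent non-clustered index: a separate B-tree with row pointers
    -- (a table can have only one clustered index, so drop it before
    -- benchmarking the two in isolation)
    CREATE NONCLUSTERED INDEX IX_Events_OccurredAt
        ON dbo.Events (OccurredAt);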
Query optimization for multiple inner joins and sub-query
I need help regarding query optimization of the below query. Since there are duplicate joins in the main query and the sub-query, is there any way to remove those joins in the sub-query? Answer: Since, as you clarified, your sub-query is almost identical to your main query, you might be able to use the window function RANK as a filter
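A sketch of the RANK-as-filter pattern the answer points to, with hypothetical table and column names since the original query is not shown:

    SELECT *
    FROM (
        SELECT o.*,
               -- Rank rows within each group; rank 1 is the row we want
               RANK() OVER (PARTITION BY o.customer_id
                            ORDER BY o.created_at DESC) AS rnk
        FROM orders o
    ) ranked
    WHERE rnk = 1;  -- filtering on the rank replaces the duplicated joins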
PostgreSQL jsonb case-insensitive query with index
I was looking for a way to make a case-insensitive query, and I found it here (postgresql jsonb case insensitive query), more precisely with a query like this: select … where upper(data::text)::jsonb @> upper('[{"city":"New York"}]')::jsonb However, I can't seem to find enough information about how to create an index to be used by such a query. … works perfectly
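One way to support that query shape, sketched against a hypothetical table t, is a GIN expression index; PostgreSQL will only consider it when the query repeats the exact same expression:

    -- Expression index over the upper-cased jsonb (table name hypothetical)
    CREATE INDEX idx_t_data_upper
        ON t
        USING gin ((upper(data::text)::jsonb));

    -- The WHERE clause must use the indexed expression verbatim
    SELECT *
    FROM t
    WHERE upper(data::text)::jsonb @> upper('[{"city":"New York"}]')::jsonb;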
How to speed up SQL query execution?
The task is to execute this SQL query: select * from x where user in (select user from x where id = '1') The subquery returns about 1000 ids, so it takes a long time. Maybe this question was already …
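One common rewrite, sketched under the assumption that x has indexes on id and user, is to turn the IN into a self-join so the id lookup happens once:

    -- Original shape, repeated for context
    SELECT * FROM x
    WHERE user IN (SELECT user FROM x WHERE id = '1');

    -- Self-join rewrite: resolve id = '1' once, then join on user.
    -- DISTINCT preserves the IN semantics if the driving side
    -- yields the same user value more than once
    SELECT DISTINCT x2.*
    FROM x x1
    JOIN x x2 ON x2.user = x1.user
    WHERE x1.id = '1';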
SQL IN in WHERE. How smart is the optimizer?
I have the following query to execute: UPDATE scenario_group SET project_id = @projectId WHERE scenario_group_id = @scenarioGroupId AND @projectId IN (SELECT …
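Since the excerpt's subquery is elided, here is an equivalent EXISTS formulation sketched against a hypothetical project table, which makes the row-independent check explicit:

    UPDATE scenario_group
    SET project_id = @projectId
    WHERE scenario_group_id = @scenarioGroupId
      -- The check does not depend on scenario_group rows, so a good
      -- optimizer can evaluate it once rather than per row
      AND EXISTS (SELECT 1
                  FROM project p   -- hypothetical table; the original is elided
                  WHERE p.project_id = @projectId);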
Indexing an SQL table by datetime that is scaling
I have a large table that gets anywhere from 1 to 3 new entries per minute. I need to be able to find records at specific times, which I can do by using a SELECT statement, but it's incredibly slow. Let's say the table looks like this: I'm trying to get data like this: I also need to get data like this:
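A sketch of the usual fix, assuming a hypothetical readings table with a recorded_at datetime column: a plain B-tree index serves both the exact-time lookup and the range scan:

    -- Index the datetime column (names hypothetical)
    CREATE INDEX idx_readings_recorded_at ON readings (recorded_at);

    -- Exact-time lookup
    SELECT * FROM readings
    WHERE recorded_at = '2021-06-01 12:00:00';

    -- Range scan over a window, served by the same index
    SELECT * FROM readings
    WHERE recorded_at >= '2021-06-01 00:00:00'
      AND recorded_at <  '2021-06-02 00:00:00';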
Optimal join from a column to the concatenation of those columns?
I have a table TableLHS with a column ObjInfo, which I would like to join to TableRHS, with columns: So the join here involves 1) dropping the leading zeroes from ObjNumber, and 2) concatenating the three columns in TableRHS together. My best shot at the join is: My current performance could use some improvement. Is there a smarter way of
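One approach worth sketching, in SQL Server syntax and with the two extra TableRHS columns named hypothetically, is to persist the concatenation as a computed column so it can be indexed instead of being rebuilt on every join:

    -- ColB and ColC are hypothetical stand-ins for the other two columns;
    -- casting through INT drops the leading zeroes, assuming ObjNumber is numeric
    ALTER TABLE TableRHS
        ADD ObjKey AS (
            CAST(CAST(ObjNumber AS INT) AS VARCHAR(20)) + ColB + ColC
        ) PERSISTED;

    CREATE INDEX IX_TableRHS_ObjKey ON TableRHS (ObjKey);

    -- The join then becomes a plain indexed equality
    SELECT *
    FROM TableLHS lhs
    JOIN TableRHS rhs ON rhs.ObjKey = lhs.ObjInfo;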
Using two single-column indexes in WHERE and ORDER BY clauses
I have googled a lot and couldn't find a clear answer to my question. Assume we have this query: SELECT * WHERE user_id = x ORDER BY date_created If we have a single-column index on user_id and another …
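A sketch with a hypothetical posts table: MySQL will generally pick only one of the two single-column indexes, whereas a composite index satisfies the filter and the sort in a single pass:

    -- The two single-column indexes from the question
    CREATE INDEX idx_user    ON posts (user_id);
    CREATE INDEX idx_created ON posts (date_created);

    -- Composite index: equality column first, then the ORDER BY column,
    -- so matching rows come out already sorted by date_created
    CREATE INDEX idx_user_created ON posts (user_id, date_created);

    SELECT * FROM posts
    WHERE user_id = 42
    ORDER BY date_created;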