I’ve been looking everywhere for how I can improve the performance of my Teradata views by choosing the right primary index for my tables. I have found multiple answers pointing to the same thing: using this query to see how the data is distributed across the AMPs: I get that I need an even distribution, but is it better to
Tag: query-optimization
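The distribution check the Teradata question above refers to is usually written with the hash functions; a minimal sketch, assuming placeholder names MyDatabase.MyTable for the table and MyPiCol for the candidate primary-index column:

    SELECT HASHAMP(HASHBUCKET(HASHROW(MyPiCol))) AS amp_number,  -- AMP each row hashes to
           COUNT(*) AS row_count                                 -- rows landing on that AMP
    FROM   MyDatabase.MyTable
    GROUP  BY 1
    ORDER  BY 2 DESC;

A large gap between the biggest and smallest row_count indicates skew; a near-flat result is the even distribution the answers are asking for.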
How to improve MySQL query speed with indexes?
I must run this query with MySQL: select requests.id, requests.id_temp, categories.id from opadithree.requests inner join opadi.request_detail_2 on substring(requests.id_sub_temp, 3) = …
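One common way to let that join use an index is to materialise SUBSTRING(id_sub_temp, 3) as an indexed generated column (MySQL 5.7+); a sketch with an assumed 32-character length, and with the right-hand side of the join left symbolic because it is truncated in the excerpt:

    ALTER TABLE opadithree.requests
      ADD COLUMN id_sub_temp_sfx VARCHAR(32)                     -- length is an assumption
          GENERATED ALWAYS AS (SUBSTRING(id_sub_temp, 3)) STORED,
      ADD INDEX idx_id_sub_temp_sfx (id_sub_temp_sfx);

    -- The join can then compare the plain indexed column instead of a function result:
    --   ... INNER JOIN opadi.request_detail_2 ON requests.id_sub_temp_sfx = <elided column>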
What is the best way to get a derived status column based on existing results
I have a table: For a test there can be multiple runs/executions. Each run has a result. Here, in the result column, 0 is fail and 1 is pass. I want to query: if all the runs for a test pass, the OverallStatus is PASS; if all the runs for a test fail, the OverallStatus is FAIL; if some of them
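With a 0/1 result column this roll-up is a single conditional aggregation; a minimal sketch, assuming a hypothetical test_runs(test_id, result) layout:

    SELECT test_id,
           CASE
             WHEN MIN(result) = 1 THEN 'PASS'      -- every run passed
             WHEN MAX(result) = 0 THEN 'FAIL'      -- every run failed
             ELSE 'PARTIAL'                        -- mixed outcomes
           END AS OverallStatus
    FROM   test_runs
    GROUP  BY test_id;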
Optimising a PostgreSQL query
I have this query, which is rather slow for my liking: EXPLAIN ANALYSE output Is there an index I can put on to speed this up (bearing in mind that the values for the ORDER BY will be dynamic)? I was thinking of a partial index on where bust, figure, age, hair, ethnicity are not null and status = ‘online’, but then not
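The partial index being considered is easy to express; a sketch, assuming the table is called profiles (the name is not given in the excerpt):

    CREATE INDEX idx_profiles_online
        ON profiles (bust, figure, age, hair, ethnicity)
     WHERE status = 'online'
       AND bust IS NOT NULL
       AND figure IS NOT NULL
       AND age IS NOT NULL
       AND hair IS NOT NULL
       AND ethnicity IS NOT NULL;

Because the ORDER BY is dynamic, no single column order inside the index will serve every sort; the partial index mainly narrows the candidate rows to the online, fully populated ones.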
MySQL DATEDIFF function vs. comparing with INTERVAL DAY
What is the difference between the DATEDIFF function and subtracting an INTERVAL DAY directly? SELECT * FROM table WHERE DATEDIFF(CURDATE(), publish_date) = …
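The practical difference is sargability: wrapping the column in DATEDIFF() hides it from an index on publish_date, while a bare-column range does not. A sketch of the two equivalent forms (the table name is a placeholder):

    -- Function applied to the column: the index on publish_date cannot be range-scanned
    SELECT * FROM articles
    WHERE  DATEDIFF(CURDATE(), publish_date) = 7;

    -- Equivalent half-open range on the bare column: index-friendly
    SELECT * FROM articles
    WHERE  publish_date >= CURDATE() - INTERVAL 7 DAY
      AND  publish_date <  CURDATE() - INTERVAL 6 DAY;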
Optimizing GROUP BY + COUNT DISTINCT on unnested jsonb column
I am trying to optimize a query in Postgres, without success. Here is my table: I have indexes on the id and meta columns: There are 62k rows in this table. The request I’m trying to optimize is this one: In this query, meta is a dict like this one: I want to get the full list of key / value
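The usual shape for counting distinct values per key of a jsonb column is a lateral jsonb_each_text(); a sketch, assuming a hypothetical table name requests for the table with the id and meta columns:

    SELECT kv.key,
           COUNT(DISTINCT kv.value) AS distinct_values
    FROM   requests r
    CROSS  JOIN LATERAL jsonb_each_text(r.meta) AS kv(key, value)   -- one row per key/value pair
    GROUP  BY kv.key
    ORDER  BY kv.key;

A GIN index on meta is of little help here, because every row has to be unnested regardless; the cost sits mostly in the expansion and the COUNT(DISTINCT) itself.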
MySQL query with date function running slow
I found something weird while executing a query today and I want to know how this happens. Below is my query: this query takes 2-5 seconds while searching for the data. Now I made a small change in the query, as follows: In this case the query takes 2-3 minutes. Here the testing_date column data is in DATETIME format, for example: 2020-06-01 00:00:00 Here
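The exact queries were dropped from the excerpt, but the usual cause of this pattern is a predicate that wraps testing_date in a date function, which forces a full scan; a sketch of the non-sargable form and its range rewrite (the table name is a placeholder):

    -- Wrapping the column defeats an index on testing_date
    SELECT * FROM test_data
    WHERE  DATE(testing_date) = '2020-06-01';

    -- Half-open range on the raw DATETIME column keeps the index usable
    SELECT * FROM test_data
    WHERE  testing_date >= '2020-06-01'
      AND  testing_date <  '2020-06-02';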
SQL query with NOT EXISTS very slow
I’m trying to optimize an SQL query as it is slow, and it gets slower when the query result is large. There are indexes on the concerned fields and the tables are quite big. Answer As a starter, this condition: Should be rewritten as: This is functionally equivalent, and not using date functions on the column being filtered gives the database a chance to
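The condition and its rewrite were cut from the excerpt, but the pattern the answer describes looks like this (column name and threshold are hypothetical):

    -- Before: a date function applied to the column blocks an index range scan
    WHERE DATEDIFF(CURDATE(), t.created_at) > 30

    -- After: the bare column compared against a constant expression (sargable)
    WHERE t.created_at < CURDATE() - INTERVAL 30 DAY

For the NOT EXISTS part, the correlated column inside the subquery should also be indexed so that each probe is a cheap point lookup.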
Get items from next week
I was starting to do a query such as: The problem with this is that items from previous years will also show up here. What would be the best way to do this query? I’m hoping I can still use an index on the date_inserted field, which is why I’m asking this here. Answer I would
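Keeping the filter as a range over date_inserted, rather than comparing week numbers alone (which is presumably why earlier years leak in), preserves index usage; a sketch in MySQL syntax, assuming weeks run Monday to Sunday and a placeholder table name:

    SELECT *
    FROM   items
    WHERE  date_inserted >= CURDATE() + INTERVAL (7  - WEEKDAY(CURDATE())) DAY   -- next Monday
      AND  date_inserted <  CURDATE() + INTERVAL (14 - WEEKDAY(CURDATE())) DAY;  -- the Monday after

Both bounds are constant for the duration of the statement, so an ordinary index on date_inserted can be range-scanned.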
SQL: Select with GROUP BY, get data from the row with the max(date)
I have two tables in the database: Product and ProductVersion; each product can have n ProductVersions. ProductVersion has these fields (Id, name, origin, date, provider). I want a query where I get …
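A common shape for "latest version per product" is a correlated MAX(); a sketch, assuming Product has an Id column and ProductVersion has a productId foreign key (neither is confirmed in the excerpt):

    SELECT p.Id AS productId,
           pv.name, pv.origin, pv.date, pv.provider
    FROM   Product p
    JOIN   ProductVersion pv
           ON pv.productId = p.Id
    WHERE  pv.date = (SELECT MAX(pv2.date)              -- keep only the newest version
                      FROM   ProductVersion pv2
                      WHERE  pv2.productId = pv.productId);

On databases with window functions, ROW_NUMBER() OVER (PARTITION BY productId ORDER BY date DESC) filtered to 1 does the same job and avoids returning duplicates when two versions share the same latest date.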