I am inserting parent records and child records at the same time in a stored procedure. Rather than have outside code make nested calls to create each parent and then each child of that parent (which is even slower than my current approach), I am giving the SQL a comma-separated list of child types that I put into a
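A minimal sketch of the pattern being described, assuming SQL Server 2016+; the Parent/Child tables, the @ChildTypes parameter, and the use of STRING_SPLIT are illustrative assumptions rather than the asker's actual code:

```sql
-- Hypothetical schema: Parent(parent_id IDENTITY, name), Child(child_id, parent_id, child_type)
CREATE PROCEDURE dbo.InsertParentWithChildren
    @ParentName nvarchar(100),
    @ChildTypes nvarchar(max)          -- e.g. 'A,B,C' (comma-separated child types)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @NewParentId int;

    -- Insert the parent once and capture its generated key
    INSERT INTO dbo.Parent (name)
    VALUES (@ParentName);

    SET @NewParentId = SCOPE_IDENTITY();

    -- Split the delimited list and insert one child row per type in a single statement
    INSERT INTO dbo.Child (parent_id, child_type)
    SELECT @NewParentId, LTRIM(value)
    FROM STRING_SPLIT(@ChildTypes, ',');
END;
```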
Tag: query-optimization
Better way to handle adding a new table to a query that unions tables with the same column names but different table names and data
I want to rewrite the query to get the data from all the tables. Currently I have 12 tables that contain exactly the same column names but have different table names and content. To get all the records I currently union them like below. Here I have given an example of the tables and how I am using them; as you can see
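One common way to keep this maintainable is to wrap the combined query in a view, so adding a new table only touches one definition. A minimal sketch, assuming hypothetical table names (sales_2021, sales_2022, …) and that the 12 tables really do share identical column definitions:

```sql
-- One view hides the union; callers and new tables only touch this definition.
CREATE VIEW all_sales AS
    SELECT id, item, amount, created_at, 'sales_2021' AS source_table FROM sales_2021
    UNION ALL
    SELECT id, item, amount, created_at, 'sales_2022' AS source_table FROM sales_2022
    UNION ALL
    -- ... one UNION ALL branch per remaining table ...
    SELECT id, item, amount, created_at, 'sales_2032' AS source_table FROM sales_2032;

-- Callers now query a single object:
SELECT * FROM all_sales WHERE created_at >= '2023-01-01';
```

UNION ALL rather than plain UNION also avoids the duplicate-elimination step, which usually matters once the underlying tables get large.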
Efficiently select the latest row for each group in a very large table?
I have (for example’s sake) a table Users (user_id, status, timestamp, …). I also have another table SpecialUsers (user_id, …). I need to show each special user’s latest status. The problem is that the Users table is VERY, VERY LARGE (more than 50 billion rows). Most of the solutions in, for instance, this question just hang or get “disk full”
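One approach that tends to survive tables this size is to drive the lookup from the small SpecialUsers table and fetch only the single latest Users row per user via a lateral join, backed by an index on (user_id, timestamp DESC). A PostgreSQL-flavoured sketch under those assumptions; any column names beyond those quoted in the question are illustrative:

```sql
-- Index that lets each per-user lookup be a short backward range scan
CREATE INDEX IF NOT EXISTS users_user_id_ts_idx ON Users (user_id, timestamp DESC);

SELECT su.user_id, u.status, u.timestamp
FROM SpecialUsers AS su
CROSS JOIN LATERAL (
    SELECT status, timestamp
    FROM Users
    WHERE Users.user_id = su.user_id
    ORDER BY timestamp DESC
    LIMIT 1                      -- only the latest row per special user is read
) AS u;
```

Because the outer loop runs once per special user rather than once per Users row, the 50-billion-row table is only touched through the index.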
What are the alternative approaches to optimize this SQL query?
I am just a beginner in SQL and I am trying to optimize a SQL query, but I couldn’t get any ideas yet, so I am sharing the query with you; if anybody can help me out with this it will be very much appreciated. The tables are already indexed, but if you have any composite indexing approach or anything else, you can share it. Here
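Since the query itself is not reproduced in this excerpt, the following is only a generic illustration of what a composite (multi-column) index looks like, with entirely hypothetical table and column names: the idea is to put equality-filtered columns first and the range/sort column last, so one index can satisfy both the WHERE clause and the ORDER BY.

```sql
-- Hypothetical query shape:
--   SELECT ... FROM orders
--   WHERE customer_id = ? AND status = 'OPEN'
--   ORDER BY created_at DESC;

-- One composite index covering the filter columns and the sort column
CREATE INDEX idx_orders_customer_status_created
    ON orders (customer_id, status, created_at DESC);
```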
Speed up joins on thousands of rows
I have two tables that look something along the lines of: And a query to get data from t1. The query runs fine and takes a few ms to get data, although problems start appearing when I use joins. Here’s an example of one of my queries: which can return a few thousand rows that might look something
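The usual first check for this symptom is whether the join columns are indexed; without an index on the joined side, each outer row forces a scan of the other table. A minimal sketch, assuming the tables are t1 and t2 joined on a hypothetical t2.t1_id column (the real column names are not shown in the excerpt):

```sql
-- Index the column t2 is joined on so the join can seek instead of scan
CREATE INDEX idx_t2_t1_id ON t2 (t1_id);

SELECT t1.id, t1.name, t2.value
FROM t1
JOIN t2 ON t2.t1_id = t1.id
WHERE t1.id = 42;   -- with the index, the planner can do an indexed lookup per match
```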
SQL Server Views | Inline View Expansion Guidelines
Background: Hello all! I recently learned that in newer versions of SQL Server, the query optimizer can “expand” a SQL view and utilize inline performance benefits. This could have some drastic effects going forward on what kinds of database objects I create, and why and when I create them, depending upon when this enhanced performance is achieved and when it
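For a plain (non-indexed) view, “expansion” means the optimizer substitutes the view’s definition into the outer query and optimizes the whole thing as one statement, so predicates written against the view can be pushed down to the underlying tables. A small sketch of that behaviour, with hypothetical object names:

```sql
CREATE VIEW dbo.ActiveOrders AS
    SELECT order_id, customer_id, order_total, created_at
    FROM dbo.Orders
    WHERE status = 'ACTIVE';
GO

-- The optimizer treats this as if the SELECT from dbo.Orders were written inline,
-- so the customer_id filter can use an index on dbo.Orders directly.
SELECT order_id, order_total
FROM dbo.ActiveOrders
WHERE customer_id = 12345;
```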
Checking multiple columns for one value with greater than or equal (>=)
Let’s say I have a table like this: I wish to check if any of col1, col2, col3, col4 is greater than or equal to 10. The idea was something like: Is there any more optimized way? I thought that I could use IN, but I don’t have any clue how to use >= with it. Answer: Assuming none of the values are NULL,
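Two ways to express the check, both assuming (as the answer notes) that none of the values are NULL; the table name t is hypothetical. The plain OR form works on any engine, while the GREATEST form is shorter where that function exists (PostgreSQL, MySQL, Oracle, SQL Server 2022+):

```sql
-- Works on any engine
SELECT *
FROM t
WHERE col1 >= 10 OR col2 >= 10 OR col3 >= 10 OR col4 >= 10;

-- Shorter equivalent where GREATEST is available
SELECT *
FROM t
WHERE GREATEST(col1, col2, col3, col4) >= 10;
```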
How could I speed up this SQL query?
I have this query: With this explain plan: https://explain.depesz.com/s/gJXC I have these indexes: Is it possible to further optimise this? There are only 30 time intervals for this table, so I feel like I should be able to get it faster. Answer: A main limitation here (at least if you have CPUs to spare) is that GROUPING SETS does not
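The truncated answer points at GROUPING SETS as the limiting factor. One common workaround when spare cores are available is to split each grouping set into its own GROUP BY and combine the results with UNION ALL, so each branch gets its own (potentially parallel) plan. A purely hypothetical sketch, since the original query is not shown:

```sql
-- Original shape (hypothetical):
--   SELECT interval_start, device_id, sum(value)
--   FROM readings
--   GROUP BY GROUPING SETS ((interval_start), (interval_start, device_id));

-- Split into independent aggregates so each can be planned and parallelized on its own
SELECT interval_start, NULL AS device_id, sum(value) AS total
FROM readings
GROUP BY interval_start

UNION ALL

SELECT interval_start, device_id, sum(value) AS total
FROM readings
GROUP BY interval_start, device_id;
```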
SQL query with multiple OR conditions inside AND returns null or empty
I’m trying to get those members who match my filter criteria. I am passing multiple OR conditions inside round brackets, and each bracketed group is ANDed with another bracketed group; however, the query does not work and returns an empty table, but whenever I run the query with INTERSECT
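A frequent cause of this symptom is that both bracketed OR groups filter the same column of the same row (for example an attribute name/value table), so no single row can satisfy both groups and the AND yields nothing, while INTERSECT compares members across different rows. A hypothetical sketch of the usual rewrite with one EXISTS per criteria group, assuming members and member_attributes(member_id, attr_value) tables that are not part of the original question:

```sql
SELECT m.member_id, m.member_name
FROM members AS m
WHERE EXISTS (                      -- first criteria group
        SELECT 1
        FROM member_attributes AS a
        WHERE a.member_id = m.member_id
          AND (a.attr_value = 'gold' OR a.attr_value = 'silver'))
  AND EXISTS (                      -- second criteria group, may match a different row
        SELECT 1
        FROM member_attributes AS a
        WHERE a.member_id = m.member_id
          AND (a.attr_value = 'eu' OR a.attr_value = 'us'));
```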
How does I/O on a database relate to query type?
So I have metrics that look like this. And here is my dumb question… Does the read/write I/O directly correlate to read/write queries? Using the example below, does that mean there was increased read query activity? I don’t have some flags enabled yet to get metrics at the transaction level, but I’ll eventually do so; it’s a black box for now.