I have two tables with >4 million records. I need to make a SELECT query where two columns match, bring both tables' values on that match, and then insert the result into a third table. This is table A: (bitfinex) This is table B: (Kraken) I need to do a SELECT where timestamp and exchange_pair match, as you can
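A minimal sketch of the join-and-insert being asked for, assuming a price column on each exchange table and a hypothetical target table named matched_prices; only timestamp and exchange_pair come from the question:

```sql
-- Hypothetical sketch: match bitfinex and kraken rows on timestamp and
-- exchange_pair and write the pairs into a third table. Column names other
-- than timestamp and exchange_pair are assumptions.
INSERT INTO matched_prices (ts, exchange_pair, bitfinex_price, kraken_price)
SELECT a."timestamp",
       a.exchange_pair,
       a.price AS bitfinex_price,
       b.price AS kraken_price
FROM   bitfinex a
JOIN   kraken   b
  ON   b."timestamp"   = a."timestamp"
 AND   b.exchange_pair = a.exchange_pair;
```

With >4 million rows on each side, a composite index on (exchange_pair, timestamp) on both tables is usually what makes this join practical.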
Tag: performance
Dramatic decrease in performance for Postgres query on Google SQL compared to my laptop. Why?
A rather complex (depending on standards) query running on a table with about 1M records, creating some temporary tables and building arrays and jsonb. On localhost I get an average of 2.5 seconds. On Google SQL I get 17-19 seconds. Note: other queries, like simple selects, are faster on the server than on local, as they should be. I did run vacuum, rebuilt
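When the same query is that much slower on Cloud SQL, comparing the actual plans and planner settings is the usual first step; a rough checklist (my_table and the filter are placeholders for the real query):

```sql
-- Compare the plans from both environments; a different join strategy or a
-- sequential scan that only appears on the server usually points at stale
-- statistics or different memory settings rather than raw hardware.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table WHERE id = 42;   -- substitute the real query

-- Refresh planner statistics after a restore or bulk load:
VACUUM ANALYZE my_table;

-- Settings worth comparing between the laptop and Cloud SQL:
SHOW work_mem;
SHOW shared_buffers;
SHOW effective_cache_size;
```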
SQL performance impact of multiple columns in a constraint or unique index
I created a table in Postgres that includes various columns based on an import from a CSV. Since the CSV can be re-uploaded with some changes and I don’t want the re-upload to create duplicates of rows that didn’t change, I added a constraint and a unique index with the majority of the columns included in them. Is there any performance
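A minimal sketch of the pattern being described, with placeholder table and column names: a multi-column unique constraint plus ON CONFLICT DO NOTHING lets a re-upload skip rows that have not changed.

```sql
-- Placeholder schema; the real table would list the CSV's columns.
CREATE TABLE import_rows (
    col_a text,
    col_b text,
    col_c text,
    CONSTRAINT import_rows_uq UNIQUE (col_a, col_b, col_c)
);

-- Re-uploads can then insert everything and silently drop exact duplicates:
INSERT INTO import_rows (col_a, col_b, col_c)
VALUES ('x', 'y', 'z')
ON CONFLICT (col_a, col_b, col_c) DO NOTHING;
```

The wide unique index is mostly a write-time cost: every insert has to check and maintain it, while reads that don't use it are largely unaffected.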
Is there a better way to execute this SQL query?
I have written this SQL query to get data for each customer in my database. As you can see, I’m trying to add the total of unpaid orders and the total of orders to my query. My goal at the end is to get only the users with unpaid orders (I think I will have to make it with a
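A hedged sketch of one common shape for this, with an assumed customers/orders schema and a placeholder paid flag: aggregate once per customer, then keep only customers with at least one unpaid order.

```sql
-- Placeholder schema: customers(id, name), orders(id, customer_id, paid).
SELECT c.id,
       c.name,
       COUNT(o.id)                                 AS total_orders,
       SUM(CASE WHEN o.paid = 0 THEN 1 ELSE 0 END) AS unpaid_orders
FROM   customers c
JOIN   orders    o ON o.customer_id = c.id
GROUP  BY c.id, c.name
HAVING SUM(CASE WHEN o.paid = 0 THEN 1 ELSE 0 END) > 0;
```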
Syntax performance of INNER JOIN
Is the performance of both these examples the same? Example 1: Example 2: I am using example #2 at the moment since I am joining 15+ tables, each with many unnecessary columns and many rows (1 million+). Answer: Oracle is smart enough and does not take all columns from table 1 and join them with all columns from table
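The question's code is not reproduced here, but the contrast it describes is typically between joining the raw tables and joining inline views that project only the needed columns, along these lines (table and column names assumed):

```sql
-- Example 1 style: join the full tables and pick columns at the end.
SELECT o.order_id, o.total, c.name
FROM   orders o
JOIN   customers c ON c.id = o.customer_id;

-- Example 2 style: pre-project each table in an inline view before joining.
SELECT o.order_id, o.total, c.name
FROM   (SELECT order_id, total, customer_id FROM orders) o
JOIN   (SELECT id, name FROM customers)                  c
  ON   c.id = o.customer_id;
```

The answer's point is that the optimizer only carries the referenced columns through the join either way, so both forms normally end up with the same plan.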
SQL Server: Clustered index considerably slower than equivalent non-clustered index
The Setup What I am about to describe is run on the following hardware: Disk: 6x 2TB HDD in RAID5 (w/ 1 redundant drive) CPU: Intel Xeon E5-2640 @ 2.4 GHz, 6 cores RAM: 64 GB SQL Server Version: SQL Server 2016 Developer Both SQL Server Management Studio (SSMS) and the SQL Server instance are running on this server. So
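For orientation, the two index kinds being compared look like this in T-SQL (table and column names here are placeholders, not the poster's schema):

```sql
-- Clustered: the table's rows are physically ordered by this key.
CREATE CLUSTERED INDEX IX_events_ts
    ON dbo.events (event_ts);

-- Non-clustered: a separate structure; INCLUDE can make it covering so a
-- query never has to touch the base table at all.
CREATE NONCLUSTERED INDEX IX_events_ts_nc
    ON dbo.events (event_ts)
    INCLUDE (payload);

-- Comparing actual plans and I/O between the two is the usual first step:
SET STATISTICS IO, TIME ON;
```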
Performance issue using the IsNull function in the SELECT statement
I have a financial application. I have ViewHistoricInstrumentValue, which has rows like this. My views are complicated, but the db itself is small (4,000 transactions). ViewHistoricInstrumentValue executed in less than 1 second before I added the next CTE to the view. After that it takes 26s. ActualEvaluationPrice is the price for instrumentX at dateY. If this value is missing
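A hedged T-SQL sketch of the usual rewrite for this kind of fallback (all object names are placeholders): a per-row IsNull over a correlated subquery tends to run the lookup for every row, whereas OUTER APPLY with TOP (1) expresses "use the latest earlier price when the exact-date price is missing" in a form the optimizer handles better.

```sql
-- Placeholder tables: HistoricValues(InstrumentId, ValuationDate),
-- Prices(InstrumentId, PriceDate, Price).
SELECT h.InstrumentId,
       h.ValuationDate,
       COALESCE(p.Price, fallback.Price) AS ActualEvaluationPrice
FROM   HistoricValues h
LEFT JOIN Prices p
       ON  p.InstrumentId = h.InstrumentId
       AND p.PriceDate    = h.ValuationDate
OUTER APPLY (
       -- latest price on or before the valuation date, used only when the
       -- exact-date price is missing
       SELECT TOP (1) p2.Price
       FROM   Prices p2
       WHERE  p2.InstrumentId = h.InstrumentId
         AND  p2.PriceDate   <= h.ValuationDate
       ORDER BY p2.PriceDate DESC
) AS fallback;
```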
Speeding Up Access 2016 Query
I have a query that contains, amongst other things, batsmanIDs and League names (extract below). I have put together another query to return all records where a batsman has played in each of two Leagues. The query works, but it is very, very slow. There are 48,000 records returned in the first query, but when I use that it runs
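A minimal sketch of the "played in both leagues" condition as a self-join (placeholder table, column, and league names); Access SQL lacks COUNT(DISTINCT ...), so the self-join form is the usual workaround and tends to beat feeding the full 48,000-row query back into another query.

```sql
-- BatsmanLeagues(batsmanID, League) stands in for the first query.
SELECT DISTINCT a.batsmanID
FROM BatsmanLeagues AS a
INNER JOIN BatsmanLeagues AS b
        ON b.batsmanID = a.batsmanID
WHERE a.League = 'League A'
  AND b.League = 'League B';
```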
Using Surrogate Keys in Data Warehouse Pros and Cons
A surrogate key is a mechanism that has existed in our books for years, and I hate to bring it into discussion again. Everyone talks about the benefits of using a surrogate key instead of a business key. Even Microsoft Analysis Services Tabular and Microsoft PowerBI Tabular models work with surrogate keys. Both platforms mentioned give you the ability
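For concreteness, a typical dimension table carrying a surrogate key alongside the business key might look like this (illustrative names only):

```sql
-- The narrow integer surrogate key is what fact tables reference; the
-- business key (customer_code) stays queryable but can change or repeat
-- across versions of the row (e.g. slowly changing dimensions).
CREATE TABLE dim_customer (
    customer_sk   INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    customer_code VARCHAR(20) NOT NULL,           -- business key from source
    customer_name VARCHAR(200),
    valid_from    DATE,
    valid_to      DATE
);
```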
Determining what index to create given a query?
Given a SQL query: how do you determine what index to create to improve the performance of the query? (Assuming every value to the right of the equals sign is calculated at runtime.) Is there a built-in SQL command to suggest the proper index? To me, it gets confusing when there are multiple JOINs that use fields from both tables.
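No standard SQL command suggests indexes (some engines ship tuning advisors), so the usual manual approach is to index the join and filter columns of each table and then confirm with the plan; a hypothetical example with invented names:

```sql
-- Equality/join columns lead the index; range-filtered columns go last.
CREATE INDEX idx_customers_country       ON customers (country, id);
CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);

-- Re-run the query under EXPLAIN (or the vendor's plan viewer) to confirm
-- the new indexes are actually chosen.
EXPLAIN
SELECT o.id, o.total
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  c.country = 'DE'
  AND  o.created_at >= '2024-01-01';
```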