
Tag: performance

Fastest way to determine if record exists

As the title suggests… I’m trying to figure out the fastest way with the least overhead to determine if a record exists in a table or not. Sample query: Say the ? is swapped with ‘TB100’… both the first and second queries will return the exact same result (say… 1 for this conversation). The last query will return ‘TB100’ as
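The sample queries are omitted from this excerpt, but a common recommendation for a pure existence check is EXISTS, which lets the engine stop at the first matching row instead of counting them all. A minimal sketch, assuming a hypothetical Products table with a ProductCode column (names not from the original post):

    -- COUNT(*) has to locate every matching row before it can answer:
    SELECT COUNT(*) FROM Products WHERE ProductCode = 'TB100';

    -- EXISTS can stop at the first match, which is usually the cheapest check:
    SELECT CASE WHEN EXISTS (SELECT 1 FROM Products WHERE ProductCode = 'TB100')
                THEN 1 ELSE 0 END AS record_exists;

With an index on the searched column, the EXISTS form typically resolves with a single seek.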

Normalize array subscripts so they start with 1

PostgreSQL can work with array subscripts starting anywhere. Consider this example that creates an array with 3 elements with subscripts from 5 to 7: Returns: We get the first element at subscript 5: I want to normalize 1-dimensional arrays to start with array subscript 1. The best I could come up with: The same, easier to read: Do you know
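The inline queries were stripped from this excerpt, so the following is only a sketch of the standard trick: a slice of a PostgreSQL array always comes back with subscripts starting at 1, so slicing from array_lower to array_upper re-bases the array. The element values below are placeholders, not the original example's contents.

    -- An array literal whose subscripts run from 5 to 7:
    SELECT '[5:7]={10,20,30}'::int[];
    -- returns [5:7]={10,20,30}; the first element sits at subscript 5

    -- Slicing over the full range returns the same elements re-based at subscript 1:
    SELECT arr[array_lower(arr, 1):array_upper(arr, 1)]
    FROM (SELECT '[5:7]={10,20,30}'::int[] AS arr) AS t;
    -- returns {10,20,30}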

SQL performance MAX()

Just got a small question: when trying to get the single max value of a table, which one is better? or I’m using Microsoft SQL Server 2012. Answer There will be no difference, as you can test yourself by inspecting the execution plans. If id is the clustered index, you should see an ordered clustered index scan; if it is not
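The two query forms are omitted from the excerpt; the usual pair compared in this situation is MAX() versus TOP (1) with a descending ORDER BY. A minimal sketch with a hypothetical dbo.SomeTable (name assumed, not from the original question):

    -- Both of these should produce the same plan when id is the clustered index:
    SELECT MAX(id) FROM dbo.SomeTable;
    SELECT TOP (1) id FROM dbo.SomeTable ORDER BY id DESC;

Enabling the actual execution plan in Management Studio for both statements is the quickest way to confirm they compile to the same ordered scan.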

SQL count(*) performance

I have a SQL table BookChapters with over 20 million rows. It has a clustered primary key (bookChapterID) and doesn’t have any other keys or indexes. It takes milliseconds to run the following query. However, it takes over 10 minutes when I change it like so or Why is that? How can I get select count(*) to execute faster? Answer
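The queries themselves are not shown in this excerpt. Two common remedies for a slow COUNT(*) over a wide clustered index are worth sketching; the BookChapters and bookChapterID names come from the question, everything else is an assumption:

    -- A narrow nonclustered index gives the engine far fewer pages to scan
    -- than the clustered index, which carries every column of each row:
    CREATE NONCLUSTERED INDEX IX_BookChapters_Count
        ON BookChapters (bookChapterID);

    -- Alternatively, read the row count from metadata; this is near-instant
    -- but only approximate while concurrent writes are in flight:
    SELECT SUM(row_count) AS approx_rows
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('BookChapters')
      AND index_id IN (0, 1);   -- heap or clustered index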

Index for nullable column

I have an index on a nullable column and I want to select all its values like this: In the explain plan I see a FULL TABLE SCAN (even a hint didn’t help). Does use the index… I googled and found out there are no null entries in indexes, thus the first query can’t use the index. My question is
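The queries were stripped from this excerpt, but the behaviour described matches Oracle's B-tree indexes, which do not store rows whose indexed columns are all NULL, so a WHERE col IS NULL predicate cannot be answered from a single-column index. A common workaround, sketched with placeholder names (t, col):

    -- Adding a constant as a second index expression forces every row,
    -- including the NULLs, into the index:
    CREATE INDEX idx_t_col ON t (col, 0);

    -- An IS NULL predicate can now be resolved with an index range scan:
    SELECT * FROM t WHERE col IS NULL;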
