I have the following table, and I want to get the maximum value along with its Date and ReceivedTime. Expected result: Answer This answer assumes that, in the event of two or more records being tied on a given day for the same highest value, you want to retain the single record with the most recent ReceivedTime. We can use DISTINCT ON here:
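The original table and expected result were not captured above, so the following is a minimal sketch with an assumed (date, received_time, value) schema. DISTINCT ON is PostgreSQL-specific (shown in a comment); the runnable version below uses a portable correlated subquery in SQLite that picks, per date, the row with the highest value, breaking ties by the latest received_time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (date TEXT, received_time TEXT, value REAL);
INSERT INTO readings VALUES
 ('2021-06-01', '08:00', 10.5),
 ('2021-06-01', '09:30', 25.0),
 ('2021-06-01', '11:00', 25.0),   -- ties 09:30 on value; later time wins
 ('2021-06-02', '07:15', 40.25);
""")

# PostgreSQL form:
#   SELECT DISTINCT ON (date) date, received_time, value
#   FROM readings
#   ORDER BY date, value DESC, received_time DESC;
#
# Portable equivalent: keep the row whose received_time is the first one
# when that date's rows are ranked by value DESC, received_time DESC.
rows = conn.execute("""
SELECT r.date, r.received_time, r.value
FROM readings r
WHERE r.received_time = (
    SELECT r2.received_time FROM readings r2
    WHERE r2.date = r.date
    ORDER BY r2.value DESC, r2.received_time DESC
    LIMIT 1)
ORDER BY r.date
""").fetchall()
print(rows)
```

Within a date, the `ORDER BY value DESC, received_time DESC ... LIMIT 1` subquery implements exactly the tie-break described in the answer.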
Tag: sql
Check if a record exists on a previous date but not the current date in the same table, and return the counts of matched and unmatched records
I am trying to get the count of records that were present on one date and also on another date. What would be the most efficient way?

id  date
AB  6/11/2021
AB  6/11/2021
BC  6/04/2021
BC  6/04/2021
AB  6/04/2021
AB  6/04/2021

This should return True = 2 (AB is present on 6/04/2021) and False = 2. Answer Per ID, if it's in more
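The answer above is truncated, so here is one hedged way to produce both counts with EXISTS / NOT EXISTS, assuming an (id, date) schema and dates normalized to ISO format; runnable in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE recs (id TEXT, date TEXT);
INSERT INTO recs VALUES
 ('AB', '2021-06-11'), ('AB', '2021-06-11'),
 ('BC', '2021-06-04'), ('BC', '2021-06-04'),
 ('AB', '2021-06-04'), ('AB', '2021-06-04');
""")

# matched:   rows on the current date whose id also appears on the previous
# unmatched: rows on the previous date whose id is missing from the current
matched, unmatched = conn.execute("""
SELECT
  (SELECT COUNT(*) FROM recs c
   WHERE c.date = '2021-06-11'
     AND EXISTS (SELECT 1 FROM recs p
                 WHERE p.date = '2021-06-04' AND p.id = c.id)),
  (SELECT COUNT(*) FROM recs p
   WHERE p.date = '2021-06-04'
     AND NOT EXISTS (SELECT 1 FROM recs c
                     WHERE c.date = '2021-06-11' AND c.id = p.id))
""").fetchone()
print(matched, unmatched)
```

With the sample data this gives True = 2 (the AB rows on 6/11, since AB appears on 6/04) and False = 2 (the BC rows on 6/04, since BC is absent on 6/11).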
BigQuery – SQL UPDATE and JOIN
I have two tables. Table1 = dalio, which is an event list with select customers. Table2 = master_list, which is a master customer list from all past events. dalio has an "id" column that needs to be filled in with customer numbers, which can be pulled from the master_list column "customer_no". All rows in the "id" column are currently blank.
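The question does not name the join key, so `email` below is a hypothetical one. BigQuery's `UPDATE ... FROM` form is shown in a comment; the runnable version uses the portable correlated-subquery form of UPDATE-with-JOIN, demonstrated in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dalio (email TEXT, id TEXT);            -- id starts blank
CREATE TABLE master_list (email TEXT, customer_no TEXT);
INSERT INTO dalio VALUES ('a@x.com', NULL), ('b@x.com', NULL);
INSERT INTO master_list VALUES ('a@x.com', 'C001'), ('b@x.com', 'C002');
""")

# BigQuery form (email is an assumed join key):
#   UPDATE dalio d SET id = m.customer_no
#   FROM master_list m
#   WHERE d.email = m.email;
#
# Portable correlated-subquery equivalent:
conn.execute("""
UPDATE dalio
SET id = (SELECT m.customer_no FROM master_list m
          WHERE m.email = dalio.email)
WHERE id IS NULL
""")
rows = conn.execute("SELECT email, id FROM dalio ORDER BY email").fetchall()
print(rows)
```

The `WHERE id IS NULL` guard restricts the update to the blank rows, matching the question's setup.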
In MySQL, when ordering by more than one condition, how do I treat "false" the same way as "null"?
I have a select statement where I want to order by a boolean column first and then by a date column. The goal is to put records with boolean = true at the top; then the records with boolean = false OR boolean IS NULL should be sorted by the date column. The statement is like However,
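One way to fold false and NULL together is to coalesce the boolean to 0 in the ORDER BY, so both sort as the same value and the date column then orders that group. A minimal sketch with assumed column names (flag, d), runnable in SQLite but using only syntax MySQL also accepts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (name TEXT, flag INTEGER, d TEXT);
INSERT INTO items VALUES
 ('a', 1,    '2021-03-01'),
 ('b', 0,    '2021-01-01'),
 ('c', NULL, '2021-02-01'),
 ('d', 1,    '2021-04-01');
""")

# COALESCE maps NULL to 0, so false and NULL land in the same sort group;
# the date column then orders rows within each group.
names = [r[0] for r in conn.execute("""
SELECT name FROM items
ORDER BY COALESCE(flag, 0) DESC, d
""")]
print(names)
```

The true rows ('a', 'd') come first, then the false/NULL rows ('b', 'c') in date order.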
Oracle SQL find columns with different values
I have two tables, A and B, both with some millions of rows and around one hundred columns. I want to find which columns have different values without having to list the names of all the columns. For example, suppose column ID is the primary key in both tables, and that table A is while table B is The result
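The usual approach is to read the column names from the data dictionary (ALL_TAB_COLUMNS in Oracle) instead of typing them out, then build one null-safe mismatch counter per column. A sketch of that idea with hypothetical two-column tables, using SQLite's PRAGMA in place of Oracle's dictionary view:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (id INTEGER PRIMARY KEY, c1 TEXT, c2 TEXT);
CREATE TABLE B (id INTEGER PRIMARY KEY, c1 TEXT, c2 TEXT);
INSERT INTO A VALUES (1, 'x', 'p'), (2, 'y', 'q');
INSERT INTO B VALUES (1, 'x', 'p'), (2, 'z', 'q');
""")

# Discover the column names programmatically; Oracle would query
# ALL_TAB_COLUMNS for the same purpose.
cols = [r[1] for r in conn.execute("PRAGMA table_info(A)") if r[1] != 'id']

# One mismatch counter per column, joined on the primary key.
# IS NOT is a null-safe inequality (0/1), so NULLs compare correctly.
exprs = ", ".join(f"SUM(A.{c} IS NOT B.{c}) AS {c}_diff" for c in cols)
diffs = conn.execute(
    f"SELECT {exprs} FROM A JOIN B ON A.id = B.id").fetchone()
print(dict(zip(cols, diffs)))
```

Any column whose counter is nonzero has at least one differing observation; in Oracle the generated expressions would use `DECODE(a.col, b.col, 0, 1)` for the same null-safe comparison.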
Find total IDs between two dates that satisfy a condition
I have a dataset PosNeg like this. I need to find the count of IDs that have a pattern like this: P N P P or N P N N P N, that is, having at least one N (negative) between two P's (positive). If this pattern occurs at least once, count that ID. Date is always in ascending order.
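One way to express "at least one N between two P's" is: the ID has some N row with a P strictly before it and a P strictly after it (dates are ascending, so date order is row order). A sketch with an assumed (id, date, result) schema, runnable in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PosNeg (id TEXT, date TEXT, result TEXT);
INSERT INTO PosNeg VALUES
 ('1', '2021-01-01', 'P'), ('1', '2021-01-02', 'N'), ('1', '2021-01-03', 'P'),
 ('2', '2021-01-01', 'N'), ('2', '2021-01-02', 'N'), ('2', '2021-01-03', 'P'),
 ('3', '2021-01-01', 'P'), ('3', '2021-01-02', 'P');
""")

# Count each id once (DISTINCT) if any of its N rows is sandwiched
# between an earlier P and a later P.
count = conn.execute("""
SELECT COUNT(DISTINCT n.id) FROM PosNeg n
WHERE n.result = 'N'
  AND EXISTS (SELECT 1 FROM PosNeg p
              WHERE p.id = n.id AND p.result = 'P' AND p.date < n.date)
  AND EXISTS (SELECT 1 FROM PosNeg p
              WHERE p.id = n.id AND p.result = 'P' AND p.date > n.date)
""").fetchone()[0]
print(count)
```

Only id 1 has P N P, so the count is 1; id 2's N's have no P before them, and id 3 has no N at all.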
Summation of 2 floating-point values gives incorrect result with higher precision
The sum of two floating-point values in Postgres gives a result with higher precision than expected. Expected: Actual: The result has higher precision than expected, and the same operation with a different set of inputs behaves differently. Here I have reduced the problem statement to finding the two numbers for which the issue exists. The actual problem is when I do this on
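This is standard IEEE-754 binary floating-point behaviour, not a Postgres bug: most decimal fractions have no exact binary representation, so sums pick up trailing digits. Postgres `float8` behaves exactly like Python's `float` here, and exact decimal arithmetic (Postgres `numeric`, Python `Decimal`) avoids it:

```python
from decimal import Decimal

# Binary floating point: 0.1 and 0.2 are both approximations,
# so their sum is not exactly 0.3.
float_sum = 0.1 + 0.2

# Decimal arithmetic represents the values exactly, like numeric in Postgres.
exact_sum = Decimal('0.1') + Decimal('0.2')
print(float_sum, exact_sum)
```

The practical fix is to store or cast such values as `numeric` when exact decimal results matter, or round at presentation time.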
Assistance with PERCENTILE_CONT function and GROUP By error
All, I am having problems with the query below. I am trying to get stat data from our database for the last 3 years, but I keep getting the error message: ***Column 'OC_VDATA.DATA1' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.*** I know it has something to
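The error typically arises because SQL Server's PERCENTILE_CONT is a window function, not an aggregate: it needs `OVER (PARTITION BY ...)` rather than GROUP BY, often combined with SELECT DISTINCT to collapse the repeated rows (the partition column `grp` below is hypothetical), e.g. `SELECT DISTINCT grp, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY DATA1) OVER (PARTITION BY grp) FROM OC_VDATA`. The per-group interpolated median it computes can be sketched in plain Python:

```python
from collections import defaultdict
from statistics import median

# Hypothetical (group, DATA1) rows standing in for OC_VDATA.
rows = [('a', 10), ('a', 20), ('a', 40), ('b', 5), ('b', 7)]

# Bucket values by group, then take the median of each bucket;
# statistics.median interpolates for even-sized groups, like
# PERCENTILE_CONT(0.5).
groups = defaultdict(list)
for grp, val in rows:
    groups[grp].append(val)
medians = {grp: median(vals) for grp, vals in groups.items()}
print(medians)
```

Group 'a' has an odd count so the middle value is returned; group 'b' has an even count so the two middle values are averaged, which is the continuous-interpolation behaviour PERCENTILE_CONT provides.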
Python Script to run SQL query UPDATE statement to loop through each row in result set and update columns
Newbie working on my first project; sorry for the long explanation. I have two tables. t1: master table with single rows (unique project_id) and three status fields, s1, s2, s3. t2: list table with repeating project_ids and the same three status fields s1, s2, s3 (and other data not relevant here). The value in the s1-s3 fields is either true (1) or false (0). table1: project_id, status1, status2,
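The question is truncated, so the aggregation rule below is an assumption: a master-row status is set to 1 when any list row for that project has it. The sketch shows the loop structure asked about, iterating a grouped result set and issuing one UPDATE per row, using SQLite in place of the unnamed database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (project_id TEXT PRIMARY KEY, s1 INT, s2 INT, s3 INT);
CREATE TABLE t2 (project_id TEXT, s1 INT, s2 INT, s3 INT);
INSERT INTO t1 VALUES ('p1', 0, 0, 0), ('p2', 0, 0, 0);
INSERT INTO t2 VALUES ('p1', 1, 0, 0), ('p1', 0, 1, 0), ('p2', 0, 0, 1);
""")

# MAX over 0/1 columns acts as a per-project logical OR. fetchall()
# materialises the result set before we start issuing UPDATEs.
result_set = conn.execute("""
    SELECT project_id, MAX(s1), MAX(s2), MAX(s3)
    FROM t2 GROUP BY project_id""").fetchall()

# Loop through each row in the result set and update the master table.
for pid, s1, s2, s3 in result_set:
    conn.execute("UPDATE t1 SET s1=?, s2=?, s3=? WHERE project_id=?",
                 (s1, s2, s3, pid))

rows = conn.execute("SELECT * FROM t1 ORDER BY project_id").fetchall()
print(rows)
```

Parameterised placeholders (`?`) keep the UPDATE safe and reusable; with a real server you would swap `sqlite3` for the matching DB-API driver and `%s`-style placeholders where required.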
How to handle NULL values in WHERE clause and change target column based upon its encounter
I need the WHERE clause to change which column it evaluates when NULL is encountered. For instance, I'm trying to do something like this: Is something like this possible? One of the 3 cust_id's will not be NULL. Answer That seems like less-than-optimal table design, but isn't a simple COALESCE what you're after?
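The COALESCE suggestion can be sketched as follows, with a hypothetical three-column layout: COALESCE returns the first non-NULL of its arguments, so the WHERE clause automatically "moves on" to the next cust_id column when one is NULL. Runnable in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (cust_id1 INT, cust_id2 INT, cust_id3 INT, amt INT);
INSERT INTO orders VALUES (7,    NULL, NULL, 100),
                          (NULL, 7,    NULL, 200),
                          (NULL, NULL, 9,    300);
""")

# COALESCE picks the first non-NULL id per row, so customer 7 matches
# whichever of the three columns happens to hold the value.
total = conn.execute("""
SELECT SUM(amt) FROM orders
WHERE COALESCE(cust_id1, cust_id2, cust_id3) = 7
""").fetchone()[0]
print(total)
```

The first two rows match customer 7 through different columns, giving 300; the third row resolves to 9 and is excluded.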