Here is my JSON field value, which is stored in a PostgreSQL table. I want to find the specific user_name under the user key whose name is Devang and update it to Dev using Django's JSONField. For example, I have tried a RawQuery for the find. This is the query. It will return like this. I have also tried JSON_SET to
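A minimal sketch of the update side, assuming a table myapp_profile with a jsonb column data shaped like {"user": {"user_name": "Devang"}} (the table and column names are hypothetical). Note that JSON_SET is MySQL syntax; the PostgreSQL equivalent is jsonb_set:

```sql
-- jsonb_set(target, path, new_value) replaces the value at the path;
-- the new value must itself be valid JSON, hence the quoted '"Dev"'.
UPDATE myapp_profile
SET    data = jsonb_set(data, '{user,user_name}', '"Dev"')
WHERE  data -> 'user' ->> 'user_name' = 'Devang';
```

From Django this could be run through a RawSQL/raw query, or the rows to update could first be located with an ORM filter such as `.filter(data__user__user_name='Devang')`.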
Tag: postgresql
SQL count longest consecutive days with dates table
I have a login table with columns UserID and login_date like below (but with thousands of entries). How would I find the longest streak of consecutive log-ins for each user and what the start and end days were for those streaks? Edit: Using Postgres. Removed question about streaks only counting M-F and not weekends (Do I make a separate post
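One common way to approach this is the gaps-and-islands technique: for consecutive dates, login_date minus a per-user row number is constant, so that difference identifies each streak. A sketch, assuming a table logins(UserID, login_date) with one row per user per day:

```sql
-- Rows in the same streak share the same "grp" value, so grouping by it
-- yields one row per streak with its start, end, and length.
SELECT UserID,
       MIN(login_date) AS streak_start,
       MAX(login_date) AS streak_end,
       COUNT(*)        AS streak_length
FROM (
  SELECT UserID, login_date,
         login_date - ROW_NUMBER() OVER (
           PARTITION BY UserID ORDER BY login_date
         )::int AS grp
  FROM logins
) s
GROUP BY UserID, grp
ORDER BY UserID, streak_length DESC;
```

Keeping only the top row per user (e.g. with DISTINCT ON or a rank filter) would give each user's longest streak.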
Postgresql – Using CASE and PARTITION BY clause to create effective and thru dates
I'm new to PostgreSQL. I have been working with SQL Server, so I have limited experience with PostgreSQL. I'm trying to convert some SQL Server queries to PostgreSQL and ran into the following issue. Suppose I have the following table:

key  date        value
A    2000-01-01  1
A    2001-01-01  2
A    2002-01-01  3
B    2001-01-01  4
B    2002-01-01  5
B    2003-01-01
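For effective/thru date pairs, the usual PostgreSQL construct is the LEAD window function: each row's thru date is the day before the next row's date within the same key. A sketch using the column names from the question (table name t is an assumption):

```sql
-- LEAD looks at the next row in the partition; subtracting 1 from a date
-- moves back one day. COALESCE supplies a far-future thru date for the
-- last row of each key.
SELECT key,
       value,
       date AS eff_date,
       COALESCE(
         LEAD(date) OVER (PARTITION BY key ORDER BY date) - 1,
         DATE '9999-12-31'
       ) AS thru_date
FROM t;
```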
how to iterate through geojson elements
I want to execute the code in this question https://gis.stackexchange.com/questions/142391/storing-geojson-featurecollection-to-postgresql-with-postgis/142479#142479 but when I run the app I receive the following error. Please let me know how to fix it. code: attempts: Answer: From the database perspective the query works just fine, but the issue seems to be in the query building. Your query has a JSON document containing multiple “
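On the database side, a FeatureCollection can be expanded into one row per feature with jsonb_array_elements, rather than iterated in application code. A sketch, where :geojson stands for the bound GeoJSON text parameter (an assumption about how the app passes it in):

```sql
-- Each "feat" is one Feature object; its geometry is converted with
-- PostGIS's ST_GeomFromGeoJSON and its properties kept as jsonb.
SELECT ST_GeomFromGeoJSON((feat -> 'geometry')::text) AS geom,
       feat -> 'properties'                           AS props
FROM   jsonb_array_elements(:geojson::jsonb -> 'features') AS f(feat);
```

Wrapped in an INSERT ... SELECT, this stores the whole collection in one statement.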
Why two logically same conditions in postgresql case clause have different behavior?
I have two queries in PostgreSQL. As you might have noticed, the result of the first condition in both queries is true, but the result of the first query is null and the result of the second one is ERROR: division by zero. What is happening here? Is there some optimization of the order of evaluation occurring? If so, is there any way to
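A likely explanation, per the PostgreSQL documentation on expression evaluation rules: CASE does not prevent early evaluation of constant subexpressions, because the planner folds constants during planning, before any WHEN condition is tested. A minimal illustration:

```sql
-- The ELSE branch is unreachable for this data, but 1/0 is a constant
-- subexpression, so the planner is likely to evaluate it while
-- simplifying the plan and fail with "division by zero".
SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END
FROM (VALUES (1)) AS v(x);
```

When the dangerous expression depends on a column instead of being constant (e.g. 1/x guarded by WHEN x <> 0), CASE does protect it, which would explain why two logically equivalent conditions behave differently.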
Postgresql – Looping through array_agg
I have a table from which I need to calculate the number of times intent_level changes for each id. Sample table format: I tried using array_agg, but I couldn't figure out how to loop through the aggregated array and compare consecutive elements. Which gives output: Desired
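Looping over array_agg is usually unnecessary here: the LAG window function exposes the previous row directly, so changes can be counted with a plain aggregate. A sketch, assuming a table t(id, ts, intent_level) where ts is whatever column defines the row order (an assumption, since the ordering column is not shown):

```sql
-- LAG(intent_level) is the previous row's value within the same id;
-- IS DISTINCT FROM treats NULLs sanely when comparing.
SELECT id,
       COUNT(*) FILTER (WHERE intent_level IS DISTINCT FROM prev) AS changes
FROM (
  SELECT id, intent_level,
         LAG(intent_level) OVER (PARTITION BY id ORDER BY ts) AS prev
  FROM t
) s
WHERE prev IS NOT NULL          -- skip each id's first row, which has no predecessor
GROUP BY id;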
get most frequent values in every month in 2021
Trying to get the most frequent values in every month from tables. inspection table: (can be ignored -> FOREIGN KEY (lno) REFERENCES restaurant) data: query: output:

month  id
3      333
4      222
5      222
6      333

expected output:

month  id
3      333
4      222
5      111
6      222
6      333

Answer: IMHO you don't need
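The expected output keeps ties (month 6 has two ids), which points at RANK over per-month counts rather than a single mode per month. A sketch, assuming a table inspection(lno, insp_date), names inferred from the foreign-key fragment and therefore assumptions:

```sql
-- Count inspections per (month, id), then rank ids within each month by
-- that count; RANK keeps every id tied for first place.
SELECT month, id
FROM (
  SELECT EXTRACT(MONTH FROM insp_date) AS month,
         lno                           AS id,
         RANK() OVER (PARTITION BY EXTRACT(MONTH FROM insp_date)
                      ORDER BY COUNT(*) DESC) AS rnk
  FROM inspection
  WHERE insp_date >= DATE '2021-01-01'
    AND insp_date <  DATE '2022-01-01'
  GROUP BY 1, 2
) s
WHERE rnk = 1
ORDER BY month, id;
```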
Field value counts with aggregate conditions
Suppose I have the following applicant data for jobs in a company: The budget is 40000 and the preference is to hire senior managers. What PostgreSQL constructs do I use to get the following result, as far as the number of hires is concerned? Any directions would be appreciated. Here is a starting sqlfiddle: http://sqlfiddle.com/#!17/2cef4/1 Answer: Using PostgreSQL filters and
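The FILTER clause the answer alludes to lets one SELECT produce several conditional counts in a single pass. A sketch of the construct only, since the real schema is in the sqlfiddle; the table and column names here (applicant, role, salary) are assumptions:

```sql
-- Each FILTER restricts one aggregate to the rows matching its WHERE;
-- the budget figure 40000 comes from the question.
SELECT COUNT(*) FILTER (WHERE role =  'senior manager') AS senior_hires,
       COUNT(*) FILTER (WHERE role <> 'senior manager') AS other_hires,
       SUM(salary)                                      AS total_cost
FROM   applicant;
```

Enforcing the 40000 cap while preferring seniors would additionally need a running SUM(salary) window ordered seniors-first, filtered to rows where the running total stays within budget.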
SQL query not returning all possible rows
I am writing a simple SQL query to get the latest records for every customerid. My SQL query is below. Sample data:

customerid  device_count  date_time
A           3573          2021-07-26 02:15:09-05:00
A           4             2021-07-26 02:15:13-05:00
A           16988         2021-07-26 02:15:13-05:00
A           20696         2021-07-26 02:15:13-05:00
A           24655         2021-07-26 02:15:13-05:00
A           10000         2021-07-25 02:15:13-05:00
A           2000          2021-07-25 02:15:13-05:00

What I need is:

customerid  device_count  date_time
A
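Since several rows share the latest date_time in the sample, a rank-based filter keeps all of them, whereas PostgreSQL's DISTINCT ON would keep only one row per customer. A sketch, with the table name readings as an assumption:

```sql
-- RANK assigns 1 to every row tied for the newest date_time within a
-- customerid, so ties at the latest timestamp all survive the filter.
SELECT customerid, device_count, date_time
FROM (
  SELECT *,
         RANK() OVER (PARTITION BY customerid
                      ORDER BY date_time DESC) AS rnk
  FROM readings
) s
WHERE rnk = 1;
```

If only one arbitrary latest row per customer were wanted, `SELECT DISTINCT ON (customerid) ... ORDER BY customerid, date_time DESC` would be the shorter idiom.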
Take last hour and group it by 1 minute
I was wondering if you can help me write a query that should just SELECT count(*), but only include data from the last hour and group it by minute. I have a table with a createdts column, so I have the date there. I just want to see how many entries I have in the last hour, but group COUNT(*)
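A sketch of one way to do this, assuming a table events with a timestamptz column createdts (the table name is an assumption; the column name comes from the question):

```sql
-- date_trunc('minute', ...) collapses every timestamp in the same minute
-- to one bucket; the WHERE clause limits the scan to the last hour.
SELECT date_trunc('minute', createdts) AS minute,
       COUNT(*)                        AS entries
FROM   events
WHERE  createdts >= now() - interval '1 hour'
GROUP  BY 1
ORDER  BY 1;
```

Note this returns only minutes that have at least one row; empty minutes would need a generate_series join to appear as zero counts.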