I have a table like this, and I am doing normal pivoting, which is not giving the desired result. I want to get it in this way: I tried doing it like this: But it's not giving the expected output. Can anyone suggest what can be done to get the desired result? A normal SQL or DataFrame solution would both be helpful. Answer
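The table and the attempted query are elided above, so this is only a generic sketch of the two usual pivoting routes in Spark; the columns id, year and amount are made up for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical long-format data to illustrate the pattern.
spark.createDataFrame(
    [("p1", "2021", 10), ("p1", "2022", 20), ("p2", "2021", 5)],
    ["id", "year", "amount"],
).createOrReplaceTempView("t")

# SQL PIVOT form...
spark.sql("""
    SELECT * FROM t
    PIVOT (SUM(amount) FOR year IN ('2021', '2022'))
""").show()

# ...and the equivalent DataFrame form.
spark.table("t").groupBy("id").pivot("year").sum("amount").show()
```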
Tag: apache-spark-sql
doing sum of columns based on some complex logic in pyspark
Here is the question in the attached image: Table: The result column is calculated based on the rules below. If col3 > 0, then result = col1 + col2. If col3 = 0, then result = sum(col2) until col3 > 0, plus col1 of the row where col3 > 0. For example, for row 3 the result = 60 + 70 + 80 + 30 (col1 from row 5, because col3 > 0 there) = 240; for row 4, the result = 70 + 80 + 30 (from col1 from row 5
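A minimal PySpark sketch of one way to implement those rules, assuming an explicit row_id ordering column and made-up sample values: each col3 = 0 row is grouped with the next col3 > 0 row, and a forward-looking window sums col2 up to that closing row and adds its col1.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical input; row_id provides the row order used in the question.
df = spark.createDataFrame(
    [(1, 10, 20, 5), (2, 20, 30, 4), (3, 15, 60, 0),
     (4, 25, 70, 0), (5, 30, 80, 9)],
    ["row_id", "col1", "col2", "col3"],
)

# Tag each row with the id of the next row (inclusive) where col3 > 0,
# so every col3 = 0 row is grouped with the col3 > 0 row that closes it.
w_fwd = Window.orderBy("row_id").rowsBetween(Window.currentRow, Window.unboundedFollowing)
df = df.withColumn("grp", F.min(F.when(F.col("col3") > 0, F.col("row_id"))).over(w_fwd))

# Within each group, sum col2 from the current row to the closing row and
# add col1 of the closing row; rows with col3 > 0 keep col1 + col2.
w_grp = Window.partitionBy("grp").orderBy("row_id").rowsBetween(Window.currentRow, Window.unboundedFollowing)
closing_col1 = F.last("col1").over(w_grp)   # col1 of the row where col3 > 0
result = F.when(F.col("col3") > 0, F.col("col1") + F.col("col2")) \
          .otherwise(F.sum("col2").over(w_grp) + closing_col1)

df.withColumn("result", result).orderBy("row_id").show()
```

With these sample values, row 3 yields 60 + 70 + 80 + 30 = 240 and row 4 yields 70 + 80 + 30 = 180, matching the worked example.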
pylint equivalent for SQL?
Python has pylint and Scala has Scalastyle. I searched around but didn't find a style checker for SQL. Does it exist? Thank you. Answer You don't require any error checker for SQL, as SQL is not a programming language. The IDE you use will help you understand issues in the query, and it can be formatted accordingly. Please choose appropriate
Row comparison in table via SQL
I have a table which is structured like the following: Is there a way to build a SQL query which, for each ID, looks for the Day on which Value1 or Value2 has changed? The result I would like to achieve would be this: In which I can keep track of those changes per ID per Day. Edit:
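The table and expected output are elided, so this is only a sketch under assumed column names (ID, Day, Value1, Value2): compare each row with the previous row per ID using LAG and keep the days where either value changed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data; the real column names come from the elided table.
spark.createDataFrame(
    [(1, "2021-01-01", 10, 100), (1, "2021-01-02", 10, 100),
     (1, "2021-01-03", 20, 100), (2, "2021-01-01", 5, 50),
     (2, "2021-01-02", 5, 60)],
    ["ID", "Day", "Value1", "Value2"],
).createOrReplaceTempView("t")

# Keep the days where Value1 or Value2 differs from the previous day per ID.
spark.sql("""
    SELECT ID, Day, Value1, Value2
    FROM (
        SELECT *,
               LAG(Value1) OVER (PARTITION BY ID ORDER BY Day) AS prev_v1,
               LAG(Value2) OVER (PARTITION BY ID ORDER BY Day) AS prev_v2
        FROM t
    ) x
    WHERE prev_v1 IS NOT NULL
      AND (Value1 <> prev_v1 OR Value2 <> prev_v2)
""").show()
```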
Spark SQL INSERTION TECHNIQUE for Result got from calculation or insertion
I'm using the code below: Entire code here for better understanding: This code gives an error while inserting. Any help would be great. Error: Answer This worked: The arrangement. Just ('"____"') is all I wanted to know.
Spark SQL to join two results from same table
I have a table called "Sold_Items" like below, and I want to use Spark SQL to get the net sell volumes for each participant.
Item Buyer Seller Qty
----------------------------------
A …
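The table is truncated above, so the rows below are made up; the sketch assumes "net sell volume" means quantity sold minus quantity bought, counting each row once as +Qty for the seller and once as -Qty for the buyer before grouping by participant.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical rows standing in for the truncated Sold_Items table.
spark.createDataFrame(
    [("A", "P1", "P2", 10), ("A", "P2", "P3", 4), ("B", "P3", "P1", 7)],
    ["Item", "Buyer", "Seller", "Qty"],
).createOrReplaceTempView("Sold_Items")

# Net sell volume per participant: quantity sold minus quantity bought.
spark.sql("""
    SELECT participant, SUM(qty) AS net_sell_qty
    FROM (
        SELECT Seller AS participant,  Qty AS qty FROM Sold_Items
        UNION ALL
        SELECT Buyer  AS participant, -Qty AS qty FROM Sold_Items
    ) x
    GROUP BY participant
""").show()
```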
How to cast from double to int in from_json Spark SQL (NULL output)
I have a table with a JSON string. When running this Spark SQL query: select from_json('[{"column_1":"hola", "some_number":1.0}]', 'array
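The schema string is cut off above; the NULL typically shows up when the schema declares some_number as an int while the JSON carries the double 1.0, so one workaround is to parse the field as a double and cast it afterwards. A small sketch with an assumed setup (a single-row DataFrame holding the JSON literal):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

json_str = '[{"column_1":"hola", "some_number":1.0}]'
df = spark.range(1).select(F.lit(json_str).alias("js"))

# Parse some_number as DOUBLE so from_json can read the JSON value 1.0,
# then cast the parsed field to INT instead of declaring INT in the schema.
parsed = df.select(
    F.from_json("js", "array<struct<column_1:string, some_number:double>>").alias("arr")
)
parsed.select(
    F.col("arr")[0]["column_1"].alias("column_1"),
    F.col("arr")[0]["some_number"].cast("int").alias("some_number"),
).show()
```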
Query the Column based on the value of another column
Hi, I have a table structure like this:
id-rank  id  name  value       rank
1-1      1   abc   somevalue1  1
1-2      1   abc   somevalue2  2
1-3      1   abc   somevalue3  3
2-1      2   abc   somevalue4  1
3-1      3   …
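The question is cut off before the desired output, so this sketch only covers one common reading of it: for each id, return the row whose rank is the highest. The sample rows mirror the table above.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("1-1", 1, "abc", "somevalue1", 1), ("1-2", 1, "abc", "somevalue2", 2),
     ("1-3", 1, "abc", "somevalue3", 3), ("2-1", 2, "abc", "somevalue4", 1)],
    ["id_rank", "id", "name", "value", "rank"],
)

# Keep, per id, the row whose rank is the highest for that id.
w = Window.partitionBy("id")
df.withColumn("max_rank", F.max("rank").over(w)) \
  .where(F.col("rank") == F.col("max_rank")) \
  .drop("max_rank") \
  .show()
```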
SQL – Get the antepenultimate (before previous group/phase)
I have the following table and I'd like to get the antepenultimate, i.e. the value before the previous value. I already have group, value, prev_value, dateint… I'm trying to derive prev_prev_value. This is the table with test data (as a CTE). Any ideas on how to derive prev_prev_value? I'd like to use window functions and avoid joins. I've tried
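If prev_prev_value is simply the value two rows back within each group, LAG with an offset of 2 derives it with window functions alone, no joins. A sketch with made-up rows (grp, dateint and value are assumed names, since the CTE is elided):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical rows standing in for the CTE in the question.
spark.createDataFrame(
    [("g1", 20210101, 10), ("g1", 20210102, 20),
     ("g1", 20210103, 30), ("g1", 20210104, 40)],
    ["grp", "dateint", "value"],
).createOrReplaceTempView("t")

# LAG(value, 1) is the existing prev_value; LAG(value, 2) is the
# antepenultimate (prev_prev_value), with no self-join required.
spark.sql("""
    SELECT grp, dateint, value,
           LAG(value, 1) OVER (PARTITION BY grp ORDER BY dateint) AS prev_value,
           LAG(value, 2) OVER (PARTITION BY grp ORDER BY dateint) AS prev_prev_value
    FROM t
""").show()
```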
how to pass value from one dataframe to another dataframe?
I have to pass the C_ID value to the where condition in the data frame below as a parameter. Any suggestions on how I can do this? I should not use a subquery, as the data is in the millions and multiple tables are involved in the joins; here I have mentioned a sample query. Answer Store the SQL result in a variable using mkString and then use
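The answer's mkString approach is Scala; a PySpark equivalent of the same idea (collect the C_ID values to the driver, then splice them into the second query as a literal IN-list) might look like this, with df1 and df2 as hypothetical stand-ins for the real tables:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical tables; df1 holds the C_ID values used to filter df2.
spark.createDataFrame([(101,), (102,)], ["C_ID"]).createOrReplaceTempView("df1")
spark.createDataFrame([(101, "a"), (103, "b")], ["C_ID", "val"]).createOrReplaceTempView("df2")

# Collect the C_ID values to the driver and splice them into the second
# query as a literal IN-list (what mkString does on the Scala side).
c_ids = ",".join(str(r["C_ID"]) for r in spark.sql("SELECT C_ID FROM df1").collect())

spark.sql(f"SELECT * FROM df2 WHERE C_ID IN ({c_ids})").show()
```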