The following table was created using Parquet / PySpark, and the objective is to aggregate rows where 1 < count < 5 and rows where 2 < count < 6. Note that the row where count is 4.1 falls in both ranges.
```
+-----+-----+
|count|value|
+-----+-----+
|  1.1|    1|
|  1.2|    2|
|  4.1|    3|
|  5.5|    4|
|  5.6|    5|
|  5.7|    6|
+-----+-----+
```
Here is code to create and then read the above table as a PySpark DataFrame.
```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from pyspark import SparkContext
from pyspark.sql import SQLContext  # SQLContext lives in pyspark.sql

# create Parquet dataset
pdf = pd.DataFrame({
    'count': [1.1, 1.2, 4.1, 5.5, 5.6, 5.7],
    'value': [1, 2, 3, 4, 5, 6]})
table = pa.Table.from_pandas(pdf)
pq.write_to_dataset(table, r'c:/data/data.parquet')

# read Parquet dataset as a DataFrame and register a temporary view
sc = SparkContext()
sql = SQLContext(sc)
df = sql.read.parquet(r'c:/data/data.parquet')
df.createTempView('data')
```
The aggregation can be done with two separate queries.
```python
q1 = sql.sql("""
    SELECT AVG(value) AS va
    FROM data
    WHERE count > 1 AND count < 5
""")
```

```
+---+
| va|
+---+
|2.0|
+---+
```
and, similarly:
```python
q2 = sql.sql("""
    SELECT AVG(value) AS va
    FROM data
    WHERE count > 2 AND count < 6
""")
```

```
+---+
| va|
+---+
|4.5|
+---+
```
However, I want to do this in one efficient query.
Here is an approach that does not work, because the row where count is 4.1 is included in only one group.
```python
qc = sql.sql("""
    SELECT AVG(value) AS va,
           (CASE WHEN count > 1 AND count < 5 THEN 1
                 WHEN count > 2 AND count < 6 THEN 2
                 ELSE 0
            END) AS id
    FROM data
    GROUP BY id
""")
```
The above query produces:
```
+---+---+
| va| id|
+---+---+
|2.0|  1|
|5.0|  2|
+---+---+
```
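The reason is that a CASE expression returns the first matching branch, so the 4.1 row is assigned id 1 and never contributes to group 2. A plain-Python sketch of that grouping logic, with the table's rows hard-coded, makes this concrete:

```python
from collections import defaultdict

# (count, value) rows from the table above
rows = [(1.1, 1), (1.2, 2), (4.1, 3), (5.5, 4), (5.6, 5), (5.7, 6)]

def case_id(count):
    # mirrors the CASE expression: the first matching WHEN wins
    if 1 < count < 5:
        return 1
    if 2 < count < 6:
        return 2
    return 0

groups = defaultdict(list)
for count, value in rows:
    groups[case_id(count)].append(value)

averages = {gid: sum(vs) / len(vs) for gid, vs in groups.items()}
print(averages)  # the 4.1 row lands only in group 1: {1: 2.0, 2: 5.0}
```

Group 2 averages values 4, 5, 6 only, which is exactly the 5.0 the query above produced.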
To be clear, the desired result is something more like:

```
+---+---+
| va| id|
+---+---+
|2.0|  1|
|4.5|  2|
+---+---+
```
Answer
The simplest method is probably UNION ALL:
```sql
SELECT 1, AVG(value) AS va
FROM data
WHERE count > 1 AND count < 5
UNION ALL
SELECT 2, AVG(value) AS va
FROM data
WHERE count > 2 AND count < 6;
```
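Since the query is plain SQL, it can be sanity-checked outside Spark. Here is a sketch using the standard-library sqlite3 module with the question's data ("count" is quoted in the DDL because it is also an aggregate-function name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data ("count" REAL, value INTEGER)')
con.executemany("INSERT INTO data VALUES (?, ?)",
                [(1.1, 1), (1.2, 2), (4.1, 3), (5.5, 4), (5.6, 5), (5.7, 6)])

# each SELECT filters independently, so the 4.1 row appears in both branches
rows = con.execute("""
    SELECT 1 AS id, AVG(value) AS va
    FROM data
    WHERE "count" > 1 AND "count" < 5
    UNION ALL
    SELECT 2, AVG(value)
    FROM data
    WHERE "count" > 2 AND "count" < 6
""").fetchall()
print(rows)  # [(1, 2.0), (2, 4.5)]
```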
You can also phrase this as:
```sql
select r.id, avg(d.value)
from data d join
     (select 1 as lo, 5 as hi, 1 as id union all
      select 2 as lo, 6 as hi, 2 as id
     ) r
     on d.count > r.lo and d.count < r.hi
group by r.id;
```
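If one row with two columns is acceptable instead of two rows, conditional aggregation also reads the table once: because AVG ignores NULLs, a CASE with no ELSE keeps only the rows inside each range. A sketch, again checked with sqlite3 (the same CASE-inside-AVG form works in Spark SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data ("count" REAL, value INTEGER)')
con.executemany("INSERT INTO data VALUES (?, ?)",
                [(1.1, 1), (1.2, 2), (4.1, 3), (5.5, 4), (5.6, 5), (5.7, 6)])

# rows outside a range yield NULL, which AVG skips
row = con.execute("""
    SELECT AVG(CASE WHEN "count" > 1 AND "count" < 5 THEN value END) AS va1,
           AVG(CASE WHEN "count" > 2 AND "count" < 6 THEN value END) AS va2
    FROM data
""").fetchone()
print(row)  # (2.0, 4.5)
```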