I would like to include null values in an Apache Spark join. By default, Spark excludes rows whose join key is null.
Here is the default Spark behavior.
val numbersDf = Seq(
  ("123"),
  ("456"),
  (null),
  ("")
).toDF("numbers")

val lettersDf = Seq(
  ("123", "abc"),
  ("456", "def"),
  (null, "zzz"),
  ("", "hhh")
).toDF("numbers", "letters")

val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))
Here is the output of joinedDf.show():
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
+-------+-------+
This is the output I would like:
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
|   null|    zzz|
+-------+-------+
Answer
Spark follows SQL semantics here: null = null does not evaluate to true, so rows with null keys are dropped from an inner join. For exactly this case, Spark provides a special NULL-safe equality operator, <=>:
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+
Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product (SPARK-11111 – Fast null-safe join).
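If you want to confirm which strategy your version picks, you can inspect the physical plan. A minimal check, reusing the DataFrames from above:

// Print the physical plan of the null-safe join; on Spark 1.6+
// it should show an equi-join (e.g. SortMergeJoin) rather than a
// CartesianProduct node.
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()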
In Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:
numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])

letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])

numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+
and %<=>% in SparkR:
numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))

head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc
With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:
SELECT * FROM numbers JOIN letters
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
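Note that the query above assumes the numbers and letters tables are visible to the SQL engine. One way to get there, sketched with the Scala DataFrames from earlier, is to register them as temporary views:

numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")

// Assumes a SparkSession named `spark` (the default in spark-shell).
spark.sql(
  "SELECT * FROM numbers JOIN letters " +
  "ON numbers.numbers IS NOT DISTINCT FROM letters.numbers"
).show()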
This can be used with the DataFrame API as well:
numbersDf.alias("numbers")
  .join(lettersDf.alias("letters"))
  .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")
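For completeness, the Scala Column API also exposes eqNullSafe, the named form of <=>, so the earlier Scala join can be written equivalently as:

// Same result as the <=> version above; drop removes the
// duplicate key column coming from the right-hand side.
numbersDf
  .join(lettersDf, numbersDf("numbers").eqNullSafe(lettersDf("numbers")))
  .drop(lettersDf("numbers"))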