The code block shown below should return a two-column DataFrame with columns transactionId and supplier, with combined information from DataFrames itemsDf and transactionsDf. The code
block should merge rows in which column productId of DataFrame transactionsDf matches the value of column itemId in DataFrame itemsDf, but only where column storeId of DataFrame
transactionsDf does not match column itemId of DataFrame itemsDf. Choose the answer that correctly fills the blanks in the code block to accomplish this.
Code block:
transactionsDf.__1__(itemsDf, __2__).__3__(__4__)
This question is pretty complex and, in its complexity, is probably above what you would encounter in the exam. However, by reading the question carefully, you can use your logic skills to weed out the wrong answers.
First, examine the join statement, which is common to all answers. The first argument of the join() operator (documentation linked below) is the DataFrame to be joined with. If join were in gap 3, gap 4 would therefore have to begin with another DataFrame. This is not the case for any of the answers that put join in the third gap, so you can immediately discard two answers.
In all remaining answers, join is in gap 1, followed by (itemsDf, as the code block shows. Given how the join() operator is called, three candidates remain.
Looking further at the join() statement, the second argument (on=) expects 'a string for the join column name, a list of column names, a join expression (Column), or a list of Columns', according to the documentation. One answer option passes two join expressions as separate arguments (transactionsDf.productId==itemsDf.itemId, transactionsDf.storeId!=itemsDf.itemId), a form that the documentation does not support, so we can discard that answer, leaving two remaining candidates.
Both candidates have valid syntax, but only one of them fulfills the condition in the question: 'only where column storeId of DataFrame transactionsDf does not match column itemId of DataFrame itemsDf'. So this one remaining answer option has to be the correct one!
As you can see, although sometimes overwhelming at first, even more complex questions can be figured out by rigorously applying the knowledge you can gain from the documentation during the
exam.
More info: pyspark.sql.DataFrame.join --- PySpark 3.1.2 documentation
Static notebook | Dynamic notebook: See test 3, Question 47 (Databricks import instructions)