The code block displayed below contains multiple errors. The code block should return a DataFrame that contains only columns transactionId, predError, value and storeId of DataFrame
transactionsDf. Find the errors.
Code block:
transactionsDf.select([col(productId), col(f)])
Sample of transactionsDf:
+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            1|        3|    4|     25|        1|null|
|            2|        6|    7|      2|        2|null|
|            3|        3| null|     25|        3|null|
+-------------+---------+-----+-------+---------+----+
Correct code block: transactionsDf.drop('productId', 'f')
This question requires a lot of thinking to get right. For solving it, you may take advantage of the digital notepad that is provided to you during the test. You have probably noticed that the code block includes multiple errors. In the test, you are usually confronted with a code block that contains only a single error. However, since you are practicing here, this challenging multi-error question will make it easier for you to deal with single-error questions in the real exam.
The select operator should be replaced by a drop operator, the column names should be listed directly as arguments to the operator and not as a list, and all column names should be expressed as
strings without being wrapped in a col() operator.
Correct! Here, you need to figure out the many things that are wrong with the initial code block. While the question could also be solved with a select statement, a drop statement is the correct choice given the answer options. The documentation shows that drop does not take a list as an argument, but rather the individual column names that should be dropped. Finally, the column names should be expressed as strings, not as Python variable names as in the original code block.
The column names should be listed directly as arguments to the operator and not as a list.
Incorrect. While this is a good first step and part of the correct solution (see above), this modification is insufficient to solve the question.
The column names should be listed directly as arguments to the operator and not as a list and following the pattern of how column names are expressed in the code block, columns productId and f
should be replaced by transactionId, predError, value and storeId.
Wrong. If you use the same pattern as in the original code block (col(productId), col(f)), you are still making a mistake. col(productId) will trigger Python to search for the content of a variable named
productId instead of telling Spark to use the column productId - for that, you need to express it as a string.
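A minimal pure-Python sketch of this pitfall, using a hypothetical stand-in for pyspark.sql.functions.col rather than a running Spark session:

```python
def col(name):
    """Stand-in for pyspark.sql.functions.col, which expects a column name as a string."""
    return f"Column<'{name}'>"

# col(productId) fails before Spark is ever involved: Python tries to look up
# a variable named productId, which does not exist.
try:
    col(productId)  # noqa: F821
except NameError as exc:
    print(type(exc).__name__)  # NameError

# Passing the column name as a string works.
print(col("productId"))
```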
The select operator should be replaced by a drop operator, the column names should be listed directly as arguments to the operator and not as a list, and all col() operators should be removed.
No. This still leaves you with Python trying to interpret the column names as Python variables (see above).
The select operator should be replaced by a drop operator.
Wrong, this is not enough to solve the question. If you do this, you will still face problems since you are passing a Python list to drop and the column names are still interpreted as Python variables
(see above).
More info: pyspark.sql.DataFrame.drop --- PySpark 3.1.2 documentation
Static notebook | Dynamic notebook: See test 3, Question 30 (Databricks import instructions)