The code block shown below should display information about the data type of column storeId of DataFrame transactionsDf. Choose the answer that correctly fills the blanks in the code block to accomplish this.
Code block:
transactionsDf.__1__(__2__).__3__
Correct code block:
transactionsDf.select('storeId').printSchema()
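For illustration, here is a minimal, self-contained sketch of the correct code block in action. The contents of transactionsDf below are made up for demonstration; only the column name storeId is taken from the question.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical stand-in for the question's DataFrame
transactionsDf = spark.createDataFrame(
    [(1, 25), (2, 25), (3, 3)],
    schema=["transactionId", "storeId"],
)

# Prints the schema of just the selected column, for example:
# root
#  |-- storeId: long (nullable = true)
transactionsDf.select("storeId").printSchema()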
The difficulty of this question is that it is hard to solve with the stepwise first-to-last-gap approach that has worked well for similar questions, since the answer options are so different from one another. Instead, you might want to eliminate answers by looking for patterns of frequently wrong answers.
A first pattern that you may recognize by now is that, in wrong answers, column names are often not expressed in quotes. For this reason, the answer that includes storeId without quotes should be eliminated.
By now, you may have understood that DataFrame.limit() is useful for returning a specified number of rows. It has nothing to do with specific columns. For this reason, the answer that resolves to limit('storeId') can be eliminated.
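To see why, consider this short sketch, reusing the hypothetical transactionsDf defined above. limit() takes an integer row count, so passing a column name such as 'storeId' is simply invalid.

# Returns a new DataFrame restricted to at most 1 row; all columns are
# kept, and no type information is exposed.
transactionsDf.limit(1).show()
# +-------------+-------+
# |transactionId|storeId|
# +-------------+-------+
# |            1|     25|
# +-------------+-------+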
Given that we are interested in information about the data type, you should question whether the answer that resolves to limit(1).columns provides this information. While DataFrame.columns is a valid attribute, it only reports back column names, not column types. So, you can eliminate this option.
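A quick sketch makes this concrete, again using the hypothetical transactionsDf from above:

# DataFrame.columns returns only the column names as plain strings;
# no type information is included.
print(transactionsDf.limit(1).columns)
# ['transactionId', 'storeId']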
The two remaining options use either the printSchema() or the print_schema() command. You may remember that DataFrame.printSchema() is the only valid command of the two. The select('storeId') part just returns the storeId column of transactionsDf, which works here, since we are only interested in that column's type anyway.
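If you want to verify this yourself, the following sketch (using the hypothetical transactionsDf from above) checks both method names and also shows DataFrame.dtypes as an alternative, programmatic way to read off a column's type:

# printSchema() is a real DataFrame method; print_schema() is not.
print(hasattr(transactionsDf, "printSchema"))   # True
print(hasattr(transactionsDf, "print_schema"))  # False

# DataFrame.dtypes returns (name, type) pairs instead of printed output.
print(transactionsDf.select("storeId").dtypes)  # [('storeId', 'bigint')]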
More info: pyspark.sql.DataFrame.printSchema --- PySpark 3.1.2 documentation
Static notebook | Dynamic notebook: See test 3, Question 57 (Databricks import instructions)