
Microsoft Exam DP-600 Topic 3 Question 11 Discussion

Actual exam question for Microsoft's DP-600 exam
Question #: 11
Topic #: 3
[All DP-600 Questions]

You have a Fabric tenant that contains a new semantic model in OneLake.

You use a Fabric notebook to read the data into a Spark DataFrame.

You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.

Solution: You use the following PySpark expression:

df.explain()

Does this meet the goal?

Suggested Answer: B

The df.explain() method does not meet the goal. It displays the physical plan that Spark will use to execute the DataFrame query; it does not compute any statistics. To calculate the min, max, mean, and standard deviation for the string and numeric columns, you would use a method such as df.describe() or df.summary(). Reference: the explain() function in the PySpark DataFrame API documentation.


Contribute your Thoughts:

Rima
3 months ago
Makes sense. So, B it is. Glad to clear that up!
upvoted 0 times
...
Tamar
3 months ago
Exactly, you need to use functions like describe() for those statistics.
upvoted 0 times
...
Bulah
3 months ago
Yeah, I was confused about that. It doesn't calculate min, max or any of that.
upvoted 0 times
...
Marget
3 months ago
B for no, right? Because explain() just gives the logical plan, not the stats.
upvoted 0 times
...
Penney
4 months ago
Oh, the one with df.explain()? I think the answer is B.
upvoted 0 times
...
Bulah
4 months ago
I just got to the question about the PySpark expression in the exam.
upvoted 0 times
...
