You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.show()
Does this meet the goal?
No. The df.show() method does not meet the goal: it only prints the contents of the DataFrame to the console and does not compute any statistical functions. To calculate min, max, mean, and standard deviation across the string and numeric columns, use df.summary() instead. Reference: the show() and summary() functions are documented in the PySpark API documentation.