You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.show()
Does this meet the goal?
No, df.show() does not meet the goal. It prints the rows of the DataFrame to the console and returns nothing; it does not compute any statistics. To calculate the min, max, mean, and standard deviation for the string and numeric columns, use df.summary() instead (for example, df.summary("min", "max", "mean", "stddev")). Reference: the show() and summary() methods are documented in the PySpark DataFrame API documentation.