You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.explain()
Does this meet the goal?