You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.explain()
Does this meet the goal?