You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.explain()
Does this meet the goal?