You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.show()
Does this meet the goal?
No. The df.show() method does not meet the goal. It only displays the contents of the DataFrame; it does not compute statistics such as min, max, mean, or standard deviation. Reference: the usage of the show() function is documented in the PySpark API documentation.