A machine learning engineer is using the following code block to scale the inference of a single-node model on a Spark DataFrame with one million records:

Assuming the default Spark configuration is in place, which of the following is a benefit of using an Iterator?
Using an Iterator in the pandas_udf ensures that the model is loaded only once per executor process rather than once per batch. This avoids the overhead of repeatedly reloading the model during inference, leading to faster and more efficient predictions. The data is still distributed across multiple executors, but each executor loads the model a single time before iterating over its batches.
Reference: Databricks documentation on pandas UDFs.
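The question's original code block is not reproduced here, but the pattern the explanation describes can be sketched as follows. This is a minimal, hedged illustration: `load_model` is a hypothetical stand-in for an expensive model load (e.g. loading an MLflow model), and the toy model simply doubles its input. The key point is that the load happens once, before the loop over batches.

```python
from typing import Iterator

import pandas as pd

# Track how many times the (hypothetical) expensive model load runs.
load_count = {"n": 0}

def load_model():
    # Stand-in for an expensive load such as mlflow.pyfunc.load_model(...).
    load_count["n"] += 1
    return lambda s: s * 2  # toy "model": doubles its input

def predict(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
    # With the Iterator signature, the model is loaded ONCE per task,
    # not once per batch.
    model = load_model()
    for batch in batches:
        yield model(batch)

# On Spark this function would be wrapped and applied roughly like:
#   predict_udf = pandas_udf(predict, returnType=DoubleType())
#   df.withColumn("prediction", predict_udf(col("feature")))
# Here we call it locally on three batches to show the single load:
batches = (pd.Series([1.0, 2.0]), pd.Series([3.0]), pd.Series([4.0, 5.0]))
results = list(predict(iter(batches)))
```

Even though three batches flow through `predict`, `load_model` runs only once; on a real cluster each executor task would pay the load cost once per partition's iterator rather than per batch.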