You use the Azure Machine Learning designer to create and run a training pipeline.
The pipeline must run every night to generate predictions from a large volume of files. The folder where the files will be stored is defined as a dataset.
You need to publish the pipeline as a REST service that can be used for the nightly inferencing run.
What should you do?
Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It is optimized for high-throughput, fire-and-forget inference over large collections of data.
You can submit a batch inference job as a pipeline run, or through REST calls against a published pipeline endpoint.
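As a sketch of the REST route, the snippet below builds the request parts for triggering a published Azure ML pipeline. The endpoint URL, token, and experiment name are placeholders for illustration, not real values; the actual endpoint comes from the `endpoint` property of the `PublishedPipeline` object after you publish.

```python
# Hypothetical sketch: trigger a published Azure ML pipeline via its REST
# endpoint. A published pipeline exposes a POST endpoint that starts a run
# when called with an Azure AD bearer token and an experiment name.

# Placeholder: in practice, read this from published_pipeline.endpoint
REST_ENDPOINT = "https://<region>.api.azureml.ms/pipelines/v1.0/<pipeline-id>"

def build_trigger_request(aad_token: str, experiment_name: str) -> dict:
    """Assemble headers and JSON body for the published-pipeline POST call."""
    headers = {
        "Authorization": f"Bearer {aad_token}",  # AAD token placeholder
        "Content-Type": "application/json",
    }
    body = {"ExperimentName": experiment_name}
    return {"headers": headers, "json": body}

req = build_trigger_request("<token>", "nightly-batch-inference")
# A scheduler (e.g. Azure Data Factory or a cron job) would then issue:
#   requests.post(REST_ENDPOINT, **req)
# and the run ID is returned in the response JSON.
```

Because the endpoint accepts a plain HTTP POST, any nightly scheduler can invoke it without the Azure ML SDK installed.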