You have recently developed a new ML model in a Jupyter notebook. You want to establish a reliable and repeatable model training process that tracks the versions and lineage of your model artifacts. You plan to retrain your model weekly. How should you operationalize your training process?
The best way to operationalize your training process is to use Vertex AI Pipelines, which lets you build scalable, portable, and reproducible workflows for your ML models. Vertex AI Pipelines integrates with Vertex ML Metadata, which tracks the provenance, lineage, and artifacts of each pipeline run. With a CustomTrainingJobOp component, you can train your model using the same code you wrote in your Jupyter notebook, packaged as a training job. With a ModelUploadOp component, you can upload the trained model to Vertex AI Model Registry, which manages model versions and their deployments. Finally, with Cloud Scheduler and Cloud Functions, you can trigger the pipeline to run weekly, as planned. A minimal code sketch follows the references below.
Reference:
Vertex AI Pipelines documentation
Vertex AI Metadata documentation
Vertex AI CustomTrainingJobOp documentation
Cloud Functions documentation
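The sketch below illustrates one way this could look, assuming KFP v2 and the google-cloud-pipeline-components library. The project ID, region, bucket paths, container image URIs, machine spec, and display names are placeholders, and the exact component parameters may differ between library versions.

```python
# Minimal sketch of a weekly training pipeline (placeholder values throughout).
from kfp import dsl, compiler
from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from google_cloud_pipeline_components.types import artifact_types

PROJECT_ID = "my-project"                        # placeholder
REGION = "us-central1"                           # placeholder
PIPELINE_ROOT = "gs://my-bucket/pipeline-root"   # placeholder
MODEL_DIR = "gs://my-bucket/model"               # placeholder

@dsl.pipeline(name="weekly-training-pipeline", pipeline_root=PIPELINE_ROOT)
def training_pipeline():
    # Run the notebook's training code, packaged in a custom container.
    train_task = CustomTrainingJobOp(
        project=PROJECT_ID,
        location=REGION,
        display_name="weekly-training",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {
                "image_uri": "us-docker.pkg.dev/my-project/training/trainer:latest",  # placeholder
                "args": ["--model-dir", MODEL_DIR],
            },
        }],
    )

    # Point an UnmanagedContainerModel artifact at the exported model files.
    unmanaged_model = dsl.importer(
        artifact_uri=MODEL_DIR,
        artifact_class=artifact_types.UnmanagedContainerModel,
        metadata={
            "containerSpec": {
                # placeholder prebuilt serving image
                "imageUri": "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
            }
        },
    ).after(train_task)

    # Register the trained model in Vertex AI Model Registry.
    ModelUploadOp(
        project=PROJECT_ID,
        location=REGION,
        display_name="weekly-model",
        unmanaged_container_model=unmanaged_model.output,
    )

compiler.Compiler().compile(training_pipeline, "training_pipeline.json")
```

For the weekly trigger, a hypothetical Cloud Function invoked by a Cloud Scheduler job could submit the compiled pipeline with the Vertex AI SDK; again, names and paths below are assumptions, not part of the original answer.

```python
# Hypothetical HTTP-triggered Cloud Function, called weekly by Cloud Scheduler.
from google.cloud import aiplatform

def trigger_pipeline(request):
    aiplatform.init(project="my-project", location="us-central1")  # placeholders
    job = aiplatform.PipelineJob(
        display_name="weekly-training-run",
        template_path="gs://my-bucket/pipelines/training_pipeline.json",  # placeholder
        pipeline_root="gs://my-bucket/pipeline-root",                     # placeholder
        enable_caching=False,  # retrain on fresh data each week
    )
    job.submit()  # returns immediately; the run executes in Vertex AI Pipelines
    return "Pipeline submitted", 200
```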