Google Professional Machine Learning Engineer Exam - Topic 6 Question 79 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 79
Topic #: 6

You have recently developed a new ML model in a Jupyter notebook. You want to establish a reliable and repeatable model training process that tracks the versions and lineage of your model artifacts. You plan to retrain your model weekly. How should you operationalize your training process?

Suggested Answer: C

The best way to operationalize this training process is Vertex AI Pipelines, which lets you build scalable, portable, and reproducible workflows for your ML models. Vertex AI Pipelines integrates with Vertex ML Metadata, which tracks the provenance, lineage, and artifacts of each pipeline run. A CustomTrainingJobOp component trains the model using the same code as in your Jupyter notebook, and a ModelUploadOp component uploads the trained model to Vertex AI Model Registry, which manages your model versions and endpoints. Cloud Scheduler and Cloud Functions can then trigger the Vertex AI pipeline to run weekly, as planned.

Reference:

Vertex AI Pipelines documentation

Vertex ML Metadata documentation

Vertex AI CustomTrainingJobOp documentation

ModelUploadOp documentation

Cloud Scheduler documentation

Cloud Functions documentation


Contribute your Thoughts:

Leota
3 months ago
Not sure about D, hyperparameter tuning every week sounds excessive.
upvoted 0 times
...
Millie
3 months ago
C looks good too, especially with the managed pipeline!
upvoted 0 times
...
Dierdre
3 months ago
Wait, what's the difference between CustomTrainingJob and CustomJob?
upvoted 0 times
...
Barb
4 months ago
I agree, B covers all the bases!
upvoted 0 times
...
Catarina
4 months ago
Option B seems solid for tracking model artifacts.
upvoted 0 times
...
Mose
4 months ago
I remember something about hyperparameter tuning, but I'm not confident if option D is necessary for just retraining weekly.
upvoted 0 times
...
Adell
4 months ago
I practiced a similar question where we had to set up a pipeline, and I think option C is the right approach since it mentions managed pipelines and scheduling.
upvoted 0 times
...
Ressie
4 months ago
I'm not entirely sure, but I feel like using the CustomTrainingJob class in option A might be too simplistic for what we need.
upvoted 0 times
...
Veronika
5 months ago
I think option B sounds familiar because it mentions the Metadata API, which I remember is important for tracking model artifacts.
upvoted 0 times
...
Roxane
5 months ago
I'm not too familiar with Vertex AI Pipelines, but option C seems to hit all the right points. I'll need to do some research on how to set up the pipeline and integrate it with the other Vertex AI services.
upvoted 0 times
...
Gilma
5 months ago
Option C sounds like the way to go. Managed pipelines, model registry, and scheduled execution - that should give me a reliable and repeatable model training process. I feel confident I can implement this solution.
upvoted 0 times
...
Katy
5 months ago
Hmm, I'm a bit confused about the difference between the CustomTrainingJob and CustomJob classes. I'll need to review the Vertex AI SDK documentation to make sure I understand the right approach.
upvoted 0 times
...
Sherell
5 months ago
This question seems pretty straightforward. I think I'll go with option C - it looks like the most comprehensive solution that covers all the key requirements.
upvoted 0 times
...
Danica
2 years ago
Seriously, who comes up with these names? 'CustomTrainingJob', 'CustomJob', 'HyperParameterTuningJobRunOp' - it's like a game of ML-themed Mad Libs!
upvoted 0 times
Jenise
2 years ago
C) Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI CustomTrainingJob component. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry. Use Cloud Scheduler and Cloud Functions to run the Vertex AI pipeline weekly.
upvoted 0 times
...
Gail
2 years ago
B) Create an instance of the CustomJob class with the Vertex AI SDK to train your model. Use the Metadata API to register your model as a model artifact. Using the Notebooks API, create a scheduled execution to run the training code weekly.
upvoted 0 times
...
Glendora
2 years ago
A) Create an instance of the CustomTrainingJob class with the Vertex AI SDK to train your model. Using the Notebooks API, create a scheduled execution to run the training code weekly.
upvoted 0 times
...
...
Francisca
2 years ago
I agree with Merissa, using the Vertex AI SDK and Notebooks API seems like a reliable approach.
upvoted 0 times
...
Tamera
2 years ago
Hmm, I'm not sure I understand the difference between the CustomTrainingJob and CustomJob classes in Vertex AI. Option A and B seem similar, but C looks more comprehensive.
upvoted 0 times
...
Cyril
2 years ago
I'll go with option C. It has all the bells and whistles, like the Model Registry and weekly scheduling. Plus, it's got a cool name - 'Vertex AI Pipelines'. It's like a superhero team for your ML model!
upvoted 0 times
...
Lindsey
2 years ago
I'm torn between options B and C. Both seem to address the key requirements, but C seems to provide a more managed and scalable solution with the Vertex AI Pipelines.
upvoted 0 times
Lashawnda
2 years ago
You should go with Option C for a more scalable solution. It seems to align better with your requirements.
upvoted 0 times
...
Vivienne
2 years ago
I think Option C might be more efficient. It involves using Vertex AI Pipelines for a managed training process.
upvoted 0 times
...
Vanda
2 years ago
Option B could be a good choice for you. It covers registering your model as an artifact and scheduling weekly training.
upvoted 0 times
...
...
Miriam
2 years ago
Option C looks the most comprehensive and efficient approach to operationalize the training process. Integrating Vertex AI Pipelines, Model Registry, and Cloud Scheduler/Functions seems like the right way to go.
upvoted 0 times
Chauncey
2 years ago
I agree, using Vertex AI Pipelines, Model Registry, and Cloud Scheduler/Functions seems like a solid plan.
upvoted 0 times
...
Christene
2 years ago
I think Option C is the best choice for operationalizing the training process.
upvoted 0 times
...
Annmarie
2 years ago
Yes, using Vertex AI Pipelines, Model Registry, and Cloud Scheduler/Functions together will definitely streamline the training process.
upvoted 0 times
...
Dawne
2 years ago
I agree, option C seems like the best choice for setting up a reliable and repeatable model training process.
upvoted 0 times
...
...
Merissa
2 years ago
I think option A sounds like a good way to operationalize the training process.
upvoted 0 times
...
