
Google Exam Professional-Machine-Learning-Engineer Topic 4 Question 73 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 73
Topic #: 4

You have a custom job that runs on Vertex AI on a weekly basis. The job is implemented using a proprietary ML workflow that produces datasets, models, and custom artifacts, and sends them to a Cloud Storage bucket. Many different versions of the datasets and models have been created. Due to compliance requirements, your company needs to track which model was used for making a particular prediction, and needs access to the artifacts for each model. How should you configure your workflows to meet these requirements?

Suggested Answer: D

Contribute your Thoughts:

Twana
4 days ago
Hmm, I'm not sure about option B. Relying on autologging in Vertex AI may not give us enough control over the metadata. And a separate TFX metadata database (option A) sounds like overkill for this use case.
upvoted 0 times
Eladia
5 days ago
Option D also sounds promising - registering the models in the Vertex AI Model Registry and using labels could be a simple way to manage the versioning and provenance. But I'm not sure how robust that would be for a complex workflow.
upvoted 0 times
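For concreteness, the option D approach Eladia describes could be sketched with the `google-cloud-aiplatform` SDK. This is a minimal sketch, not a definitive implementation: the bucket name, model display name, and serving image below are hypothetical placeholders.

```python
# Sketch of option D: register each weekly model in the Vertex AI Model
# Registry, with labels recording which dataset and workflow run produced it.
# Bucket name, display name, and serving image are hypothetical placeholders.
try:
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform
except ImportError:  # keep the sketch importable without the SDK installed
    aiplatform = None


def provenance_labels(dataset_version: str, run_id: str) -> dict:
    """Build labels linking a model version to its inputs.

    Vertex AI label keys and values must be lowercase and at most
    63 characters, so normalize defensively.
    """
    return {
        "dataset-version": dataset_version.lower()[:63],
        "workflow-run": run_id.lower()[:63],
    }


def register_model(run_id: str, dataset_version: str, parent_model=None):
    """Upload this run's model with provenance labels.

    Passing parent_model (an existing model's resource name) registers the
    upload as a new version of that model instead of a brand-new model,
    which is what keeps the version history in one place.
    """
    return aiplatform.Model.upload(
        display_name="weekly-workflow-model",            # hypothetical name
        artifact_uri=f"gs://my-bucket/models/{run_id}",  # hypothetical bucket
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
        labels=provenance_labels(dataset_version, run_id),
        parent_model=parent_model,
    )
```

With the labels in place, filtering models by `workflow-run` or `dataset-version` in the Model Registry answers the "which model made this prediction" question without a separate metadata store.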
Honey
6 days ago
I'm leaning towards option C. Using the Vertex AI Metadata API seems like the most direct way to link the models, datasets, and artifacts together. Plus, we can create custom context and execution details to meet the compliance needs.
upvoted 0 times
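The option C approach Honey mentions, linking models, datasets, and artifacts through the Metadata API, could be sketched as follows with the Vertex ML Metadata support in the `google-cloud-aiplatform` SDK. The bucket and run identifiers are hypothetical placeholders, and this assumes `aiplatform.init()` has already been called with a project and location.

```python
# Sketch of option C: use Vertex ML Metadata to link each run's dataset and
# model artifacts through a training execution, so lineage is queryable later.
# Bucket and run identifiers are hypothetical placeholders.
try:
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform
except ImportError:  # keep the sketch importable without the SDK installed
    aiplatform = None


def artifact_uris(bucket: str, run_id: str) -> dict:
    """Derive the Cloud Storage locations this workflow writes to."""
    return {
        "dataset": f"gs://{bucket}/datasets/{run_id}",
        "model": f"gs://{bucket}/models/{run_id}",
    }


def record_lineage(bucket: str, run_id: str):
    """Create Dataset and Model artifacts and tie them to one execution."""
    uris = artifact_uris(bucket, run_id)
    dataset = aiplatform.Artifact.create(
        schema_title="system.Dataset",
        uri=uris["dataset"],
        display_name=f"dataset-{run_id}",
    )
    model = aiplatform.Artifact.create(
        schema_title="system.Model",
        uri=uris["model"],
        display_name=f"model-{run_id}",
    )
    # The execution records that this training run consumed the dataset
    # artifact and produced the model artifact.
    with aiplatform.start_execution(
        schema_title="system.ContainerExecution",
        display_name=f"weekly-training-{run_id}",
    ) as execution:
        execution.assign_input_artifacts([dataset])
        execution.assign_output_artifacts([model])
    return execution
```

Because the artifacts carry the Cloud Storage URIs, walking the lineage graph from a deployed model back to its execution and input dataset also recovers the exact files an auditor would need.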
Chaya
7 days ago
Whoa, this question looks like a real brain-teaser! We definitely need to track the models and artifacts for compliance, but it's not clear which option is the best approach.
upvoted 0 times
