
Google Exam Professional Machine Learning Engineer Topic 10 Question 58 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 58
Topic #: 10

You have been asked to productionize a proof-of-concept ML model built using Keras. The model was trained in a Jupyter notebook on a data scientist's local machine. The notebook contains a cell that performs data validation and a cell that performs model analysis. You need to orchestrate the steps contained in the notebook and automate the execution of these steps for weekly retraining. You expect much more training data in the future. You want your solution to take advantage of managed services while minimizing cost. What should you do?

Suggested Answer: D

Contribute your Thoughts:

Dorothy
14 days ago
I hope the exam doesn't ask us to 'productionize' a model using a typewriter and a fax machine. That would be a real challenge!
upvoted 0 times
Tracey
15 days ago
If this were a recipe, I'd say 'add a dash of TFX and a sprinkle of Vertex AI Pipelines for the perfect ML automation solution'.
upvoted 0 times
Sanda
17 days ago
Option A sounds like the easiest solution, but I'm worried about the cost of running a Notebooks instance on a large machine type. Managed services could be more cost-effective in the long run.
upvoted 0 times
Kati
28 days ago
I'm not sure about option C. Rewriting the code as a Spark job seems like overkill for this use case and would add unnecessary complexity.
upvoted 0 times
Nickolas
19 days ago
A) Move the Jupyter notebook to a Notebooks instance on the largest N2 machine type, and schedule the execution of the steps in the Notebooks instance using Cloud Scheduler.
upvoted 0 times
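For context, here is a minimal sketch of the scheduling half of option A, assuming a Cloud Function behind a hypothetical URL that kicks off the notebook run. Cloud Scheduler only fires the trigger; the notebook itself still executes on the single large Notebooks instance, which is the cost concern raised elsewhere in this thread.

```python
# Sketch only: creates a weekly Cloud Scheduler job that POSTs to an HTTP
# endpoint (hypothetical Cloud Function) responsible for running the notebook.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
# Project and region are placeholders.
parent = client.common_location_path("my-project", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/weekly-notebook-run",
    schedule="0 3 * * 1",  # every Monday at 03:00
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri="https://us-central1-my-project.cloudfunctions.net/run-notebook",  # hypothetical
        http_method=scheduler_v1.HttpMethod.POST,
    ),
)
client.create_job(parent=parent, job=job)
```

Note that all compute stays on one VM sized for the peak workload, which scales poorly as the training data grows.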
Goldie
1 month ago
Option D looks interesting too. Airflow could provide a flexible way to orchestrate the different steps, and it's a popular tool in the industry.
upvoted 0 times
Afton
9 days ago
D) Extract the steps contained in the Jupyter notebook as Python scripts, wrap each script in an Apache Airflow BashOperator, and run the resulting directed acyclic graph (DAG) in Cloud Composer.
upvoted 0 times
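For reference, a minimal sketch of what option D could look like once the notebook cells are extracted into standalone scripts (data_validation.py, train.py, and model_analysis.py are hypothetical names) staged in the Cloud Composer DAGs bucket:

```python
# Sketch only: each extracted script is wrapped in a BashOperator and the
# DAG runs weekly to match the retraining cadence in the question.
import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="weekly_retraining",
    schedule_interval="@weekly",
    start_date=datetime.datetime(2024, 1, 1),  # arbitrary start date
    catchup=False,
) as dag:
    validate = BashOperator(
        task_id="data_validation",
        bash_command="python /home/airflow/gcs/dags/scripts/data_validation.py",
    )
    train = BashOperator(
        task_id="train_model",
        bash_command="python /home/airflow/gcs/dags/scripts/train.py",
    )
    analyze = BashOperator(
        task_id="model_analysis",
        bash_command="python /home/airflow/gcs/dags/scripts/model_analysis.py",
    )

    # Preserve the order of the original notebook cells.
    validate >> train >> analyze
```

Keep in mind that a Cloud Composer environment bills continuously whether or not a DAG is running, which matters for the question's "minimizing cost" requirement.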
Donte
21 days ago
B) Write the code as a TensorFlow Extended (TFX) pipeline orchestrated with Vertex AI Pipelines. Use standard TFX components for data validation and model analysis, and use Vertex AI Pipelines for model retraining.
upvoted 0 times
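For reference, a minimal sketch of what option B could look like, assuming TFX v1 and CSV input data; the pipeline name, GCS paths, and trainer module file are hypothetical. The standard ExampleValidator component stands in for the notebook's data-validation cell and Evaluator for its model-analysis cell:

```python
# Sketch only: a TFX pipeline compiled into a spec Vertex AI Pipelines can run.
from tfx import v1 as tfx

PIPELINE_NAME = "weekly-retraining"                  # hypothetical
PIPELINE_ROOT = "gs://my-bucket/pipeline_root"       # hypothetical
DATA_ROOT = "gs://my-bucket/data"                    # hypothetical
MODULE_FILE = "gs://my-bucket/modules/trainer.py"    # hypothetical Keras trainer module

example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)
statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs["examples"])
schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs["statistics"])
# Data validation (replaces the notebook's validation cell).
example_validator = tfx.components.ExampleValidator(
    statistics=statistics_gen.outputs["statistics"],
    schema=schema_gen.outputs["schema"])
trainer = tfx.components.Trainer(
    module_file=MODULE_FILE,
    examples=example_gen.outputs["examples"],
    schema=schema_gen.outputs["schema"],
    train_args=tfx.proto.TrainArgs(num_steps=1000),
    eval_args=tfx.proto.EvalArgs(num_steps=100))
# Model analysis (replaces the notebook's analysis cell).
evaluator = tfx.components.Evaluator(
    examples=example_gen.outputs["examples"],
    model=trainer.outputs["model"])

pipeline = tfx.dsl.Pipeline(
    pipeline_name=PIPELINE_NAME,
    pipeline_root=PIPELINE_ROOT,
    components=[example_gen, statistics_gen, schema_gen,
                example_validator, trainer, evaluator])

# Compile to a JSON spec that Vertex AI Pipelines can execute.
tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename=f"{PIPELINE_NAME}.json",
).run(pipeline)
```

The compiled spec can then be submitted to Vertex AI Pipelines (for example as a PipelineJob via the google-cloud-aiplatform SDK) and triggered on a weekly schedule, so you pay only per pipeline run rather than for an always-on environment.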
Nichelle
2 months ago
I'm not sure about option B. I think option D could also work well by using Apache Airflow to orchestrate the steps in the Python scripts.
upvoted 0 times
Antonette
2 months ago
I agree with Jade. Option B seems like the most scalable and cost-effective solution for productionizing the ML model.
upvoted 0 times
Jade
2 months ago
I think option B is the best choice because using a TFX pipeline with Vertex AI Pipelines will help automate the steps and handle the increasing amount of training data efficiently.
upvoted 0 times
Wilford
2 months ago
I'm leaning towards option B. Using TFX and Vertex AI Pipelines seems like a good way to take advantage of managed services and scale the solution as the data grows.
upvoted 0 times
Glen
1 month ago
I agree. Using TFX with a managed service like Vertex AI Pipelines will definitely help with scalability and cost-effectiveness in the long run.
upvoted 0 times
Floyd
1 month ago
Option B sounds like a solid choice. TFX and Vertex AI Pipelines can handle the increased data and automate the process efficiently.
upvoted 0 times
