
Google Professional Machine Learning Engineer Exam - Topic 3 Question 99 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 99
Topic #: 3

You are developing an image recognition model using PyTorch based on the ResNet50 architecture. Your code works fine on your local laptop on a small subsample. Your full dataset has 200k labeled images. You want to quickly scale your training workload while minimizing cost. You plan to use 4 V100 GPUs. What should you do?

A. Configure a Compute Engine VM with all the dependencies that launches the training. Train your model with Vertex AI using a custom tier that contains the required GPUs.
B. Package your code with Setuptools, and use a pre-built container. Train your model with Vertex AI using a custom tier that contains the required GPUs.
C. Create a Vertex AI Workbench user-managed notebooks instance with 4 V100 GPUs, and use it to train your model.
D. Create a Google Kubernetes Engine cluster with a node pool that has 4 V100 GPUs. Prepare and submit a TFJob operator to this node pool.

Suggested Answer: C

Vertex AI Workbench is a managed, JupyterLab-based notebook service that lets you attach GPUs to an instance and run your existing code with minimal changes. Because your PyTorch ResNet50 code already runs correctly on a small subsample, the quickest way to scale to the full 200k-image dataset is to create a Workbench instance with 4 V100 GPUs, move your code and data to it, and train directly on the instance using PyTorch's built-in multi-GPU support (for example, DistributedDataParallel; a minimal sketch follows the references). This avoids the extra setup the other options require: options A and B involve preparing a training application or container for Vertex AI custom training, and option D involves provisioning and operating a GKE cluster, where the TFJob operator additionally targets TensorFlow rather than PyTorch. Training directly on a right-sized Workbench instance therefore scales the workload quickly while keeping setup effort and cost low. Reference:

Introduction to Vertex AI Workbench | Google Cloud

Create a user-managed notebooks instance | Vertex AI Workbench | Google Cloud
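
To make option C concrete, the following is a minimal sketch of the kind of script you could run directly on the Workbench instance; it is illustrative rather than part of the official answer. It assumes torchvision is installed, uses a placeholder dataset path (/data/train), and is launched with one process per GPU via torchrun (torchrun --nproc_per_node=4 train.py):

# Hypothetical sketch: single-node, 4-GPU DistributedDataParallel training
# for a ResNet50 classifier. Launch with: torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision import datasets, models, transforms

def main():
    dist.init_process_group(backend="nccl")      # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = models.resnet50(num_classes=1000).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
    # "/data/train" is a placeholder path for the 200k labeled images.
    dataset = datasets.ImageFolder("/data/train", transform=transform)
    sampler = DistributedSampler(dataset)        # shards data across GPUs
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=8)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(10):
        sampler.set_epoch(epoch)                 # reshuffle shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(local_rank), labels.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

DistributedDataParallel with a DistributedSampler shards each epoch of the 200k images across the 4 GPUs, so moving from the laptop subsample to the full dataset requires no change to the model code itself.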


Contribute your Thoughts:

Beatriz
3 months ago
I thought Vertex AI was only for smaller projects, this is surprising!
upvoted 0 times
Kallie
3 months ago
I agree, C is straightforward and efficient!
upvoted 0 times
Aileen
4 months ago
Wait, isn't using Kubernetes overkill for this?
upvoted 0 times
Shanda
4 months ago
B could work too, but I think C is more user-friendly.
upvoted 0 times
Minna
4 months ago
Option C seems like the best choice for quick scaling with those GPUs.
upvoted 0 times
Hershel
4 months ago
I lean towards option D because it aligns with what we learned about container orchestration, but I need to double-check the specifics of the TFJob setup.
upvoted 0 times
Jody
5 months ago
I feel like using a Compute Engine VM could work, but I can't recall if it would be the most cost-effective compared to the other options.
upvoted 0 times
Daren
5 months ago
I think we practiced a similar question where Kubernetes was mentioned. It seems like a good way to manage resources, but I'm unsure about the TFJob operator part.
upvoted 0 times
Leonard
5 months ago
I remember we discussed using Vertex AI for scaling, but I'm not sure if a user-managed notebook is the best option for cost efficiency.
upvoted 0 times
Kenneth
5 months ago
I'm leaning towards option D. Using a GKE cluster with the required GPU node pool and submitting a TFJob operator could be a more flexible and scalable solution compared to the Vertex AI options.
upvoted 0 times
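
For a sense of what option D involves in practice, here is a hypothetical sketch that submits such a job with the official Kubernetes Python client. Note that TFJob targets TensorFlow; for PyTorch code the analogous Kubeflow resource is a PyTorchJob. The cluster, the installed training operator, the image URI, and the job name below are all assumptions for illustration:

from kubernetes import client, config

config.load_kube_config()  # load cluster credentials from the local kubeconfig

# Placeholder job spec: a single master replica requesting 4 GPUs.
pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "resnet50-train", "namespace": "default"},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": {
                "replicas": 1,
                "restartPolicy": "OnFailure",
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "pytorch",  # the operator requires this name
                            "image": "gcr.io/my-project/resnet50-train:latest",  # placeholder image
                            "resources": {"limits": {"nvidia.com/gpu": "4"}},
                        }]
                    }
                },
            }
        }
    },
}

# PyTorchJob is a custom resource, so it is created through the CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="default",
    plural="pytorchjobs",
    body=pytorch_job,
)

Even as a sketch this requires a cluster, a GPU node pool, an installed operator, and a container image, which is the extra complexity the commenters here are weighing against the managed options.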
Ardella
5 months ago
Based on the information provided, I believe option C is the best solution. Creating a Vertex AI Workbench instance with the necessary GPUs and training the model directly on that platform seems like the quickest and most cost-effective approach.
upvoted 0 times
Mona
5 months ago
Hmm, I'm a bit confused. Should I be looking at creating a custom Vertex AI tier or using the pre-built Vertex AI Workbench instance? I'm not sure which one would be the most efficient approach.
upvoted 0 times
Blair
6 months ago
This question seems straightforward. I think option B is the way to go - packaging the code and using a pre-built container to train the model on Vertex AI with the required GPUs.
upvoted 0 times
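
For comparison, option B with the Vertex AI Python SDK (google-cloud-aiplatform) looks roughly like the sketch below. The project ID, region, bucket, machine type, and pre-built container tag are placeholders rather than values from the question; check the Vertex AI documentation for the currently available PyTorch training images:

from google.cloud import aiplatform

# Placeholders: project, region, and staging bucket are illustrative.
aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

job = aiplatform.CustomTrainingJob(
    display_name="resnet50-training",
    script_path="train.py",  # the existing PyTorch training script
    # Illustrative pre-built PyTorch GPU training image; check the docs
    # for the tags currently available.
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-13:latest",
)

job.run(
    machine_type="n1-standard-16",        # N1 machines support attached V100s
    accelerator_type="NVIDIA_TESLA_V100",
    accelerator_count=4,
)

With replica_count left at its default of 1, this runs a single worker with 4 V100 GPUs attached, matching the plan in the question.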
Norah
11 months ago
Wait, I can use my refrigerator as a GPU? *scratches head* Nah, that's probably not a good idea. Time to get serious and go with Option B. Gotta package that code up and let Vertex AI handle the heavy lifting!
upvoted 0 times
Hyun
10 months ago
That's right, Option B is the best choice for scaling your training workload efficiently.
upvoted 0 times
Nicholle
10 months ago
Option B sounds like the way to go. Package your code and let Vertex AI do the heavy lifting.
upvoted 0 times
Ronny
10 months ago
Yeah, using your refrigerator as a GPU is definitely not a good idea.
upvoted 0 times
Rhea
11 months ago
Hmm, I'm not sure I trust any of these options. I think I'll just train my model on my local laptop and hope it scales eventually. Who needs fancy cloud infrastructure anyway? *laughs nervously*
upvoted 0 times
Carmelina
10 months ago
Training on your local laptop may work for small datasets, but for 200k labeled images, utilizing cloud resources like V100 GPUs will definitely improve your model's performance and efficiency.
upvoted 0 times
Eva
10 months ago
Using cloud infrastructure like Vertex AI can greatly speed up your training process and save you time in the long run. It's worth considering!
upvoted 0 times
Jessenia
11 months ago
Option A seems like the best choice for scaling your training workload with minimal cost. You should configure a Compute Engine VM with the necessary dependencies and use Vertex AI with a custom tier containing the required GPUs.
upvoted 0 times
Lashandra
11 months ago
Option D is intriguing, but it sounds a bit more complex than I'd like to deal with. Setting up a GKE cluster and submitting a TFJob operator seems like a lot of work. I'm more interested in a simpler, managed solution like Vertex AI.
upvoted 0 times
Ernestine
11 months ago
I'm leaning towards Option C. Creating a Vertex AI Workbench instance with the required GPUs sounds like a great way to get my model training up and running quickly. Plus, I don't have to worry about managing the infrastructure myself.
upvoted 0 times
Theodora
9 months ago
Absolutely, Vertex AI Workbench provides a hassle-free way to scale your training workload efficiently.
upvoted 0 times
Melvin
10 months ago
It's definitely a convenient option. Plus, you can focus more on optimizing your model rather than setting up the environment.
upvoted 0 times
Bev
10 months ago
I agree, managing the infrastructure myself can be time-consuming. Vertex AI Workbench takes care of that.
upvoted 0 times
Asha
10 months ago
Option C seems like a good choice. Using Vertex AI Workbench with 4 V100 GPUs can speed up training.
upvoted 0 times
Lauran
11 months ago
Option A seems like the easiest way to scale my training workload. I can simply configure a Compute Engine VM with the necessary dependencies and use Vertex AI to train my model. Definitely the fastest and most cost-effective solution.
upvoted 0 times
Arlette
10 months ago
A) Configure a Compute Engine VM with all the dependencies that launches the training. Train your model with Vertex AI using a custom tier that contains the required GPUs.
upvoted 0 times
Jesusita
11 months ago
I'm not sure, but I think option D could also be a valid choice. Creating a GKE cluster with a node pool that has 4 V100 GPUs might be a good solution too.
upvoted 0 times
Paul
12 months ago
I agree with Robt. Option A seems like the most efficient way to scale the training workload while minimizing cost.
upvoted 0 times
Robt
12 months ago
I think the correct answer is A. It involves configuring a Compute Engine VM with the necessary dependencies and using Vertex AI with the required GPUs.
upvoted 0 times
