
Google Exam Professional-Machine-Learning-Engineer Topic 6 Question 71 Discussion

Actual exam question for Google's Google Professional Machine Learning Engineer exam
Question #: 71
Topic #: 6
[All Google Professional Machine Learning Engineer Questions]

You recently deployed a model to a Vertex AI endpoint and set up online serving in Vertex AI Feature Store. You have configured a daily batch ingestion job to update your featurestore. During the batch ingestion jobs, you discover that CPU utilization is high in your featurestore's online serving nodes and that feature retrieval latency is high. You need to improve online serving performance during the daily batch ingestion. What should you do?

Suggested Answer: B

Vertex AI Feature Store provides two options for online serving: Bigtable online serving and optimized online serving. Both options support autoscaling, which means the number of online serving nodes adjusts automatically to traffic demand. By enabling autoscaling, you allow the featurestore to absorb the extra load from the daily batch ingestion, keeping CPU utilization in check and reducing feature retrieval latency, without manual intervention. Autoscaling also helps optimize the cost and resource utilization of your featurestore, since nodes scale back down when the ingestion job finishes.

Reference:

Online serving | Vertex AI | Google Cloud

New Vertex AI Feature Store: BigQuery-Powered, GenAI-Ready | Google Cloud Blog
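As a concrete illustration of option B, here is a minimal sketch of the request body you would send when updating a featurestore through the Vertex AI REST API (v1) to switch from a fixed node count to autoscaling. The field names (`onlineServingConfig.scaling` with `minNodeCount`, `maxNodeCount`, and `cpuUtilizationTarget`) come from the v1 `Featurestore` resource; the project, region, featurestore name, and node counts below are placeholders you would replace with your own values.

```python
import json

# Hypothetical resource path -- substitute your own project, region,
# and featurestore ID.
FEATURESTORE = "projects/my-project/locations/us-central1/featurestores/my_fs"

# PATCH body enabling autoscaling of online serving nodes. When `scaling`
# is set (instead of `fixedNodeCount`), Vertex AI adds serving nodes as
# CPU utilization approaches the target -- e.g. during the daily batch
# ingestion -- and removes them again afterward.
body = {
    "onlineServingConfig": {
        "scaling": {
            "minNodeCount": 2,           # baseline capacity for normal traffic
            "maxNodeCount": 10,          # headroom for ingestion-time spikes
            "cpuUtilizationTarget": 60,  # scale up before nodes saturate
        }
    }
}

# The update mask tells the API which fields of the resource to change.
update_mask = "onlineServingConfig.scaling"

print(json.dumps(body, indent=2))
```

The same change can be made from the console or the `gcloud` CLI; the key point for this question is that serving capacity scales with load automatically, rather than requiring someone to resize the node pool around each ingestion window.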


Contribute your Thoughts:

Jina
8 days ago
But what about option D? Increasing the worker counts in the batch ingestion job could also help distribute the load and reduce the impact on online serving, no?
upvoted 0 times
Maia
9 days ago
I'm leaning towards option B, enabling autoscaling of the online serving nodes. That should help the featurestore handle the increased load without us having to manually adjust the node count.
upvoted 0 times
Sherly
10 days ago
Ha, imagine if we had to manually adjust the worker counts every time. 'Okay, everyone, stop what you're doing, it's time for the daily batch ingestion! Quick, someone count the nodes and tell me how many workers we need!'
upvoted 0 times
Corrinne
11 days ago
Hmm, I'm not sure. Autoscaling seems like the more elegant solution to me. I don't want to have to manually adjust the worker counts every time we have a batch ingestion job.
upvoted 0 times
