Google Exam Professional Machine Learning Engineer Topic 6 Question 66 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 66
Topic #: 6

You work for a social media company. You want to create a no-code image classification model for an iOS mobile application to identify fashion accessories. You have a labeled dataset in Cloud Storage. You need to configure a training workflow that minimizes cost and serves predictions with the lowest possible latency. What should you do?

Suggested Answer: D

Applying quantization to the SavedModel, i.e., reducing its floating-point precision, can lower serving latency by decreasing the memory and computation required per prediction. TensorFlow provides tooling for this, such as post-training quantization in the TensorFlow Lite converter, which can significantly reduce latency with little loss in model accuracy.
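
For readers who want to see what this looks like in practice, here is a minimal sketch of post-training quantization with the TensorFlow Lite converter; the SavedModel directory and output filename are hypothetical placeholders:

```python
import tensorflow as tf

# Load the trained SavedModel (hypothetical path) into the TFLite converter.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")

# Enable post-training dynamic-range quantization: weights are stored as
# 8-bit integers, shrinking the model and typically lowering inference latency.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

# Write the quantized model to disk for bundling or deployment.
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```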


Contribute your Thoughts:

Gwen
18 days ago
Yo, I heard AutoML is like the easy mode of machine learning. Might as well just go with that and let the experts handle the hard stuff, am I right?
upvoted 0 times
Coral
20 days ago
I'm gonna go with D. Vertex AI endpoint seems like the easiest option, even if it might cost a bit more.
upvoted 0 times
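For context on what the endpoint route involves, deploying and querying a model with the Vertex AI Python SDK looks roughly like the sketch below; the project, region, model resource name, machine type, and request payload are all hypothetical:

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-project", location="us-central1")

# Look up the trained model by its resource name (hypothetical ID).
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# Deploying keeps a machine running behind the endpoint, which adds ongoing
# cost and a network round trip per request compared to on-device inference.
endpoint = model.deploy(machine_type="n1-standard-4")

# Online prediction request; AutoML image models expect base64-encoded bytes.
prediction = endpoint.predict(instances=[{"content": "<base64-encoded image>"}])
print(prediction.predictions)
```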
Dusti
1 month ago
Hmm, I'm not sure. Option A with the batch requests might be a bit slower, but at least I don't have to worry about the model deployment.
upvoted 0 times
Jani
22 days ago
Option A sounds good, batch requests can help with prediction speed.
upvoted 0 times
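For anyone weighing the batch route, a batch prediction job in the Vertex AI Python SDK looks roughly like this sketch; the project, model ID, and Cloud Storage paths are hypothetical:

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# Batch prediction reads inputs from Cloud Storage and writes results back,
# so nothing stays deployed between jobs: cheap, but slow per request.
job = model.batch_predict(
    job_display_name="accessory-classification-batch",
    gcs_source="gs://my-bucket/batch_inputs.jsonl",        # hypothetical manifest
    gcs_destination_prefix="gs://my-bucket/batch_outputs/",
)
job.wait()
```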
Gregoria
1 month ago
Option C looks good to me. Exporting the TFLite model and using it directly in the mobile app should give us the lowest possible latency.
upvoted 0 times
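To sanity-check an exported TFLite model before bundling it into the app, a quick smoke test with the TFLite interpreter in Python might look like this; the model filename is hypothetical:

```python
import numpy as np
import tensorflow as tf

# Load the exported classifier (hypothetical filename).
interpreter = tf.lite.Interpreter(model_path="fashion_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image matching the model's expected input shape and dtype.
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```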
Cherilyn
2 months ago
I think option B is the way to go. Training with AutoML Edge and using the Core ML model directly on the mobile app sounds like the best approach to minimize cost and latency.
upvoted 0 times
Levi
23 days ago
Yeah, using AutoML Edge and exporting as a Core ML model for direct use on the mobile app makes sense.
upvoted 0 times
Layla
1 month ago
I agree, option B seems like the most efficient choice for this scenario.
upvoted 0 times
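For reference, exporting an AutoML Edge image model in Core ML format via the Vertex AI Python SDK looks roughly like the sketch below; the project, model resource name, and destination bucket are hypothetical:

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-project", location="us-central1")

# The trained AutoML Edge image model (hypothetical resource name).
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# Export in Core ML format to Cloud Storage for bundling into the iOS app;
# on-device inference avoids both serving cost and network latency.
model.export_model(
    export_format_id="core-ml",
    artifact_destination="gs://my-bucket/exports/",
)
```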
Kimberely
2 months ago
That's a valid point, but I still think option A provides better scalability and flexibility for future model updates.
upvoted 0 times
Bulah
2 months ago
I disagree; I believe option B is more suitable because it uses AutoML Edge and a Core ML model for direct integration with the mobile application.
upvoted 0 times
Kimberely
2 months ago
I think option A is the best choice because it involves using AutoML and Vertex AI Model Registry for efficient model training and prediction.
upvoted 0 times
