
Google Professional Machine Learning Engineer Exam - Topic 8 Question 56 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 56
Topic #: 8
[All Professional Machine Learning Engineer Questions]

You work for a company that manages a ticketing platform for a large chain of cinemas. Customers use a mobile app to search for movies they're interested in and purchase tickets in the app. Ticket purchase requests are sent to Pub/Sub and are processed with a Dataflow streaming pipeline configured to conduct the following steps:

1. Check for availability of the movie tickets at the selected cinema.

2. Assign the ticket price and accept payment.

3. Reserve the tickets at the selected cinema.

4. Send successful purchases to your database.

Each step in this process has low latency requirements (less than 50 milliseconds). You have developed a logistic regression model with BigQuery ML that predicts whether offering a promo code for free popcorn increases the chance of a ticket purchase, and this prediction should be added to the ticket purchase process. You want to identify the simplest way to deploy this model to production while adding minimal latency. What should you do?

Suggested Answer: A
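The core tension in this question is that a network round-trip to a remote prediction endpoint can consume much of a 50 ms per-step budget, whereas a logistic regression is cheap enough to score in-process inside the Dataflow worker. The sketch below illustrates that point only; the feature names, coefficient values, and model export are all hypothetical and not part of the original question:

```python
import math

# Hypothetical coefficients, as if exported from the BigQuery ML
# logistic regression model (illustrative values, not from the question).
WEIGHTS = {"ticket_price": -0.8, "is_weekend": 1.2}
BIAS = 0.3

def promo_score(features):
    """Score the logistic regression in-process (no network hop):
    sigmoid(bias + sum(weight_i * feature_i))."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Example: a weekend purchase at the base price unit.
print(round(promo_score({"ticket_price": 1.0, "is_weekend": 1.0}), 3))  # → 0.668
```

Scoring like this takes microseconds per record, so it adds effectively no latency to the pipeline; the trade-off is that the worker must be kept in sync with the model whenever it is retrained in BigQuery ML.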

Contribute your Thoughts:

Malcom
4 months ago
B is solid too, but C just feels right for real-time needs.
upvoted 0 times
...
Janella
4 months ago
Wait, can TensorFlow Lite really handle this? Sounds risky!
upvoted 0 times
...
Lizette
4 months ago
A batch process every five minutes? That's too slow!
upvoted 0 times
...
Leonida
4 months ago
Not sure about that, D seems more efficient to me.
upvoted 0 times
...
Kimberlie
4 months ago
I think option C is the best for low latency!
upvoted 0 times
...
Merlyn
5 months ago
I feel like TFLite could be a good choice for mobile, but I’m not sure how it integrates with Pub/Sub for real-time requests.
upvoted 0 times
...
Nettie
5 months ago
I think exporting the model to TensorFlow and using it in the Dataflow pipeline could work, but I’m not clear on how that would affect performance.
upvoted 0 times
...
Miles
5 months ago
Option C sounds familiar, but I’m not entirely sure if querying the prediction endpoint from the streaming pipeline would really keep the latency under 50ms.
upvoted 0 times
...
Lynna
5 months ago
I remember we discussed how batch inference might not meet the low latency requirement, so I’m leaning away from option A.
upvoted 0 times
...
