Welcome to Pass4Success

Google Professional Cloud DevOps Engineer Exam - Topic 1 Question 66 Discussion

Actual exam question for Google's Professional Cloud DevOps Engineer exam
Question #: 66
Topic #: 1

You need to create a Cloud Monitoring SLO for a service that will be published soon. You want to verify that requests to the service will be addressed in fewer than 300 ms at least 90% of the time per calendar month. You need to identify the metric and evaluation method to use. What should you do?

Suggested Answer: B
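The requirement ("90% of requests under 300 ms, per calendar month") maps to a latency SLI with request-based evaluation in Cloud Monitoring. As a sketch only, here is what such an SLO payload could look like for the Monitoring v3 REST API (`services.serviceLevelObjectives.create`), built as a plain dict; the metric filter is an illustrative assumption, not part of the question:

```python
# Sketch of a request-based latency SLO payload for the Cloud Monitoring
# v3 REST API. The distributionFilter below is an assumed example; the
# real filter depends on which latency distribution metric your service
# emits.
slo = {
    "displayName": "90% of requests under 300 ms per calendar month",
    "serviceLevelIndicator": {
        "requestBased": {
            # A DistributionCut SLI: count requests whose latency falls
            # inside the given range as "good".
            "distributionCut": {
                "distributionFilter": (
                    'metric.type="loadbalancing.googleapis.com/'
                    'https/backend_latencies" '
                    'resource.type="https_lb_rule"'
                ),
                # Only requests completing in under 300 ms count as good.
                "range": {"min": 0, "max": 300},
            }
        }
    },
    "goal": 0.90,               # at least 90% of requests in range
    "calendarPeriod": "MONTH",  # evaluated per calendar month
}
```

Because the target is expressed as a fraction of individual requests, a request-based SLI fits more naturally than a window-based one, which would instead measure the fraction of good time windows.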

Contribute your Thoughts:

Jenelle (3 months ago): C is a no-go; we need to focus on latency metrics for this SLO.

Dorothea (3 months ago): 90% uptime is pretty standard, but 300 ms seems tight!

Abraham (3 months ago): Wait, are we really measuring latency or availability here?

Anjelica (4 months ago): I think B could work too, but not sure it's the best choice.

Jacqueline (4 months ago): A is definitely the way to go for latency!

Elli (4 months ago): I lean towards option A, but I wonder if there's a scenario where a window-based method might be better for this kind of SLO.

Ena (4 months ago): I practiced a similar question, and I feel like selecting a latency metric is key here, but I can't recall if it should be request-based or window-based.

Rodolfo (4 months ago): I'm not entirely sure, but I remember something about window-based evaluations being useful for tracking performance over time.

Glenna (5 months ago): I think we should focus on latency since the question specifies response times. A request-based method seems more appropriate for that.

Oren (5 months ago): This seems pretty straightforward. The question is asking for a latency SLO, so I'll go with option A to select a latency metric and a request-based evaluation method.

Cecil (5 months ago): I'm not totally sure about this one. I think I need to understand the difference between the evaluation methods before I can decide. Maybe I'll come back to this one after reviewing that part.

Louvenia (5 months ago): Okay, I've got this. I need to select a latency metric and a request-based evaluation method to meet the 90% of requests under 300 ms per month requirement. Option A seems like the right choice here.

Shantay (5 months ago): Hmm, I'm a bit confused. Do we need to use a latency metric or an availability metric? And what's the difference between a request-based and window-based evaluation method?

Delsie (5 months ago): This looks like a straightforward SLO question. I think the key is to focus on the requirement of 90% of requests being addressed in under 300 ms per calendar month.

Paola (5 months ago): Hmm, this looks like a tricky one. I'll need to carefully read through the options and think about how to create the QlikView documents with the Year field.

Tayna (5 months ago): I'm leaning towards Average Value, but I'm not 100% confident. I'll have to review my notes on how Analytics policies work in NetProfiler.

Tommy (9 months ago): A) is the way to go. Gotta love these exam questions that are pretty much fill-in-the-blank. Though I'm still trying to figure out why the service will be 'published soon' - is it going to be a bestselling novel or something?

Cruz (9 months ago): Hmm, this one's a no-brainer. A) is the right choice - latency metric and request-based evaluation. Though I'm a bit curious why they didn't just say 'select option A' instead of all this mumbo-jumbo.

Domingo (9 months ago): A) is the clear winner here. Latency is the metric you need, and a request-based method is the way to go to meet the 90% threshold per calendar month.

Loren (9 months ago): Definitely go with A) Select a latency metric for a request-based method of evaluation. That's the only option that matches the criteria of verifying requests are addressed in fewer than 300 ms at least 90% of the time.
    Terina (8 months ago): D) Go with a latency metric for a time-based method of evaluation.
    Tien (8 months ago): C) Opt for an error rate metric for a request-based method of evaluation.
    Jonell (9 months ago): B) Choose a throughput metric for a time-based method of evaluation.
    Magnolia (9 months ago): A) Select a latency metric for a request-based method of evaluation.

Kate (10 months ago): I disagree, I believe we should select an availability metric for a window-based method of evaluation.

Ashton (10 months ago): I agree with Lashanda, it makes sense to track latency for this type of service.

Lashanda (10 months ago): I think we should select a latency metric for a request-based method of evaluation.

Elena (11 months ago): I disagree, I believe we should select an availability metric for a window-based method of evaluation.

Isaac (11 months ago): I agree with Dottie, it makes sense to track latency for this type of service.

Dottie (11 months ago): I think we should select a latency metric for a request-based method of evaluation.
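Several commenters ask how a window-based evaluation differs from a request-based one. A request-based SLI counts individual good requests, which matches the "90% of requests" wording of the question; a window-based SLI instead marks each fixed time window as good or bad and measures the fraction of good windows. As a hedged sketch only, a window-based variant of this SLO could look like the following (field names follow the Cloud Monitoring v3 REST API; the metric filter and 5-minute window length are illustrative assumptions):

```python
# Sketch of a window-based latency SLO payload for the Cloud Monitoring
# v3 REST API. A window is "good" only if the mean latency of the
# referenced time series stays inside the range for that window.
windowed_slo = {
    "displayName": "Latency SLO, window-based variant (sketch)",
    "serviceLevelIndicator": {
        "windowsBased": {
            "windowPeriod": "300s",  # judge each 5-minute window good/bad
            "metricMeanInRange": {
                # Illustrative filter; use your service's latency metric.
                "timeSeries": (
                    'metric.type="loadbalancing.googleapis.com/'
                    'https/backend_latencies"'
                ),
                "range": {"min": 0, "max": 300},  # mean under 300 ms
            },
        }
    },
    "goal": 0.90,               # at least 90% of windows must be good
    "calendarPeriod": "MONTH",
}
```

Note the subtle difference: here the 90% goal applies to windows, not requests, so a single bad window counts the same whether it contained ten slow requests or ten thousand. That mismatch with the question's per-request wording is why the discussion leans toward the request-based method.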
