Welcome to Pass4Success


Google Professional Cloud Developer Exam - Topic 14 Question 91 Discussion

Actual exam question for Google's Professional Cloud Developer exam
Question #: 91
Topic #: 14
[All Professional Cloud Developer Questions]

You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the application and identify which HTTP requests have a significantly high latency span in user requests. What should you do?

Suggested Answer: D

Contribute your Thoughts:

Mollie
3 months ago
C is definitely the way to go for detailed insights!
upvoted 0 times
...
Kami
3 months ago
A sounds good for logging, but is it enough?
upvoted 0 times
...
Margart
4 months ago
Wait, does tcpdump really help with latency issues?
upvoted 0 times
...
Karrie
4 months ago
I disagree, D seems more straightforward for monitoring.
upvoted 0 times
...
Destiny
4 months ago
I heard option C is the best for tracing requests.
upvoted 0 times
...
Alayna
4 months ago
I recall that configuring GKE workload metrics in Cloud Monitoring can help visualize performance, but I wonder if it’s enough to find specific bottlenecks.
upvoted 0 times
...
Lorrie
5 months ago
I feel like capturing network traffic with tcpdump could give us some insights, but it seems a bit complicated for just checking latency.
upvoted 0 times
...
Tresa
5 months ago
I think I practiced a question similar to this where we had to use OpenTelemetry for tracing. It might help us identify where the delays are happening.
upvoted 0 times
...
Charlette
5 months ago
I remember something about using Cloud Logging to track HTTP requests, but I'm not sure if that's the best way to pinpoint latency issues.
upvoted 0 times
...
Fanny
5 months ago
I'm leaning towards option D on this one. Monitoring the GKE cluster metrics in Cloud Monitoring could give me a high-level view of where the performance issues are, and then I can dig deeper from there. The tracing approach in option C also sounds promising, but I'm not sure I have time to implement that in the exam setting.
upvoted 0 times
...
Beatriz
5 months ago
Option A looks like a good starting point to me. Logging the request details and using Cloud Logging to analyze the latency patterns seems like a simple but effective way to identify the problematic requests. I might go with that unless I see a really compelling reason to use a more complex solution.
upvoted 0 times
...
Blair
5 months ago
Hmm, I'm a bit unsure about this one. The options all seem reasonable, but I'm not super familiar with some of the tools like tcpdump and Cloud Logging. I'll have to read through the details carefully to decide which one I think is the best approach.
upvoted 0 times
...
Lemuel
5 months ago
This seems like a pretty straightforward performance troubleshooting question. I think option C is the way to go - using Open Telemetry to instrument the application and get detailed tracing data would be the most comprehensive approach.
upvoted 0 times
...
Tish
5 months ago
I think the first step is identifying the target audience. That seems like the logical starting point to plan an effective advertising campaign.
upvoted 0 times
...
Belen
5 months ago
BPaaS sounds like the most relevant option since the question is specifically about a hosted IP phone PBX, which is a business process. I'll go with that.
upvoted 0 times
...
Jeanice
1 year ago
Hold up, where's the 'Hire a psychic' option? I bet they could just sense the performance bottlenecks and solve the problem instantly.
upvoted 0 times
...
Filiberto
1 year ago
Option B with tcpdump? That's so old-school, I thought we were beyond that in the Kubernetes era. Might as well use a telegraph to debug your application.
upvoted 0 times
Selene
1 year ago
D) Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster.
upvoted 0 times
...
Marnie
1 year ago
C) Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.
upvoted 0 times
...
Katina
1 year ago
A) Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
upvoted 0 times
...
...
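The log-filtering approach in option A boils down to aggregating per-request latency by method and URL path. A minimal stdlib sketch of that analysis, assuming hypothetical structured log records with `method`, `path`, and `latency_ms` fields (the real workflow would run an equivalent filter inside Cloud Logging):

```python
# Hypothetical sketch of the analysis behind option A: once each
# microservice logs its HTTP method, URL path, and latency to STDOUT
# and the logs reach Cloud Logging, a filter/aggregation like this
# surfaces the slowest paths. The record fields are assumptions.
from collections import defaultdict
from statistics import median

log_entries = [  # stand-ins for structured container log lines
    {"method": "GET", "path": "/lobby", "latency_ms": 120},
    {"method": "GET", "path": "/match/start", "latency_ms": 2300},
    {"method": "POST", "path": "/match/start", "latency_ms": 1900},
    {"method": "GET", "path": "/lobby", "latency_ms": 95},
]

by_path = defaultdict(list)
for entry in log_entries:
    by_path[(entry["method"], entry["path"])].append(entry["latency_ms"])

# Flag any method/path pair whose median latency exceeds a 500 ms budget.
slow = {
    key: median(vals) for key, vals in by_path.items() if median(vals) > 500
}
print(slow)  # the high-latency endpoints worth investigating
```

This shows which endpoints are slow, but not *why*; that per-service breakdown is what the tracing approach in option C adds.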
Ngoc
1 year ago
Configuring GKE workload metrics in Cloud Monitoring could also help us identify performance bottlenecks.
upvoted 0 times
...
Valentin
1 year ago
I think we should also consider installing the Open Telemetry tracing package to analyze user requests.
upvoted 0 times
...
Rupert
1 year ago
Hmm, option D seems the most comprehensive. Monitoring the GKE cluster metrics in Cloud Monitoring could give you a wide range of insights into the performance issues.
upvoted 0 times
...
Azalee
1 year ago
I'd go with option A. Logging is a good start, and analyzing the logs in Cloud Logging should give you a good idea of which requests are taking too long.
upvoted 0 times
Stevie
1 year ago
C) Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.
upvoted 0 times
...
Malcolm
1 year ago
B) Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the data files using Wireshark to determine the cause of high latency.
upvoted 0 times
...
Loreen
1 year ago
I'd go with option A. Logging is a good start, and analyzing the logs in Cloud Logging should give you a good idea of which requests are taking too long.
upvoted 0 times
...
Giovanna
1 year ago
A) Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
upvoted 0 times
...
...
Hyman
1 year ago
Option C sounds like the way to go. Tracing is crucial for identifying performance bottlenecks in a microservices architecture. I'm glad they mentioned Open Telemetry, it's a great tool for this.
upvoted 0 times
Ezekiel
1 year ago
Absolutely, using Open Telemetry for tracing can help pinpoint where the bottlenecks are occurring and improve the overall user experience on the gaming platform.
upvoted 0 times
...
Elden
1 year ago
I agree, tracing with Open Telemetry can provide detailed insights into the latency of user requests. It's essential for optimizing performance in a microservices environment.
upvoted 0 times
...
Kyoko
1 year ago
Option C sounds like the way to go. Tracing is crucial for identifying performance bottlenecks in a microservices architecture. I'm glad they mentioned Open Telemetry, it's a great tool for this.
upvoted 0 times
...
...
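The tracing idea discussed above can be illustrated with a minimal stdlib sketch of what a span records: a named operation plus its duration, with nested spans showing where time goes inside a request. The real OpenTelemetry SDK provides this via `tracer.start_as_current_span()`; the service names and timings below are purely illustrative.

```python
# Conceptual sketch (stdlib only, not the OpenTelemetry SDK): each span
# captures an operation name and its elapsed time. Nested spans reveal
# which downstream call inside a slow request is the actual bottleneck.
import time
from contextlib import contextmanager

recorded_spans = []

@contextmanager
def span(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        recorded_spans.append((name, time.perf_counter() - start))

with span("GET /match/start"):         # the user-facing request
    with span("matchmaking-service"):  # downstream call inside it
        time.sleep(0.05)               # simulated slow dependency
    with span("profile-service"):
        time.sleep(0.01)

# Spans finish innermost-first; the slowest child is the bottleneck.
for name, seconds in recorded_spans:
    print(f"{name}: {seconds * 1000:.0f} ms")
```

Per-service timings like these are exactly what Cloud Trace visualizes in a waterfall view, which is why option C answers "which HTTP requests have a high latency span" more directly than cluster-level metrics.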
Golda
1 year ago
I agree with Trinidad. Using logs to evaluate latency across different methods and URL paths is a good idea.
upvoted 0 times
...
Trinidad
1 year ago
I think we should update our microservices to log HTTP requests and analyze the latency.
upvoted 0 times
...
