
Google Professional Cloud Developer Exam - Topic 13 Question 70 Discussion

Actual exam question for Google's Professional Cloud Developer exam
Question #: 70
Topic #: 13
[All Professional Cloud Developer Questions]

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do?

Suggested Answer: B
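The suggested answer points at Cloud Profiler. In a real GKE deployment you would import `cloud.google.com/go/profiler` and call `profiler.Start` early in `main` so the agent continuously uploads profiles to Google Cloud. Cloud Profiler for Go builds on the standard `runtime/pprof` machinery, so as a self-contained sketch (the workload function and sizes below are illustrative assumptions, not from the exam question), the same idea can be demonstrated locally with only the standard library:

```go
package main

import (
	"bytes"
	"fmt"
	"runtime/pprof"
)

// busyWork simulates a CPU-heavy code path of the kind a profiler
// would surface as a hot function in its flame graph.
func busyWork(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i
	}
	return sum
}

// captureProfile records a CPU profile around busyWork and returns
// the profile size in bytes along with the computed result.
func captureProfile() (int, int) {
	var buf bytes.Buffer
	if err := pprof.StartCPUProfile(&buf); err != nil {
		panic(err)
	}
	total := 0
	for i := 0; i < 200; i++ {
		total += busyWork(100_000)
	}
	pprof.StopCPUProfile()
	return buf.Len(), total
}

func main() {
	size, total := captureProfile()
	fmt.Println("profile bytes:", size, "result nonzero:", total != 0)
}
```

With Cloud Profiler itself, the only change to application code is roughly `profiler.Start(profiler.Config{Service: "my-service"})` at startup (service name is a placeholder); the captured profiles are then browsed in the Cloud Console rather than written to a local buffer.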

Contribute your Thoughts:

Deandrea
3 months ago
Wait, you can use the Snapshot Debugger for this? I had no idea!
upvoted 0 times
...
Caprice
3 months ago
D seems like a lot of manual work for something that could be automated.
upvoted 0 times
...
Yvette
4 months ago
C sounds interesting, but does it really help with CPU usage?
upvoted 0 times
...
Malcolm
4 months ago
I think A could work too, but it seems more complicated.
upvoted 0 times
...
Tyra
4 months ago
B is definitely the way to go for profiling in GCP.
upvoted 0 times
...
Reuben
4 months ago
The Cloud Logging query option seems like a workaround. I feel like it might not be as effective as directly profiling the application.
upvoted 0 times
...
Carey
4 months ago
I recall that OpenTelemetry can provide insights into tracing, but I wonder if it gives enough detail on CPU and memory specifically.
upvoted 0 times
...
Kaycee
4 months ago
I practiced a similar question where we had to identify performance bottlenecks. I feel like the Snapshot Debugger might be useful, but it seems more complex than necessary.
upvoted 0 times
...
Edna
5 months ago
I think using the Cloud Profiler package sounds familiar. I remember it helps visualize performance issues, but I'm not sure if it's the best choice here.
upvoted 0 times
...
Lou
5 months ago
I'm feeling pretty confident about this one. I think option A is the way to go. Downloading the Snapshot Debugger and analyzing the call stack and local variables seems like the most thorough approach to identify the source code consuming the most resources.
upvoted 0 times
...
Caitlin
5 months ago
Option C looks promising to me. Using OpenTelemetry and Trace to analyze the latency data for the application could help pinpoint where the bottlenecks are occurring. That seems like a good way to get to the root of the performance issues.
upvoted 0 times
...
Roselle
5 months ago
Hmm, I'm a bit confused by all the different tools and packages mentioned. I'll need to review the details of each option more carefully to decide which one is the most appropriate for this scenario.
upvoted 0 times
...
Delmy
5 months ago
This seems like a tricky one. I'm not sure if I fully understand the different options, but I think B might be the best approach since it mentions using the Cloud Profiler to identify time-intensive functions.
upvoted 0 times
...
Mari
5 months ago
Okay, let me see... I know correlation rules are used to identify patterns across multiple events, so that seems like the most likely answer here.
upvoted 0 times
...
Patti
5 months ago
Hmm, this one's tricky. I'll need to think it through carefully. Maybe start by considering how each option could impact customer satisfaction.
upvoted 0 times
...
Bette
5 months ago
I'm a bit confused by the wording of these options. I'll need to think through each one carefully to determine the correct purpose of provisioning.
upvoted 0 times
...
Carman
5 months ago
I remember practicing similar questions, and I think the one about the signal strength at location C being too weak for web surfing could definitely be true based on typical thresholds.
upvoted 0 times
...
Jutta
5 months ago
Okay, I've got a strategy. I'll eliminate the options that don't seem relevant for remote file access, like HTTPS, and focus on the more common file transfer protocols.
upvoted 0 times
...
