
Google Professional Cloud Architect Exam - Topic 7 Question 66 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 66
Topic #: 7
[All Professional Cloud Architect Questions]

Your company has an application running on Google Cloud that is collecting data from thousands of physical devices that are globally distributed. Data is published to Pub/Sub and streamed in real time into an SSD Cloud Bigtable cluster via a Dataflow pipeline. The operations team informs you that your Cloud Bigtable cluster has a hot-spot, and queries are taking longer than expected. You need to resolve the problem and prevent it from happening in the future. What should you do?

Suggested Answer: B
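The hot-spot described above usually comes from row keys that lead with a monotonically increasing value (such as a timestamp), so all new writes land on the same tablet. A minimal sketch of the row-key fix, assuming hypothetical fields like `device_id` and `timestamp` (these names are not in the original question):

```python
# Sketch of Bigtable row-key design to avoid hot-spotting.
# Field names (device_id, timestamp) are assumed for illustration.

def hotspot_prone_key(timestamp: int, device_id: str) -> str:
    # Anti-pattern: a monotonically increasing prefix means every new
    # write targets the same tablet, creating a hot-spot.
    return f"{timestamp}#{device_id}"

def distributed_key(device_id: str, timestamp: int) -> str:
    # Better: lead with a high-cardinality field such as the device ID,
    # so writes from thousands of devices spread across tablets while
    # each device's rows stay contiguous for time-range scans.
    return f"{device_id}#{timestamp}"

print(distributed_key("device-0042", 1700000000))
```

Note that simply adding nodes spreads read/write capacity but does not change how keys map to tablets, which is why reworking the key schema is the durable fix.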

Contribute your Thoughts:

Cheryll
4 months ago
Spreading keys across the alphabet is a must!
upvoted 0 times
Mireya
4 months ago
Surprised deleting old records is even an option!
upvoted 0 times
Filiberto
4 months ago
HBase APIs? Not sure that’s the solution here.
upvoted 0 times
Katheryn
4 months ago
I think doubling the nodes could help too.
upvoted 0 times
Argelia
4 months ago
Definitely need to review the RowKey strategy!
upvoted 0 times
Lili
5 months ago
I vaguely recall something about evenly spreading keys across the alphabet being a best practice. That might be the right approach here.
upvoted 0 times
Ettie
5 months ago
I practiced a similar question about optimizing Bigtable performance, and I think doubling the nodes could help, but it might not address the hot-spot problem directly.
upvoted 0 times
Annice
5 months ago
I'm not entirely sure, but I feel like just deleting old records might not solve the underlying issue. It seems more like a temporary fix.
upvoted 0 times
Laurel
5 months ago
I remember we discussed hot-spotting in Cloud Bigtable during our study sessions. I think reviewing the RowKey strategy could help distribute the load better.
upvoted 0 times
