Welcome to Pass4Success


Google Professional Cloud Database Engineer Exam - Topic 10 Question 14 Discussion

Actual exam question for Google's Professional Cloud Database Engineer exam
Question #: 14
Topic #: 10

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?

Suggested Answer: C

Contribute your Thoughts:

Christiane
5 months ago
Wait, can Memorystore even manage that kind of load?
upvoted 0 times
...
Cassi
5 months ago
D sounds solid, but can it really handle 100 TB?
upvoted 0 times
...
Aliza
6 months ago
Cloud SQL? Really? That seems risky for millions of requests.
upvoted 0 times
...
Noble
6 months ago
I think Firestore might struggle with that scale.
upvoted 0 times
...
Catalina
6 months ago
Bigtable is designed for high throughput!
upvoted 0 times
...
Quentin
6 months ago
Bigtable sounds like a good fit since it’s designed for high throughput and can scale easily, so I’m leaning towards option D.
upvoted 0 times
...
Paulene
6 months ago
I feel like Memorystore is more for caching rather than long-term storage, so option C seems off to me.
upvoted 0 times
...
Lorriane
6 months ago
I think Firestore could work for automatic scaling, but I’m not confident it can handle the latency requirements.
upvoted 0 times
...
Flo
6 months ago
I remember we discussed how Cloud SQL might struggle with such high throughput, so I’m not sure about option A.
upvoted 0 times
...
Beula
6 months ago
I'm a bit confused by the wording of these options. I'll need to re-read them a few times to make sure I understand the differences.
upvoted 0 times
...
Gracia
11 months ago
I'm just picturing the poor database admin trying to keep up with adding Bigtable nodes like a hamster on a wheel. 'Wheee, another node! Wheee, another node!'
upvoted 0 times
Evelynn
10 months ago
D) Bigtable seems like the best option for handling the required throughput.
upvoted 0 times
...
Lavonne
10 months ago
C) Memorystore for Memcached sounds like a good choice for adding nodes as needed.
upvoted 0 times
...
Tijuana
10 months ago
B) I think Firestore would be a better option for automatic scaling.
upvoted 0 times
...
Kenneth
10 months ago
A) Use Cloud SQL with read replicas for throughput.
upvoted 0 times
...
...
Melodie
11 months ago
Cloud SQL with read replicas? Really? That's like trying to use a bicycle to haul a freight train. This application needs serious big-data firepower, not your grandpa's SQL database.
upvoted 0 times
Katina
9 months ago
Cloud SQL with read replicas won't cut it for this kind of workload.
upvoted 0 times
...
Hoa
10 months ago
D) Use Bigtable, and add nodes as necessary to achieve the required throughput.
upvoted 0 times
...
Sherron
10 months ago
A) Use Bigtable, and add nodes as necessary to achieve the required throughput.
upvoted 0 times
...
...
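Melodie's objection can be made roughly quantitative. Cloud SQL storage currently tops out at about 64 TB per instance, short of the 100 TB requirement, while Bigtable scales by adding nodes, each with published throughput and storage figures. A back-of-envelope sizing sketch in Python; the per-node numbers are assumptions drawn from Google's published guidance (roughly 10,000 point operations per second and 5 TB of SSD storage per node), so check current documentation before relying on them:

```python
# Back-of-envelope Bigtable cluster sizing for the workload in the question.
# Per-node figures are assumptions based on published guidance, not
# guarantees: ~10,000 simple reads/writes per second and ~5 TB of SSD
# storage per node.

import math

QPS_PER_NODE = 10_000   # approx. point operations/sec per SSD node
TB_PER_NODE = 5         # approx. SSD storage capacity per node

def estimate_nodes(target_qps: int, storage_tb: float) -> int:
    """Return the node count needed to satisfy both throughput and storage."""
    for_throughput = math.ceil(target_qps / QPS_PER_NODE)
    for_storage = math.ceil(storage_tb / TB_PER_NODE)
    return max(for_throughput, for_storage)

# The question's workload: millions of requests/sec, 100 TB of history.
print(estimate_nodes(2_000_000, 100))  # -> 200 (throughput dominates)
```

At this scale the throughput requirement, not storage, drives the cluster size, which is exactly the "add nodes as necessary" knob the Bigtable option describes.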
Cassie
11 months ago
Ooh, Memorystore for Memcached? That's an interesting idea! But I'm not sure if it can keep up with the insane throughput and data volume this application needs. Definitely not a good fit in my opinion.
upvoted 0 times
...
Aileen
11 months ago
I'm not so sure about Bigtable. What about Firestore? It's serverless, so you don't have to worry about scaling it up yourself. And it can probably handle the data volume and throughput requirements.
upvoted 0 times
Lezlie
10 months ago
D) Use Bigtable, and add nodes as necessary to achieve the required throughput.
upvoted 0 times
...
Martina
10 months ago
B) Use Firestore, and rely on automatic serverless scaling.
upvoted 0 times
...
Taryn
10 months ago
A) Use Cloud SQL with read replicas for throughput.
upvoted 0 times
...
...
Kristofer
12 months ago
Hmm, I think option D is the way to go. Bigtable can handle massive amounts of data and scale up to millions of requests per second. Plus, it's designed for time-series data like IoT sensor data.
upvoted 0 times
Ashlyn
11 months ago
Yeah, it's definitely built for handling large amounts of data and high throughput.
upvoted 0 times
...
Leota
11 months ago
I agree, Bigtable seems like the best choice for this scenario.
upvoted 0 times
...
...
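Kristofer's point about time-series IoT data connects to a schema-design caveat: sustaining write throughput in Bigtable also depends on row keys that spread load across nodes, because rows are stored in sorted key order and a timestamp-prefixed key funnels all current writes to one node. A minimal sketch of the commonly recommended sensor-id-first key layout (field names and padding width are illustrative, not from the question):

```python
# Row-key construction for IoT time-series in a wide-column store like
# Bigtable, where rows are sorted lexicographically by key. Leading with
# the sensor ID spreads concurrent writes across the key space; leading
# with the timestamp would concentrate them on a single "hot" node.

def row_key(sensor_id: str, timestamp_ms: int) -> bytes:
    """Build a key like b'sensor-000#00001718000000000'."""
    # Zero-pad the timestamp so lexicographic order matches numeric order,
    # keeping each sensor's readings contiguous and scannable by time range.
    return f"{sensor_id}#{timestamp_ms:017d}".encode()

keys = sorted(row_key(f"sensor-{i:03d}", 1_718_000_000_000 + i)
              for i in range(3))
print(keys[0])  # -> b'sensor-000#00001718000000000'
```

With this layout, a range scan over one sensor's key prefix returns its history in time order, while simultaneous writes from many sensors land on different parts of the sorted key space.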
Helene
1 year ago
I'm leaning towards option A, Cloud SQL with read replicas, as it provides good throughput and reliability for our needs.
upvoted 0 times
...
Juliana
1 year ago
I disagree, I believe option B, Firestore, would be better as it offers automatic scaling and is serverless.
upvoted 0 times
...
Rolande
1 year ago
I think we should go with option D, Bigtable, because it can handle massive amounts of data and scale easily.
upvoted 0 times
...
