Google Professional Cloud Database Engineer Exam - Topic 11 Question 33 Discussion

Actual exam question for Google's Professional Cloud Database Engineer exam
Question #: 33
Topic #: 11

You want to migrate an on-premises mission-critical PostgreSQL database to Cloud SQL. The database must be able to withstand a zonal failure with less than five minutes of downtime and still not lose any transactions. You want to follow Google-recommended practices for the migration. What should you do?

Suggested Answer: D
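For context on what option D amounts to in practice: enabling high availability makes the Cloud SQL instance regional, with a standby in a second zone that receives writes synchronously, so a zonal failover loses no committed transactions. A minimal gcloud sketch is below; the instance name, region, PostgreSQL version, and machine tier are placeholder assumptions, not values from the question.

```shell
# Hypothetical names/values for illustration only.
# Create the Cloud SQL for PostgreSQL target as a regional (HA) instance:
gcloud sql instances create target-pg \
  --database-version=POSTGRES_14 \
  --region=us-central1 \
  --availability-type=REGIONAL \
  --tier=db-custom-4-16384

# Or, enable HA on an existing instance after the migration completes:
gcloud sql instances patch target-pg --availability-type=REGIONAL
```

With `--availability-type=REGIONAL`, failover to the standby is automatic and typically completes within the question's five-minute downtime budget, which is why nightly snapshots (A) and asynchronous read replicas (C) fall short on the zero-transaction-loss requirement.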

Contribute your Thoughts:

Dorthy
3 months ago
Totally agree with D, HA is essential for this scenario!
upvoted 0 times
...
Nana
3 months ago
Nightly snapshots won't cut it for mission-critical stuff.
upvoted 0 times
...
Carolann
3 months ago
Wait, can you really promote a read replica that fast?
upvoted 0 times
...
Walton
4 months ago
I think B is better for real-time replication.
upvoted 0 times
...
Kaitlyn
4 months ago
Option D is the way to go for HA!
upvoted 0 times
...
Lilli
4 months ago
I’m a bit confused about the read replica option. I thought promoting a read replica could still lead to some data loss if it’s not up-to-date.
upvoted 0 times
...
Luisa
4 months ago
I practiced a similar question where enabling HA was the right choice. It seems like D could be the best option here for regional availability.
upvoted 0 times
...
Aracelis
4 months ago
I think option B with the CDC pipeline sounds familiar, but I'm not entirely sure if it guarantees no transaction loss during a failure.
upvoted 0 times
...
Rikki
5 months ago
I remember we discussed that snapshots might not be the best for minimizing downtime since they can take longer to restore.
upvoted 0 times
...
Kami
5 months ago
I'm not too sure about this question. All the options seem plausible, and I'm not familiar enough with Google Cloud's recommended practices for database migrations. I think I'll need to review the course materials again and try to understand the tradeoffs between the different approaches. Hopefully, I can narrow it down from there.
upvoted 0 times
...
Oliva
5 months ago
Okay, I've got this! The answer is clearly option D. Enabling high availability on the Cloud SQL instance will make it regional, which should meet the requirements for zero data loss and minimal downtime. Google probably recommends this as the best practice for mission-critical databases. I'm feeling confident about this one.
upvoted 0 times
...
Barbra
5 months ago
Hmm, I'm a bit confused on this one. The question is asking about Google-recommended practices, but none of these options seem to mention anything specific about Google Cloud. I'm not sure if I'm missing something there. I might need to do some more research on the best practices for migrating to Cloud SQL.
upvoted 0 times
...
Son
5 months ago
This seems like a tricky question, but I think I have a good strategy. I'll focus on the key requirements - zero data loss and less than 5 minutes of downtime. That rules out option A, since restoring from backups could take longer. I'm leaning towards option B, since a CDC pipeline should allow for real-time replication.
upvoted 0 times
...
Jina
2 years ago
Ah, the age-old question of how to migrate a mission-critical database to the cloud. I say go with B - it's the only option that guarantees zero data loss!
upvoted 0 times
...
Fatima
2 years ago
I'm going with B. A CDC pipeline is the way to go for mission-critical databases. Plus, it's a Google-recommended practice, so you can't go wrong.
upvoted 0 times
...
Kristeen
2 years ago
Hmm, I'm not sure. C seems like it might work, but I'm worried about the potential for data loss during the failover process.
upvoted 0 times
Sabine
2 years ago
C) Create a read replica in another region, and promote the read replica if a failure occurs.
upvoted 0 times
...
Percy
2 years ago
B) Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.
upvoted 0 times
...
Delisa
2 years ago
A) Take nightly snapshots of the primary database instance, and restore them in a secondary zone.
upvoted 0 times
...
...
Jillian
2 years ago
D is the way to go! Enabling HA for the database will make it regional and provide the required fault tolerance.
upvoted 0 times
Estrella
2 years ago
Agreed, enabling high availability for the database is the way to go.
upvoted 0 times
...
Kelvin
2 years ago
D is definitely the best option for ensuring fault tolerance.
upvoted 0 times
...
...
Gail
2 years ago
I think B is the correct answer. A CDC pipeline will ensure no transactions are lost, and it follows Google's recommended practices for high availability.
upvoted 0 times
Bong
2 years ago
Yes, a CDC pipeline is the way to go for maintaining high availability and following Google's recommendations.
upvoted 0 times
...
Pedro
2 years ago
I agree, B is the best option for ensuring no transaction loss during a zonal failure.
upvoted 0 times
...
...
