Welcome to Pass4Success


Google Exam Professional Cloud Database Engineer Topic 2 Question 46 Discussion

Actual exam question for Google's Professional Cloud Database Engineer exam
Question #: 46
Topic #: 2
[All Professional Cloud Database Engineer Questions]

Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?

Suggested Answer: A

Option A is correct: a zonal persistent disk can be resized from the Google Cloud Console (or with gcloud compute disks resize) while it is still attached to the running VM, and resize2fs can then grow the ext4 file system while it remains mounted, so the database stays online throughout. The alternatives that restore a snapshot to a larger disk or copy files to a new disk require unmounting the data disk and restarting the database service, which means downtime. See https://cloud.google.com/compute/docs/disks/resize-persistent-disk for the documented procedure.
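Option A boils down to two commands. The sketch below illustrates the flow; the disk name, zone, and device path (my-data-disk, us-central1-a, /dev/sdb) are placeholders, and the gcloud guard just keeps the script harmless outside a configured GCP environment:

```shell
# Placeholders -- substitute your actual disk name, zone, and device path.
DISK=my-data-disk
ZONE=us-central1-a
DEV=/dev/sdb

# Only attempt the resize when the gcloud CLI is available and configured.
if command -v gcloud >/dev/null; then
  # 1. Grow the persistent disk while it stays attached to the running VM;
  #    Compute Engine supports resizing zonal persistent disks online.
  gcloud compute disks resize "$DISK" --size=500GB --zone="$ZONE"

  # 2. On the VM, grow the ext4 file system while it remains mounted;
  #    resize2fs performs online growth for mounted ext4 volumes.
  sudo resize2fs "$DEV"

  # 3. Confirm the file system now reports the new capacity.
  df -h "$DEV"
fi
```

No step unmounts the disk or stops the database, which is why this path avoids downtime entirely.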


Contribute your Thoughts:

Sheron
27 days ago
Maybe the database is running out of space because it's storing all the memes the IT team has been sharing. Gotta love those cloud-based comedy clubs!
upvoted 0 times
Renato
1 day ago
C) In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
upvoted 0 times
Annabelle
3 days ago
A) In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.
upvoted 0 times
Ruthann
29 days ago
Option B? Really? Verifying the new space with fdisk? That's so 90s. This is the cloud, folks, let's live in the present!
upvoted 0 times
Malinda
1 month ago
Option A is the way to go! Resizing the disk and extending the file system is a simple one-step solution. No need to complicate things.
upvoted 0 times
Kaitlyn
1 day ago
Thanks for the advice, I'll go with option A then.
upvoted 0 times
Tawanna
19 days ago
I agree, it's the simplest solution.
upvoted 0 times
Edison
22 days ago
Option A is definitely the best choice.
upvoted 0 times
Gracie
1 month ago
I'd go with option D. Creating a new disk and moving the files over seems like the safest and most straightforward approach.
upvoted 0 times
Luke
2 days ago
I agree, creating a new disk and transferring the files seems like the best way to avoid downtime.
upvoted 0 times
Kiley
6 days ago
Option D sounds like a good plan. Moving the files to a new disk is a safe bet.
upvoted 0 times
Fernanda
2 months ago
Option C looks like the best choice to me. Avoiding downtime is the key requirement, and that's exactly what the snapshot and restore process does.
upvoted 0 times
Cary
11 days ago
I agree, using a snapshot to restore to a larger disk will minimize downtime.
upvoted 0 times
Brande
22 days ago
Option C looks like the best choice to me.
upvoted 0 times
Mi
2 months ago
I'm not sure about option A. I think option C might be a safer choice as it involves creating a snapshot and restoring it to a new larger disk to avoid any potential issues.
upvoted 0 times
Johnna
2 months ago
I agree with Jacki. Option A seems like the most efficient solution to avoid any downtime for our critical production database.
upvoted 0 times
Jacki
2 months ago
I think option A is the best choice because it allows us to increase the size of the persistent disk and extend the disk without any downtime.
upvoted 0 times
