
Google Professional Cloud Architect Exam - Topic 4 Question 76 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 76
Topic #: 4

Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?

Suggested Answer: B

The recommended practice for managing logs generated on Compute Engine instances is to install the Cloud Logging agent and send the logs to Cloud Logging.

From there, the logs are routed through a Cloud Logging sink and exported to Cloud Storage.
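As a rough illustration, such a sink can be created with the google-cloud-logging client library; the project, sink name, bucket name, and filter below are placeholders, and the same result can be achieved with gcloud or in the console:

```python
from google.cloud import logging

# Placeholder project ID; replace with your own.
client = logging.Client(project="my-project")

sink = client.sink(
    "compute-logs-to-gcs",  # placeholder sink name
    filter_='resource.type="gce_instance"',  # only Compute Engine logs
    destination="storage.googleapis.com/my-log-bucket",  # placeholder bucket
)

# Creates the sink; the sink's service account then needs
# write access (e.g. roles/storage.objectCreator) on the bucket.
sink.create()
```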

Cloud Storage is the right destination for the logs because the requirements call for lifecycle management based on how long the data has been stored.

In this case, the logs must be available for active queries for 30 days after they are written; after that, they only need to be retained for audit purposes.

During the active-query window, the data in Cloud Storage can still be queried through BigQuery's support for external (federated) tables over Cloud Storage, and moving objects older than 30 days to Coldline keeps the solution cost-optimal.
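As a sketch of that federated query, assuming the exported logs land as newline-delimited JSON files under a hypothetical gs://my-log-bucket path, a temporary external table can be queried with the google-cloud-bigquery library:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Define an external table over the exported log files (assumed JSON).
external_config = bigquery.ExternalConfig("NEWLINE_DELIMITED_JSON")
external_config.source_uris = ["gs://my-log-bucket/logs/*.json"]  # placeholder path
external_config.autodetect = True

# Query the files in place, without loading them into BigQuery storage.
job_config = bigquery.QueryJobConfig(
    table_definitions={"recent_logs": external_config}
)
query = 'SELECT COUNT(*) AS n FROM recent_logs WHERE severity = "ERROR"'
for row in client.query(query, job_config=job_config):
    print(row.n)
```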

Therefore, the correct answer (option B) is as follows; a minimal sketch of steps 3 and 4 appears after the list.

1. Install the Cloud Logging agent on all instances.

2. Create a sink that exports the logs to a regional Cloud Storage bucket.

3. Create an Object Lifecycle rule to move the files to Coldline Cloud Storage after one month.

4. Set up a bucket-level retention policy using bucket lock.
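For steps 3 and 4, here is a minimal sketch using the google-cloud-storage library, with a placeholder project and bucket name. Note that locking a retention policy is irreversible: once locked, the policy can no longer be shortened or removed.

```python
from google.cloud import storage

client = storage.Client(project="my-project")  # placeholder project
bucket = client.get_bucket("my-log-bucket")    # placeholder bucket

# Step 3: lifecycle rule that moves objects to Coldline after 30 days.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)

# Step 4: two-year retention policy, specified in seconds.
bucket.retention_period = 2 * 365 * 24 * 60 * 60
bucket.patch()  # persist both the lifecycle rule and the retention policy

# Locking makes the retention policy permanent for compliance.
bucket.lock_retention_policy()
```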


Contribute your Thoughts:

Garry
4 months ago
Coldline? Is that really the best choice for logs? Sounds risky.
upvoted 0 times
...
Janet
4 months ago
Definitely need that retention policy for compliance!
upvoted 0 times
...
Stephaine
4 months ago
Wait, why would you use BigQuery for logs? That sounds expensive.
upvoted 0 times
...
Keith
4 months ago
I agree, Cloud Storage is great for long-term retention!
upvoted 0 times
...
Vincenza
4 months ago
B seems like the best option for cost efficiency.
upvoted 0 times
...
Buffy
5 months ago
I feel like the cron job approach in option C might be too manual; I thought we learned that using a sink is more efficient for log management.
upvoted 0 times
...
Nickole
5 months ago
I practiced a similar question where we had to balance cost and compliance, but I can't recall if we used Coldline or Archive storage for logs.
upvoted 0 times
...
Timothy
5 months ago
I think option B sounds familiar; using Cloud Storage with lifecycle rules seems like a cost-effective way to manage logs after 30 days.
upvoted 0 times
...
Marvel
5 months ago
I remember we discussed using BigQuery for active querying, but I'm not sure if it's the best choice for long-term storage.
upvoted 0 times
...
Hyman
5 months ago
This is a great example of where you really need to carefully read through all the details and requirements. It would be easy to just pick an option that looks good on the surface, but taking the time to fully understand everything that's needed is key to selecting the right solution.
upvoted 0 times
...
Dominic
5 months ago
I'm a little unsure about the BigQuery approach in options A and C. While it might be convenient for querying, I'm not sure it's the most cost-effective solution for long-term storage of 1TB of data per day. The Cloud Storage options seem like they'd be better aligned with the requirements.
upvoted 0 times
...
My
5 months ago
I think option B looks like the best approach here. Exporting the logs to a regional Cloud Storage bucket and then moving them to Coldline after 30 days seems like it would meet all the requirements while keeping costs down. The retention policy at the bucket level is a nice touch too.
upvoted 0 times
...
Alesia
5 months ago
This seems like a pretty straightforward question, but I want to make sure I understand all the requirements before I start answering. The key things I need to focus on are compliance, cost-effectiveness, and following Google's recommended practices.
upvoted 0 times
...
Sanda
5 months ago
Okay, let's break this down step-by-step. First, we need to make sure the logs are being exported properly and retained for the required 2 years. Then we need to figure out the best way to handle the active query period and the long-term storage.
upvoted 0 times
...
Thersa
5 months ago
I'm a bit confused by the wording of the question. I'll need to re-read it a few times to make sure I understand what they're asking.
upvoted 0 times
...
Kent
2 years ago
I personally prefer option D. It combines uploading logs into a Cloud Storage bucket with exporting into a regional Storage bucket and moving to Coldline after 30 days. It covers all bases.
upvoted 0 times
...
Myrtie
2 years ago
That's a valid point, Eleni. But what about the retention policy and lock requirement? Option B seems to address that effectively.
upvoted 0 times
...
Eleni
2 years ago
I disagree, I believe option A is better. Exporting logs into a partitioned BigQuery table with time partitioning expiration set to 30 days makes querying easier and more organized.
upvoted 0 times
...
Myrtie
2 years ago
I think option B is the best choice. Storing logs in a regional Cloud Storage bucket and moving them to Coldline after 30 days seems efficient and cost-effective.
upvoted 0 times
...
Lyla
2 years ago
Good point, Shaunna. The Cloud Ops agent might be overkill here. Perhaps we could explore a simpler solution like option D, where we just write a daily cron job to upload the logs to a Cloud Storage bucket and then use the lifecycle rules to manage the data.
upvoted 0 times
...
Shaunna
2 years ago
I agree with you both. Option B seems the most comprehensive and aligned with the requirements. The only thing I'm a little unsure about is the need to install the Cloud Ops agent on all instances. Is that really necessary?
upvoted 0 times
...
Tamesha
2 years ago
Option B definitely looks like the way to go. Exporting the logs to a regional Cloud Storage bucket, tiering the data to Coldline after a month, and setting a retention policy at the bucket level - that should do the trick.
upvoted 0 times
Gayla
2 years ago
B) 4. Configure a retention policy at the bucket level to create a lock.
upvoted 0 times
...
Christiane
2 years ago
B) 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
upvoted 0 times
...
Bulah
2 years ago
B) 2. Create a sink to export logs into a regional Cloud Storage bucket.
upvoted 0 times
...
In
2 years ago
B) 1. Install the Cloud Ops agent on all instances.
upvoted 0 times
...
...
Cammy
2 years ago
Hmm, this is a tricky one. We need to find a solution that is compliant, cost-effective, and aligns with Google's best practices. I'm leaning towards option B, as it seems to cover all the requirements.
upvoted 0 times
...
