Google Professional Cloud DevOps Engineer Exam - Topic 4 Question 84 Discussion

Actual exam question for Google's Professional Cloud DevOps Engineer exam
Question #: 84
Topic #: 4

Your company runs applications in Google Kubernetes Engine (GKE). Several applications rely on ephemeral volumes. You noticed some applications were unstable due to the DiskPressure node condition on the worker nodes. You need to identify which Pods are causing the issue, but you do not have execute access to workloads and nodes. What should you do?
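For context, part of this investigation can be done with read-only kubectl access, without exec'ing into anything: you can see which nodes are reporting DiskPressure and which Pods declare emptyDir (ephemeral) volumes. A rough sketch, assuming view-level kubectl permissions and jq installed; all names are illustrative:

# Nodes currently reporting the DiskPressure condition (read-only)
kubectl get nodes -o json | jq -r '.items[]
  | select(any(.status.conditions[]; .type == "DiskPressure" and .status == "True"))
  | .metadata.name'

# Pods that declare emptyDir volumes, across all namespaces (read-only)
kubectl get pods --all-namespaces -o json | jq -r '.items[]
  | select(any(.spec.volumes[]?; has("emptyDir")))
  | "\(.metadata.namespace)/\(.metadata.name)"'

Listing emptyDir Pods only tells you which workloads could be responsible; measuring how much ephemeral storage each one actually consumes without exec access is where the Cloud Monitoring metrics discussed in the comments below come in.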

Suggested Answer: C



Contribute your Thoughts:

Serina
6 days ago
I think B is too vague, we need specifics!
upvoted 0 times
...
Queenie
12 days ago
A is the best option to check disk usage metrics.
upvoted 0 times
...
Lang
17 days ago
I'm leaning towards option C since it talks about locating Pods with emptyDir volumes, but I don't remember if we can actually run commands without access.
upvoted 0 times
...
Xuan
23 days ago
I feel like we practiced a question similar to this, but I can't recall if we used df -h or du -sh for checking disk usage in Pods.
upvoted 0 times
...
Dortha
28 days ago
I think option A sounds familiar because it mentions checking the used bytes metric, which could help identify the Pods causing DiskPressure.
upvoted 0 times
...
Lon
1 month ago
I remember we discussed using Metrics Explorer to check metrics related to disk usage, but I'm not sure if it's specifically for ephemeral storage.
upvoted 0 times
...
Allene
1 month ago
I'm feeling pretty confident about this one. The key is to focus on the Pods with emptyDir volumes, as the question specifically mentions that the applications rely on ephemeral volumes. Using the df -h command to check the disk usage should do the trick.
upvoted 0 times
...
Han
1 month ago
Okay, I think I've got it. Since we can't access the nodes directly, the best approach is to locate all the Pods with emptyDir volumes and use the du -sh * command to measure the disk usage. That should give us the information we need.
upvoted 0 times
...
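For reference, the exec-based checks mentioned in the comments above would look roughly like the following, but both require execute access to the workload, which the question explicitly rules out; the namespace, Pod name, and mount path are illustrative:

kubectl exec -n <namespace> <pod-name> -- df -h              # used/available space per mounted filesystem
kubectl exec -n <namespace> <pod-name> -- du -sh /scratch/*  # size of each directory under an emptyDir mount (hypothetical path)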
Daniel
1 month ago
Hmm, I'm a bit confused. The question says we don't have execute access to the workloads and nodes, so I'm not sure if the Metrics Explorer approach will work. I might need to try a different strategy.
upvoted 0 times
...
Ira
1 month ago
This seems like a straightforward question. I'll check the node/ephemeral_storage/used_bytes metric in Metrics Explorer to identify the Pods causing the DiskPressure issue.
upvoted 0 times
...
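For reference, GKE system metrics expose ephemeral-storage usage in Cloud Monitoring, and a container- or Pod-scoped metric (rather than the node-scoped one) is what actually points at specific Pods. A sketch of the kind of query you could paste into the MQL editor in Metrics Explorer; verify the exact metric and label names in your project before relying on it:

fetch k8s_container
| metric 'kubernetes.io/container/ephemeral_storage/used_bytes'
| group_by [resource.namespace_name, resource.pod_name], max(val())
| top 10, max(val())

This surfaces the heaviest consumers of ephemeral storage without any exec access to nodes or workloads.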
Cecilia
6 months ago
Option A is the clear winner here. Checking the node/ephemeral_storage/used_bytes metric is the most direct way to pinpoint the problem Pods.
upvoted 0 times
Bok
5 months ago
Once we identify the Pods causing the issue, we can take the necessary steps to resolve the DiskPressure node condition.
upvoted 0 times
...
Andrew
5 months ago
I agree, let's check the node/ephemeral_storage/used_bytes metric to find the problematic Pods.
upvoted 0 times
...
Telma
6 months ago
Option A is definitely the way to go. That metric will give us the information we need.
upvoted 0 times
...
...
Rutha
6 months ago
B is too vague. We need to target the specific metric that will help us identify the Pods causing the DiskPressure issue.
upvoted 0 times
...
Alaine
7 months ago
Haha, looks like we need to be disk detectives! Option C might work, but I'd rather not mess with the worker nodes directly if I don't have to.
upvoted 0 times
...
Lorean
7 months ago
I think option D is the way to go. Using du -sh * will give us a detailed breakdown of disk usage for each Pod with an emptyDir volume.
upvoted 0 times
Carol
6 months ago
Consider scaling down the number of replicas for Pods with high disk usage to reduce pressure on the nodes.
upvoted 0 times
...
Elmira
6 months ago
Look for any Pods with high disk usage by checking the output of kubectl exec -- du -sh *.
upvoted 0 times
...
Venita
6 months ago
Use kubectl describe node to get more information about the DiskPressure condition.
upvoted 0 times
...
Melodie
6 months ago
Check the kubelet logs for any errors related to disk pressure.
upvoted 0 times
...
...
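For reference, the node-level checks suggested in the replies above also work with read-only access. A sketch, with the node name and filter purely illustrative, and assuming GKE system logging is enabled so kubelet logs land in Cloud Logging:

# Node conditions (including DiskPressure) and allocatable ephemeral storage
kubectl describe node <node-name>

# Kubelet logs via Cloud Logging instead of SSH to the node
gcloud logging read 'resource.type="k8s_node" AND log_id("kubelet") AND resource.labels.node_name="<node-name>"' --limit=50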
Merissa
7 months ago
I'm not sure, maybe we should also consider locating all the Pods with emptyDir volumes and use the df -h command to measure volume disk usage.
upvoted 0 times
...
Clement
7 months ago
I agree with Brynn, that seems like the best way to identify the Pods causing the issue.
upvoted 0 times
...
Brynn
7 months ago
I think we should check the node/ephemeral_storage/used_bytes metric by using Metrics Explorer.
upvoted 0 times
...
Erick
7 months ago
A seems like the most appropriate option here. We need to check the specific metric related to ephemeral storage usage on the nodes, not just a generic metric.
upvoted 0 times
Denny
6 months ago
B) I agree, we need to look at the specific metric related to ephemeral storage usage.
upvoted 0 times
...
Lavelle
6 months ago
A) Check the node/ephemeral_storage/used_bytes metric by using Metrics Explorer.
upvoted 0 times
...
...
