
Google Exam Associate Cloud Engineer Topic 2 Question 88 Discussion

Actual exam question for Google's Associate Cloud Engineer exam
Question #: 88
Topic #: 2

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

A. Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
D. Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.

Suggested Answer: D

Adding a GPU-enabled node pool to the existing cluster is the option that minimizes both effort and cost. A node pool is a group of nodes within a cluster that share the same configuration, so a pool of nodes with Nvidia Tesla P100 accelerators can be attached to the cluster that is already running, without recreating anything. The cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector then ensures that only the ML team's training pods are scheduled onto the expensive GPU nodes, while every other team's workloads keep running on the existing, cheaper nodes. With autoscaling enabled on the pool, GKE can scale it down to zero nodes whenever no training jobs are running, so the GPUs cost nothing while idle.
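
As a minimal sketch of the operator side (the cluster name, zone, machine type, and pool sizing below are hypothetical), a P100 node pool can be attached to the running cluster with a single gcloud command:

    gcloud container node-pools create gpu-pool \
        --cluster=shared-cluster \
        --zone=us-central1-c \
        --machine-type=n1-standard-4 \
        --accelerator=type=nvidia-tesla-p100,count=1 \
        --enable-autoscaling --min-nodes=0 --max-nodes=3

Setting the minimum size to zero lets the pool disappear entirely between training runs. On GKE Standard clusters the NVIDIA device drivers also need to be installed on the GPU nodes, for example by applying Google's driver-installer DaemonSet.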

A is incorrect because "accelerator: gpu" is not a valid annotation for requesting GPUs. In Kubernetes, GPUs are requested through the nvidia.com/gpu resource limit, and a specific GPU model on GKE is targeted with the cloud.google.com/gke-accelerator nodeSelector; an annotation would simply be ignored by the scheduler.

B is incorrect because recreating all the nodes of the cluster with GPUs attached would disrupt every team's workloads and would pay for costly P100s on nodes that do not need them, maximizing rather than minimizing cost.

C is incorrect because running your own Kubernetes cluster on top of Compute Engine means managing the control plane, upgrades, and node lifecycle yourself, and a dedicated cluster duplicates infrastructure the company already operates, maximizing rather than minimizing effort.
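
To make the scheduling side concrete, here is a minimal sketch of the pod specification the ML team would write (the pod name, container name, and image are hypothetical); the nodeSelector pins the pod to the P100 pool, and the nvidia.com/gpu limit actually reserves a device:

    apiVersion: v1
    kind: Pod
    metadata:
      name: p100-training
    spec:
      # Schedule only onto nodes that have Tesla P100 accelerators attached
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-tesla-p100
      restartPolicy: Never
      containers:
      - name: trainer
        # Hypothetical training image
        image: gcr.io/example-project/ml-trainer:latest
        resources:
          limits:
            # Reserve one GPU on the node for this container
            nvidia.com/gpu: 1

Without the nvidia.com/gpu limit, a pod might land on a GPU node but would never be granted a device, which is another reason the annotation in option A is not sufficient.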


Run GPUs in GKE node pools (Google Cloud documentation)

Schedule GPUs (Kubernetes documentation)

Contribute your Thoughts:

Stevie
18 days ago
Looks like the ML team is about to get a serious GPU boost. Time to start my own AI side hustle!
upvoted 0 times
Kaitlyn
20 days ago
I wonder if the ML team will ask for the latest and greatest GPUs next... Gotta stay ahead of the curve!
upvoted 0 times
Agustin
29 days ago
Adding a new, GPU-enabled node pool to the existing GKE cluster and using the nodeSelector seems like the best option to me. It's targeted and cost-effective.
upvoted 0 times
Tayna
1 month ago
Creating a dedicated Kubernetes cluster with GPU-enabled nodes sounds like a good idea, but it might be overkill for a non-production workload.
upvoted 0 times
Tammy
1 month ago
Recreating all the nodes to enable GPUs on all of them sounds like a lot of effort and cost. I don't think that's the most efficient approach.
upvoted 0 times
Tasia
2 months ago
I'm not sure, I think option C could also work. Creating a separate Kubernetes cluster dedicated to the ML team might simplify management.
upvoted 0 times
Alecia
2 months ago
The 'accelerator: gpu' annotation seems like a quick and easy solution, but I'm not sure whether it will work for the specific Nvidia Tesla P100 GPUs.
upvoted 0 times
Irene
1 month ago
D) Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
upvoted 0 times
Tomas
1 month ago
A) Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
upvoted 0 times
Oliva
2 months ago
I agree with Owen. Option D seems like the most efficient way to provide the ML team with the GPUs they need.
upvoted 0 times
Owen
2 months ago
I think option D is the best choice. It allows us to add GPU-enabled nodes to the existing GKE cluster.
upvoted 0 times
