
Google Associate Cloud Engineer Exam - Topic 2 Question 88 Discussion

Actual exam question for Google's Associate Cloud Engineer exam
Question #: 88
Topic #: 2

You are operating a Google Kubernetes Engine (GKE) cluster for your company, where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

A. Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
D. Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.

Suggested Answer: D

Adding a GPU-enabled node pool to the existing cluster meets the requirement with the least effort and cost. GKE automatically labels GPU nodes with cloud.google.com/gke-accelerator, so pods that include the nodeSelector cloud.google.com/gke-accelerator: nvidia-tesla-p100 are scheduled only onto the P100 nodes, while every other team's workload continues to run on the existing, cheaper node pools. The GPU node pool can also autoscale, even down to zero nodes when no training jobs are running, which keeps cost to a minimum.

A is incorrect because "accelerator: gpu" is not a valid annotation. GPUs in Kubernetes are requested through the nvidia.com/gpu resource limit and, on GKE, targeted with the cloud.google.com/gke-accelerator node label, not an annotation.

B is incorrect because recreating all the nodes to attach GPUs to every one of them puts expensive accelerators under every team's workload, including teams that do not need them. That maximizes rather than minimizes cost and effort.

C is incorrect because running a self-managed Kubernetes cluster on top of Compute Engine adds significant operational effort (control-plane management, upgrades, node repair) compared with simply adding a node pool to the existing managed GKE cluster.


About GPUs in GKE documentation

Run GPUs in GKE Standard node pools documentation
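As a minimal sketch of what option D asks of the ML team, the pod specification below combines the cloud.google.com/gke-accelerator nodeSelector with a nvidia.com/gpu resource limit (the pod name, container image, and GPU count are illustrative placeholders, not from the question):

```yaml
# Hypothetical pod spec: lands only on the P100 node pool and reserves one GPU.
apiVersion: v1
kind: Pod
metadata:
  name: ml-training            # placeholder name
spec:
  nodeSelector:
    # Matches the node label GKE applies to nodes in the GPU node pool.
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: trainer
    image: us-docker.pkg.dev/my-project/ml/trainer:latest  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1      # number of GPUs reserved for this container
```

The node pool itself can be added without touching the rest of the cluster, for example with gcloud container node-pools create and an --accelerator type=nvidia-tesla-p100,count=1 flag.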

Contribute your Thoughts:

Roosevelt
3 months ago
Not sure about that nodeSelector thing, sounds complicated.
upvoted 0 times
...
Onita
3 months ago
Definitely go with D, it’s the most efficient way!
upvoted 0 times
...
Ernie
4 months ago
Wait, do we really need a whole new cluster for just one team?
upvoted 0 times
...
Hailey
4 months ago
I think recreating all nodes (Option B) is overkill.
upvoted 0 times
...
Yuette
4 months ago
Option D is the best choice for adding GPU support.
upvoted 0 times
...
Shayne
4 months ago
Creating a separate cluster for the ML team sounds like a lot of overhead. I hope that's not the best option here.
upvoted 0 times
...
Shantay
5 months ago
I'm leaning towards option D, but I feel like I need to double-check if the nodeSelector syntax is correct.
upvoted 0 times
...
Nadine
5 months ago
I think recreating all the nodes seems excessive. We practiced a question where we just added a new node pool instead.
upvoted 0 times
...
Terrilyn
5 months ago
I remember something about adding annotations for GPU access, but I'm not sure if it's just for pods or if it applies to the whole cluster.
upvoted 0 times
...
Nieves
5 months ago
I'm not sure about the "accelerator" annotation - does that actually work with GKE? I'd want to double-check the documentation on that one.
upvoted 0 times
...
Golda
5 months ago
Okay, I think I've got it. We need to add a new node pool with the right GPU hardware and then have the ML team target those nodes. That should be the most efficient approach.
upvoted 0 times
...
Wendell
5 months ago
Hmm, I'm a bit confused. Do we need to create a whole new cluster just for the ML team? That seems like a lot of extra work.
upvoted 0 times
...
Cyril
5 months ago
This seems pretty straightforward. I think the key is to add GPU-enabled nodes to the cluster without having to recreate the entire thing.
upvoted 0 times
...
Jolene
5 months ago
I'm pretty confident the answer is D.
upvoted 0 times
...
Stevie
10 months ago
Looks like the ML team is about to get a serious GPU boost. Time to start my own AI side hustle!
upvoted 0 times
...
Kaitlyn
10 months ago
I wonder if the ML team will ask for the latest and greatest GPUs next... Gotta stay ahead of the curve!
upvoted 0 times
Odette
8 months ago
D) Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
upvoted 0 times
...
Yuonne
9 months ago
C) Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
upvoted 0 times
...
Elden
9 months ago
A) Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
upvoted 0 times
...
...
Agustin
10 months ago
Adding a new, GPU-enabled node pool to the existing GKE cluster and using the nodeSelector seems like the best option to me. It's targeted and cost-effective.
upvoted 0 times
...
Tayna
10 months ago
Creating a dedicated Kubernetes cluster with GPU-enabled nodes sounds like a good idea, but it might be overkill for a non-production workload.
upvoted 0 times
...
Tammy
10 months ago
Recreating all the nodes to enable GPUs on all of them sounds like a lot of effort and cost. I don't think that's the most efficient approach.
upvoted 0 times
Julianna
9 months ago
A) That sounds like a more efficient way to provide access to GPUs for the ML team without recreating all the nodes.
upvoted 0 times
...
Diane
9 months ago
D) Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
upvoted 0 times
...
Francoise
9 months ago
A) Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
upvoted 0 times
...
...
Tasia
11 months ago
I'm not sure, I think option C could also work. Creating a separate Kubernetes cluster dedicated to the ML team might simplify management.
upvoted 0 times
...
Alecia
11 months ago
The 'accelerator: gpu' annotation seems like a quick and easy solution, but I'm not sure if that will work for the specific Nvidia Tesla P100 GPUs.
upvoted 0 times
Irene
10 months ago
D) Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
upvoted 0 times
...
Tomas
10 months ago
A) Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
upvoted 0 times
...
...
Oliva
11 months ago
I agree with Owen. Option D seems like the most efficient way to provide the ML team with the GPUs they need.
upvoted 0 times
...
Owen
11 months ago
I think option D is the best choice. It allows us to add GPU-enabled nodes to the existing GKE cluster.
upvoted 0 times
...
