
Google Exam Associate Cloud Engineer Topic 2 Question 88 Discussion

Actual exam question for Google's Associate Cloud Engineer exam
Question #: 88
Topic #: 2

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

Suggested Answer: A

To give the ML team access to Nvidia Tesla P100 GPUs with minimal effort and cost, add a new node pool with P100-equipped nodes to the existing GKE cluster rather than standing up separate infrastructure. When you create a GPU node pool on a GKE Standard cluster, GKE automatically taints the GPU nodes with nvidia.com/gpu=present:NoSchedule, so the other teams' non-GPU workloads are unaffected and only pods that explicitly request the GPU resource are scheduled onto the new nodes. Enabling cluster autoscaling on the GPU node pool keeps cost down, because the expensive P100 nodes can scale down to zero when no training jobs are running.

Creating a separate, dedicated cluster for the ML team would also work, but it duplicates control-plane and operational overhead, which conflicts with the goal of minimizing effort and cost.

Simply labeling or annotating pods (for example with an 'accelerator: gpu' annotation) does nothing by itself: the cluster must actually contain nodes with the required GPU model attached, and the pods must request the nvidia.com/gpu resource to be scheduled onto them.


GKE documentation: About GPUs in GKE

GKE documentation: Run GPUs in Standard node pools
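A minimal sketch of the node-pool approach, assuming a GKE Standard cluster. The cluster name, zone, machine type, pool sizes, and container image below are placeholders, not values from the question:

```shell
# Add a P100 node pool to the EXISTING cluster (names/zone are placeholders).
# GKE taints these nodes automatically, so other teams' workloads are unaffected;
# autoscaling down to 0 nodes keeps GPU cost low between training runs.
gcloud container node-pools create gpu-pool \
  --cluster=CLUSTER_NAME \
  --zone=us-central1-a \
  --accelerator=type=nvidia-tesla-p100,count=1 \
  --machine-type=n1-standard-4 \
  --num-nodes=1 \
  --enable-autoscaling --min-nodes=0 --max-nodes=3

# On GKE Standard, install the NVIDIA drivers with Google's DaemonSet:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml

# A training pod then requests the GPU resource (placeholder image):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ml-training
spec:
  containers:
  - name: trainer
    image: us-docker.pkg.dev/my-project/ml/trainer:latest
    resources:
      limits:
        nvidia.com/gpu: 1
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
EOF
```

The nvidia.com/gpu resource limit is what satisfies the GPU node taint, and the cloud.google.com/gke-accelerator node selector pins the pod to the P100 nodes specifically.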

Contribute your Thoughts:

Tasia
5 days ago
I'm not sure; I think option C could also work. Creating a separate Kubernetes cluster dedicated to the ML team might simplify management.
upvoted 0 times
Alecia
6 days ago
The 'accelerator: gpu' annotation seems like a quick and easy solution, but I'm not sure if that will work for the specific Nvidia Tesla P100 GPUs.
upvoted 0 times
Oliva
8 days ago
I agree with Owen. Option D seems like the most efficient way to provide the ML team with the GPUs they need.
upvoted 0 times
Owen
14 days ago
I think option D is the best choice. It allows us to add GPU-enabled nodes to the existing GKE cluster.
upvoted 0 times