
Google Exam Professional Machine Learning Engineer Topic 4 Question 84 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 84
Topic #: 4

You work for a bank. You have created a custom model to predict whether a loan application should be flagged for human review. The input features are stored in a BigQuery table. The model is performing well, and you plan to deploy it to production. Due to compliance requirements, the model must provide explanations for each prediction. You want to add this functionality to your model code with minimal effort and provide explanations that are as accurate as possible. What should you do?

A. Create an AutoML tabular model by using the features as training data, and use the integrated Vertex Explainable AI.
B. Create a BigQuery ML deep neural network model, and use the EXPLAIN_PREDICT method.
C. Upload the custom model to Vertex AI Model Registry, and configure feature-based attribution by using sampled Shapley.
D. Update the custom serving container to include sampled Shapley-based explanations in the prediction outputs.

Suggested Answer: D

Contribute your Thoughts:

Estrella
14 days ago
Option A sounds like the 'Easy Mode' for getting explainability. I hope it doesn't come with a 'Pay-to-Win' microtransaction plan though.
upvoted 0 times
Franchesca
1 month ago
Option D is my pick. Updating the custom serving container to include sampled Shapley-based explanations seems like the most accurate way to explain the model's predictions.
upvoted 0 times
Danica
2 days ago
That sounds like the most efficient way to meet compliance requirements while ensuring accurate explanations.
upvoted 0 times
Dominga
3 days ago
Agreed, updating the custom serving container with sampled Shapley-based explanations is the way to go.
upvoted 0 times
Gabriele
8 days ago
I think option D is the best choice too. It will provide accurate explanations for each prediction.
upvoted 0 times
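For readers wondering what the "sampled Shapley-based explanations" mentioned for options C and D actually compute, here is a minimal, self-contained sketch of the sampled Shapley approximation. The function and parameter names are illustrative, not from any Google library; Vertex Explainable AI's `path_count` setting plays the role of `num_permutations` here.

```python
import random

def sampled_shapley(f, x, baseline, num_permutations=50, seed=0):
    """Approximate Shapley attributions for the prediction f(x) vs. a baseline.

    Averages each feature's marginal contribution over random feature
    orderings -- the sampling scheme behind sampled Shapley attribution.
    """
    rng = random.Random(seed)
    n = len(x)
    attributions = [0.0] * n
    for _ in range(num_permutations):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)           # start from the baseline input
        prev = f(current)
        for i in order:                    # flip features to x one at a time
            current[i] = x[i]
            nxt = f(current)
            attributions[i] += nxt - prev  # marginal contribution of feature i
            prev = nxt
    return [a / num_permutations for a in attributions]
```

A useful sanity check: the attributions always sum to `f(x) - f(baseline)`, and for a purely linear model each attribution is exactly `weight_i * (x_i - baseline_i)`.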
Lai
1 month ago
Option B looks promising, using BigQuery ML's EXPLAIN_PREDICT method. It's a good way to get explanations without modifying the model too much.
upvoted 0 times
Tina
8 days ago
Let's go ahead and implement the BigQuery ML deep neural network model with the EXPLAIN_PREDICT method. It should help us meet the compliance requirements easily.
upvoted 0 times
Jesusa
20 days ago
I think we should go with Option B then. It's a simple solution that can provide the necessary explanations for compliance requirements.
upvoted 0 times
Felix
22 days ago
I agree. It's important to have accurate explanations for each prediction, and using the EXPLAIN_PREDICT method in BigQuery ML can help with that.
upvoted 0 times
Kaycee
1 month ago
Option B sounds like a good choice. It seems like a straightforward way to add explanations to the model.
upvoted 0 times
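For context on the option B discussion: in BigQuery ML the feature is the `ML.EXPLAIN_PREDICT` table function, which returns per-feature attributions alongside each prediction. A sketch of what that query looks like; the dataset, model, and table names below are placeholders, not from the original question:

```python
# Build an ML.EXPLAIN_PREDICT query for a BigQuery ML model.
# `bank.loan_review_model` and `bank.loan_applications` are hypothetical names.
def explain_predict_sql(model: str, table: str, top_k: int = 5) -> str:
    """Return a query yielding predictions with top-k feature attributions."""
    return (
        "SELECT *\n"
        f"FROM ML.EXPLAIN_PREDICT(MODEL `{model}`,\n"
        f"  TABLE `{table}`,\n"
        f"  STRUCT({top_k} AS top_k_features))"
    )

query = explain_predict_sql("bank.loan_review_model", "bank.loan_applications")
# The query string would then be run with the BigQuery client, e.g.:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
```

Note the catch raised implicitly by the question: this only works for models trained *in* BigQuery ML, so a custom model would have to be recreated there first.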
Kimbery
1 month ago
I'm leaning towards Option C. Uploading the custom model to Vertex AI Model Registry and configuring feature-based attribution using sampled Shapley allows me to keep more control over the model.
upvoted 0 times
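Configuring feature-based attribution at upload time (option C) amounts to attaching an explanation spec to the model in Vertex AI Model Registry. A sketch of what that configuration might contain, with hypothetical feature names (`loan_amount`, `applicant_income`); field names follow the Vertex AI ExplanationSpec for sampled Shapley, but treat the exact values as placeholders:

```python
# Hypothetical explanation configuration for a model upload; field names
# mirror the Vertex AI ExplanationSpec, values are illustrative only.
explanation_parameters = {
    "sampled_shapley_attribution": {
        # Number of feature permutations to average over;
        # higher is more accurate but slower.
        "path_count": 10,
    }
}

explanation_metadata = {
    "inputs": {
        # One entry per input feature; the baseline is the reference
        # input that attributions are measured against.
        "loan_amount": {"input_baselines": [0.0]},
        "applicant_income": {"input_baselines": [0.0]},
    },
    "outputs": {"flag_for_review": {"output_tensor_name": "scores"}},
}

# With the google-cloud-aiplatform SDK, settings like these are supplied
# when uploading the model (aiplatform.Model.upload) so that the deployed
# endpoint can serve explanations alongside predictions.
```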
Teddy
2 months ago
I'm not sure about option A. I think option C might provide more accurate explanations with feature-based attribution.
upvoted 0 times
Nakisha
2 months ago
I agree with Jacquelyne. Using AutoML with Vertex Explainable AI seems like the most efficient way to meet compliance requirements.
upvoted 0 times
Ronald
2 months ago
Option A seems like the easiest way to get explainability with minimal effort. AutoML Tabular models come with built-in Vertex Explainable AI, so that's an attractive choice.
upvoted 0 times
Raina
28 days ago
Yeah, uploading the custom model to Vertex AI Model Registry and configuring feature-based attribution with sampled Shapley sounds like a good approach for accurate explanations.
upvoted 0 times
Rickie
1 month ago
Creating a BigQuery ML deep neural network model with the EXPLAIN_PREDICT method could also work, but option A seems more straightforward.
upvoted 0 times
Yuki
1 month ago
I agree, using the integrated Vertex Explainable AI would make it easier to add explanations to the model code.
upvoted 0 times
Valda
1 month ago
I think option A is the best choice. AutoML Tabular models with Vertex Explainable AI would provide accurate explanations with minimal effort.
upvoted 0 times
Jacquelyne
2 months ago
I think option A sounds like a good choice for adding explanations to the model.
upvoted 0 times
