Welcome to Pass4Success


Google Exam Professional Machine Learning Engineer Topic 3 Question 92 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 92
Topic #: 3

Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?

Suggested Answer: A (Add synthetic training data where those phrases are used in non-toxic ways)

The elevated false positive rate for comments mentioning certain underrepresented religious groups suggests the classifier has seen too few benign examples that reference those groups, so it has learned a spurious association between those references and toxicity. Adding synthetic training data in which the flagged phrases appear in clearly non-toxic contexts, then retraining, directly targets that gap. It is also the most practical option for a team with a limited budget: replacing the model with a different classifier gives no guarantee the bias disappears, replacing the model with human moderation is expensive and does not scale, and raising the toxicity threshold would reduce false positives for every group at the cost of letting more genuinely toxic comments through, without fixing the disparity itself.

(Note: the original explanation shown here described Vertex AI Pipelines and weekly retraining, which belongs to a different question.)
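The disparity described in the question can be verified before and after retraining by slicing the false positive rate by group. A minimal sketch, assuming an audit log of benign and toxic comments annotated with group membership; the record fields and group names are illustrative, not from the original question:

```python
def false_positive_rate(records, group):
    """FPR = FP / (FP + TN), computed over benign (label 0) comments in one group."""
    fp = tn = 0
    for r in records:
        if r["group"] == group and r["label"] == 0:  # benign comments only
            if r["flagged"]:   # classifier wrongly flagged a benign comment
                fp += 1
            else:              # classifier correctly left it alone
                tn += 1
    return fp / (fp + tn) if (fp + tn) else 0.0

# Toy audit data: label 0 = benign, "flagged" = classifier output.
comments = [
    {"group": "majority",          "label": 0, "flagged": False},
    {"group": "majority",          "label": 0, "flagged": False},
    {"group": "majority",          "label": 0, "flagged": True},
    {"group": "majority",          "label": 0, "flagged": False},
    {"group": "minority_religion", "label": 0, "flagged": True},
    {"group": "minority_religion", "label": 0, "flagged": True},
    {"group": "minority_religion", "label": 0, "flagged": False},
    {"group": "minority_religion", "label": 0, "flagged": False},
]

print(false_positive_rate(comments, "majority"))           # 0.25
print(false_positive_rate(comments, "minority_religion"))  # 0.5
```

A gap like the 0.25 vs 0.5 above is exactly the symptom in the question; rerunning the same slice metric after retraining shows whether the remedy worked.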


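The suggested remedy, generating benign examples that mention the affected groups, can be as simple as filling templates and appending the results to the training set. A hypothetical sketch; the template text, group names, and field names are invented for illustration:

```python
# Templates for benign comments that reference a group in non-toxic ways.
BENIGN_TEMPLATES = [
    "I celebrated {holiday} with my {group} community this weekend.",
    "As a {group} person, I really enjoyed this discussion.",
    "Our {group} congregation is hosting a charity drive.",
]

def make_synthetic_examples(groups, holidays):
    """Return synthetic non-toxic training examples (label 0) for each group."""
    examples = []
    for group in groups:
        for tmpl in BENIGN_TEMPLATES:
            text = tmpl.format(group=group,
                               holiday=holidays.get(group, "a holiday"))
            examples.append({"text": text, "label": 0})  # 0 = non-toxic
    return examples

synthetic = make_synthetic_examples(["groupA"], {"groupA": "a festival"})
print(len(synthetic))  # 3 examples, one per template
```

In practice the templates would be reviewed with members of the affected communities so the synthetic data reflects how those references actually appear in benign speech.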
Contribute your Thoughts:

Mollie
2 months ago
Replacing the model? Sounds like a lot of work. Why not just teach the AI to recognize sarcasm and irony? Problem solved!
upvoted 0 times
Nathalie
19 days ago
Raising the threshold for what is considered toxic could reduce false positives without replacing the model.
upvoted 0 times
Launa
20 days ago
C) Replace your model with a different text classifier.
upvoted 0 times
Aimee
30 days ago
Removing the model and using human moderation might be more effective in the long run.
upvoted 0 times
Silva
30 days ago
B) Remove the model and replace it with human moderation.
upvoted 0 times
Mabelle
1 month ago
A) Add synthetic training data where those phrases are used in non-toxic ways
upvoted 0 times
Quinn
1 month ago
Adding synthetic training data could help improve the accuracy of the classifier.
upvoted 0 times
Larue
2 months ago
Synthetic data, huh? Sounds like a job for the AI squad. Though I'd keep an eye on the 'robot overlord' situation, just in case.
upvoted 0 times
Tandra
2 months ago
Hmm, option D sounds like the most practical approach given the budget constraints. Why make life harder for the mods, right?
upvoted 0 times
Royce
1 month ago
Yeah, it's important to make sure the mods aren't overwhelmed with unnecessary flagged comments.
upvoted 0 times
Becky
2 months ago
I agree, option D seems like the best choice to reduce false positives.
upvoted 0 times
Galen
2 months ago
I agree with Tracey; raising the threshold could be a simpler solution.
upvoted 0 times
Leigha
2 months ago
Oh boy, the old 'toxic language' vs. 'religious sensitivity' conundrum. I bet this one's a real headache for the dev team!
upvoted 0 times
Francisca
1 month ago
C) Replace your model with a different text classifier.
upvoted 0 times
Basilia
1 month ago
B) Remove the model and replace it with human moderation.
upvoted 0 times
Coral
1 month ago
A) Add synthetic training data where those phrases are used in non-toxic ways
upvoted 0 times
Tracey
2 months ago
But wouldn't it be better to raise the threshold for comments instead?
upvoted 0 times
Adrianna
2 months ago
I think we should add synthetic training data for those phrases.
upvoted 0 times
