Welcome to Pass4Success


Microsoft AB-100 Exam - Topic 3 Question 4 Discussion

Actual exam question for Microsoft's AB-100 exam
Question #: 4
Topic #: 3
[All AB-100 Questions]

A company has an AI solution named Solution1 that is deployed to the production environment. Solution1 uses an Azure OpenAI model to generate marketing emails for existing customers.

During an internal review, you identify that Solution1 creates different emails depending on the customers' traits.

You need to recommend a strategy to mitigate the bias. The strategy must adhere to Microsoft Responsible AI principles.

What should you recommend?

Suggested Answer: A

The scenario describes a deployed AI solution using Azure OpenAI that exhibits bias: it produces disparate outcomes based on customer traits. This directly impacts the Fairness principle of Microsoft's Responsible AI framework.

Why 'Modify the system instructions' is the Correct Strategy:

Direct Control via System Metaprompts: In large language model (LLM) applications like those powered by Azure OpenAI, the system instructions (or system message) define the behavior, constraints, and tone of the model. By modifying these instructions, you can explicitly direct the model to treat all customer segments equitably and ignore specific sensitive traits when drafting marketing content.

Mitigation without Re-engineering:

Options B and D (Training/Retraining): Azure OpenAI models are foundation models. Most companies consume them via API and do not have access to the original training dataset to modify it. While fine-tuning is possible, it is significantly more expensive and complex than prompt engineering.

Option C (Randomization): Randomization does not solve bias; it creates inconsistency and potentially irrelevant content, violating the Reliability and Safety principle.

Alignment with Responsible AI: Microsoft's documentation on Fairness recommends 'Instructional Mitigation.' This involves adding specific rules to the system prompt, such as: 'You must ensure the tone and value proposition of the email remain consistent across all demographic groups' or 'Do not use customer traits such as age or gender to influence the core marketing message.'
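A minimal sketch of what this instructional mitigation looks like in practice, assuming a chat-style request to an Azure OpenAI deployment. The helper name, the exact rule wording, and the customer/product fields are illustrative assumptions, not part of the exam question; the key point is that the fairness constraints live in the system message, so they apply to every generated email.

```python
# Hypothetical sketch: prepend fairness rules to the system message of a
# chat request, so the model is instructed to ignore sensitive traits.

FAIRNESS_RULES = (
    "You must ensure the tone and value proposition of the email remain "
    "consistent across all demographic groups. Do not use customer traits "
    "such as age or gender to influence the core marketing message."
)

def build_messages(customer_name: str, product: str) -> list[dict]:
    """Assemble chat messages with the bias-mitigation rules in the system role."""
    system_message = (
        "You write marketing emails for existing customers. " + FAIRNESS_RULES
    )
    return [
        {"role": "system", "content": system_message},
        {
            "role": "user",
            "content": f"Draft a short marketing email to {customer_name} "
                       f"about {product}.",
        },
    ]

# The resulting list is what would be passed as the `messages` payload of a
# chat completion call; every request carries the same fairness constraints.
messages = build_messages("Avery", "the new premium plan")
```

Because the rules sit in the system instructions rather than in each user prompt, they cannot be forgotten or varied per customer, which is exactly why option A mitigates the bias without retraining the model.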

