Your company is developing an AI-powered customer support agent. You need to ensure that the solution follows Microsoft's responsible AI principles. Which two actions should you perform? Select the two BEST answers. Each correct answer presents part of the solution.
To align an AI customer support agent with Microsoft's responsible AI principles, the two highest-impact actions are validating fairness/inclusiveness and being transparent with users. B is correct because testing for inclusive, culturally sensitive responses directly supports the fairness principle and helps reduce harm. In practice, you evaluate responses across diverse user personas, languages and dialects, accessibility scenarios, and sensitive contexts, looking for biased assumptions, stereotyping, exclusionary language, and disparate quality of service (a minimal evaluation sketch follows below). This also implies ongoing monitoring, because model behavior can drift as prompts, knowledge sources, and user inputs evolve.
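As an illustration only, here is a minimal sketch of such a fairness spot-check. The `get_agent_response` function is a hypothetical stand-in for the deployed agent, and the persona list, test prompts, and flagged-terms list are illustrative assumptions, not an official Microsoft evaluation suite:

```python
# Minimal fairness/inclusiveness spot-check for an AI support agent.
# get_agent_response is a hypothetical stub; replace with the real agent call.

PERSONAS = [
    {"id": "en-US-default", "locale": "en-US", "context": ""},
    {"id": "en-IN-dialect", "locale": "en-IN", "context": ""},
    {"id": "screen-reader", "locale": "en-US",
     "context": "I use a screen reader."},
]

TEST_PROMPTS = [
    "My payment failed, what should I do?",
    "How do I reset my password?",
]

# Illustrative examples of presumptive or exclusionary phrasing to flag.
FLAGGED_TERMS = ["obviously", "just simply", "any normal user"]


def get_agent_response(prompt: str, locale: str, context: str) -> str:
    """Hypothetical stub standing in for the real agent endpoint."""
    return f"[{locale}] Here is some help with: {prompt}"


def audit() -> list[dict]:
    """Run every test prompt against every persona and collect findings."""
    findings = []
    for persona in PERSONAS:
        for prompt in TEST_PROMPTS:
            reply = get_agent_response(prompt, persona["locale"],
                                       persona["context"])
            hits = [t for t in FLAGGED_TERMS if t in reply.lower()]
            if hits:
                findings.append({"persona": persona["id"],
                                 "prompt": prompt, "terms": hits})
    return findings


if __name__ == "__main__":
    for f in audit():
        print(f"FLAG {f['persona']}: {f['terms']} in reply to {f['prompt']!r}")
```

Re-running a check like this on a schedule, and comparing findings across personas over time, is one simple way to catch the drift mentioned above.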
E is correct because a clear disclaimer supports the transparency principle: customers should know they are interacting with an AI system, understand what kind of assistance it can provide, and know what to do if a response is incorrect or they need a human. Disclosure is also a practical risk control that reduces overreliance and sets expectations about the system's limitations (a simple wrapper sketch follows below).
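A minimal sketch of how such a disclosure might be attached, assuming a hypothetical `wrap_with_disclosure` helper around whatever the agent returns; the wording and the escalation path are illustrative, not prescribed by Microsoft:

```python
# Hypothetical disclosure wrapper; wording and escalation path are examples.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. Answers may be incomplete or "
    "incorrect; reply 'agent' at any time to reach a human."
)


def wrap_with_disclosure(agent_reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{agent_reply}"
    return agent_reply


print(wrap_with_disclosure("Your refund was issued.", first_turn=True))
```

Showing the notice on the first turn (rather than on every message) keeps the conversation readable while still ensuring the customer is informed before relying on any answer.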
The other options do not align as well with responsible AI: A (retain all conversations) can conflict with privacy and data minimization, since retention must be justified and governed rather than automatic; C (operate independently) undermines accountability and human oversight; D (serve multiple purposes) broadens scope and risk rather than improving responsible use.