Your company is developing an AI-powered customer support agent. You need to ensure that the solution follows Microsoft responsible AI principles. Which two actions should you perform? Select the two BEST answers. Each correct answer presents part of the solution.
To align an AI customer support agent with Microsoft's Responsible AI principles, two high-impact actions are fairness/inclusiveness validation and transparency to users. B is correct because testing for inclusive and culturally sensitive responses directly supports fairness and helps reduce harm. In practice, you evaluate responses across diverse user personas, languages/dialects, accessibility scenarios, and sensitive contexts. You look for biased assumptions, stereotyping, exclusionary language, and disparate quality of service. This also implies ongoing monitoring because model behavior can drift as prompts, knowledge sources, and user inputs evolve.
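The persona-based evaluation described above can be sketched as a small harness. This is an illustrative sketch only: the agent is stubbed with a placeholder function (in practice you would call your deployed support agent), and the banned-phrase list stands in for richer checks such as bias classifiers or human review.

```python
# Minimal sketch of a fairness-style evaluation harness (illustrative only).
# PERSONAS, BANNED_PHRASES, and stub_agent are placeholder assumptions.

PERSONAS = [
    {"name": "native speaker", "prompt": "How do I reset my password?"},
    {"name": "dialect variant", "prompt": "How do I reset me password?"},
    {"name": "accessibility", "prompt": "I use a screen reader. How do I reset my password?"},
]

# Exclusionary or dismissive language we never want in a support reply.
BANNED_PHRASES = ["just google it", "obviously"]

def stub_agent(prompt: str) -> str:
    """Placeholder for the real deployed support agent."""
    return "To reset your password, open Settings > Security and choose Reset."

def evaluate(agent, personas, banned):
    """Run each persona through the agent and flag problematic phrases."""
    results = []
    for p in personas:
        reply = agent(p["prompt"]).lower()
        flagged = [b for b in banned if b in reply]
        results.append({"persona": p["name"], "flagged": flagged})
    return results

report = evaluate(stub_agent, PERSONAS, BANNED_PHRASES)
for row in report:
    status = "OK" if not row["flagged"] else f"flagged: {row['flagged']}"
    print(row["persona"], status)
```

Running a harness like this on a schedule (not just once) is what makes the "ongoing monitoring" point actionable: drift in prompts or knowledge sources shows up as new flags across personas.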
E is correct because a clear disclaimer supports transparency: customers should know they are interacting with an AI system, understand the type of assistance it can provide, and know what to do if the response is incorrect or they need a human. A disclosure is also a practical risk-control that reduces overreliance and sets expectations about limitations.
The other options are not best for Responsible AI alignment: A (retain all conversations) can conflict with privacy/data minimization; retention must be justified and governed, not automatic. C (operate independently) undermines accountability and human oversight. D (multiple purposes) increases scope and risk rather than improving responsible use.
In which scenario is Azure Machine Learning most likely to deliver strategic value for an organization?
Azure Machine Learning delivers the most strategic value when an organization needs to build, train, evaluate, and operationalize predictive models that improve decisions at scale. Option A is a classic predictive analytics use case: forecasting demand using historical sales across product categories. This typically involves time-series forecasting, feature engineering (seasonality, promotions, macro signals), model training/validation, deployment, and continuous monitoring: exactly the lifecycle Azure Machine Learning is designed to support (ML pipelines, model management, deployment endpoints, and MLOps). Forecasting demand can materially improve inventory optimization, supply chain planning, and revenue outcomes, which is why it's strategic.
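The feature-engineering step in that lifecycle can be illustrated with a minimal sketch. The lag windows, rolling-mean size, and sample data below are assumptions for illustration, not Azure Machine Learning specifics; in a real pipeline this logic would run as a data-preparation step before model training.

```python
# Illustrative sketch of lag-feature engineering for demand forecasting.
# Lag choices (1-day and 7-day) and the 3-day rolling window are assumptions.

def make_lag_features(series, lags=(1, 7), window=3):
    """Turn a univariate sales history into (features, target) training rows."""
    rows = []
    max_lag = max(max(lags), window)
    for t in range(max_lag, len(series)):
        feats = {f"lag_{l}": series[t - l] for l in lags}
        # Short rolling mean captures recent trend.
        feats["rolling_mean"] = sum(series[t - window:t]) / window
        rows.append((feats, series[t]))
    return rows

sales = [10, 12, 11, 13, 15, 14, 16, 18, 17, 19]
dataset = make_lag_features(sales)
print(dataset[0])
# -> ({'lag_1': 16, 'lag_7': 10, 'rolling_mean': 15.0}, 18)
```

The resulting rows would feed a regression or forecasting model; Azure Machine Learning's contribution is wrapping steps like this into reproducible pipelines with tracked experiments and managed deployment.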
B (digitizing paper processes) is more aligned to workflow automation and document processing (often Document Intelligence + Power Automate), not primarily Azure ML. C is sentiment analysis, which can be solved with prebuilt language services and doesn't necessarily require custom ML training unless you need a highly specialized classifier. D (location-based personalization) is commonly rules-based or CRM/marketing automation; it may use AI, but it doesn't inherently require building a custom ML model, unless you're doing advanced propensity modeling.
Your company uses a non-reasoning generative AI model to create textual content. You discover that the model's responses are inconsistent and do NOT meet expectations. You need to improve the prompts. What should you do? More than one answer choice may achieve the goal. Select the BEST answer.
When a non-reasoning generative AI model produces inconsistent outputs, the most reliable improvement is to make the prompt more specific, constrained, and demonstrative of what "good" looks like.
A is correct because adding high-quality examples is a form of few-shot prompting. Examples act like "training wheels" at inference time: they show the model the desired structure, tone, level of detail, formatting rules, and boundaries. This reduces ambiguity and variance, especially for tasks like marketing copy, summaries, policy text, or customer replies. The more your examples resemble real target outputs (including edge cases), the more consistent the model's completions become.
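Few-shot prompting is often implemented by prepending worked examples to the message list. The sketch below uses the common OpenAI-style chat message schema; the example pairs are placeholders you would replace with approved, real outputs from your own domain.

```python
# Sketch of few-shot prompting: prepend worked example pairs to the messages.
# The example content below is a placeholder assumption for illustration.

FEW_SHOT = [
    ("Summarize: Order #123 arrived damaged.",
     "- Issue: damaged delivery\n- Order: #123\n- Next step: offer replacement"),
    ("Summarize: Customer cannot log in after update.",
     "- Issue: login failure post-update\n- Order: n/a\n- Next step: guide password reset"),
]

def build_messages(task: str, examples=FEW_SHOT):
    """Assemble a chat message list: system rules, example pairs, then the task."""
    messages = [{"role": "system",
                 "content": "Summarize support tickets as three bullet points."}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_messages("Summarize: Refund not received after 10 days.")
print(len(msgs))  # system + 2 example pairs + final task = 6 messages
```

Because the examples fix the output shape (three bullets, same labels), completions vary far less from call to call than with an instruction alone.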
B is correct because adding context, relevant source material, and explicit expectations narrows the model's degrees of freedom. Including the intended audience, purpose, constraints (length, voice, banned claims), and trusted reference content (approved facts, product specs, policy excerpts) helps the model stay aligned and reduces hallucinations and off-brand language. This is also where you specify acceptance criteria such as "must include 3 bullet points," "use UK English," or "cite only provided text."
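Injecting approved facts and acceptance criteria into the prompt can be sketched as simple string assembly. The facts and criteria below are illustrative placeholders; in production they would come from a governed knowledge source.

```python
# Sketch of a grounded, constrained prompt: approved facts plus explicit
# acceptance criteria are injected into the instructions (placeholder content).

APPROVED_FACTS = [
    "Standard shipping takes 3-5 business days.",
    "Refunds are processed within 10 business days.",
]

CRITERIA = [
    "Answer in UK English.",
    "Use at most 3 bullet points.",
    "Cite only the provided facts; if unsure, say you will escalate to a human.",
]

def build_prompt(question: str, facts=APPROVED_FACTS, criteria=CRITERIA) -> str:
    """Assemble a single grounded prompt string for the model."""
    parts = ["You are a customer support assistant.", "Approved facts:"]
    parts += [f"- {f}" for f in facts]
    parts.append("Requirements:")
    parts += [f"- {c}" for c in criteria]
    parts.append(f"Customer question: {question}")
    return "\n".join(parts)

prompt = build_prompt("When will I get my refund?")
print(prompt)
```

Keeping the facts and criteria in data structures (rather than hand-written prose) also makes them auditable, which ties back to the accountability point in the Responsible AI question above.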
C is not best: technical jargon can confuse or bias output if it's not aligned to the task; clarity beats jargon. D is not best: a single concise requirement is usually under-specified and often increases variability.