Welcome to Pass4Success


Microsoft Exam AI-900 Topic 1 Question 87 Discussion

Actual exam question for Microsoft's AI-900 exam
Question #: 87
Topic #: 1
[All AI-900 Questions]

You have an AI-based loan approval system.

During testing, you discover that the system has a gender bias.

Which responsible AI principle does this violate?

Suggested Answer: C
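The bias described in the question is typically detected the way the scenario implies: by testing the model's decisions across groups. A minimal sketch of one common fairness check, comparing approval rates by group (demographic parity), is below; the group names and decision data are purely illustrative assumptions, not part of the exam question.

```python
# Hypothetical sketch: testing a loan-approval model for gender bias
# by comparing approval rates across groups (demographic parity).
# The groups and 0/1 decisions below are illustrative, not real data.

def approval_rate(decisions):
    """Fraction of applicants approved; decisions are 0 (deny) / 1 (approve)."""
    return sum(decisions) / len(decisions)

# Model decisions on a held-out test set, split by applicant gender
approvals_by_group = {
    "female": [1, 0, 0, 1, 0, 0, 0, 1],  # 3 of 8 approved
    "male":   [1, 1, 0, 1, 1, 1, 0, 1],  # 6 of 8 approved
}

rates = {group: approval_rate(d) for group, d in approvals_by_group.items()}
disparity = max(rates.values()) - min(rates.values())

print(rates)      # {'female': 0.375, 'male': 0.75}
print(disparity)  # 0.375 -- a large gap like this signals a fairness violation
```

A disparity near zero suggests the model treats groups similarly on this metric; a large gap, as in the sketch, is the kind of evidence that would flag a violation of the fairness principle.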

Contribute your Thoughts:

Van
29 days ago
Fairness, definitely. This is like the AI equivalent of a bank refusing to give loans to women. Unacceptable!
upvoted 0 times
Callie
19 days ago
A) accountability
upvoted 0 times
Rozella
1 month ago
But shouldn't reliability and safety also be considered since biased decisions can lead to unsafe outcomes?
upvoted 0 times
Rebbeca
1 month ago
Wow, a gender-biased loan approval system? Sounds like a real 'bro-grammer' kind of issue. Where's the 'lady-coder' when you need her, am I right?
upvoted 0 times
Tequila
26 days ago
A) accountability
upvoted 0 times
Serina
1 month ago
I'm not surprised by the gender bias. This is why we need more diversity in the teams developing these AI systems. Otherwise, they'll just reflect the biases of the people who create them.
upvoted 0 times
Leatha
1 day ago
A) accountability
upvoted 0 times
Armanda
1 month ago
I agree with Mozell, the system having a gender bias goes against the principle of fairness.
upvoted 0 times
Jerrod
1 month ago
Clearly, the correct answer is C) fairness. An AI system with gender bias violates the fundamental principle of fairness in responsible AI.
upvoted 0 times
Mozell
2 months ago
I think the responsible AI principle violated here is fairness.
upvoted 0 times
