Welcome to Pass4Success


Amazon AIF-C01 Exam - Topic 1 Question 10 Discussion

Actual exam question for Amazon's AIF-C01 exam
Question #: 10
Topic #: 1
[All AIF-C01 Questions]

A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has discovered that the model disproportionately flags people who are members of a specific ethnic group.

Which type of bias is affecting the model output?

Suggested Answer: B

Contribute your Thoughts:

Billy
3 months ago
I’m surprised it’s happening at all in 2023!
upvoted 0 times
...
Eden
3 months ago
This is a classic case of measurement bias.
upvoted 0 times
...
Arthur
3 months ago
Wait, are we sure it’s not observer bias?
upvoted 0 times
...
Alfred
4 months ago
Totally agree, it’s definitely unfair!
upvoted 0 times
...
Marica
4 months ago
Sounds like sampling bias to me.
upvoted 0 times
...
Malcom
4 months ago
I was leaning towards confirmation bias, but that seems more about how people interpret data rather than how the model flags individuals.
upvoted 0 times
...
Graciela
4 months ago
This reminds me of a practice question where we discussed observer bias. But in this case, it feels more like the model is biased in its training data.
upvoted 0 times
...
Thaddeus
4 months ago
I'm not entirely sure, but I remember something about measurement bias affecting how data is collected. Could that be it?
upvoted 0 times
...
Lisandra
5 months ago
I think this might be sampling bias since the model seems to be trained on data that doesn't represent the whole population fairly.
upvoted 0 times
...
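Lisandra's point, that the model was likely trained on data that does not represent the whole population fairly, suggests a concrete mitigation: rebalance the training set before fitting the model. A minimal sketch (the `stratified_sample` helper and the group labels are hypothetical, not part of the exam material) of drawing an equal number of examples per group:

```python
import random
from collections import Counter

def stratified_sample(records, key, per_group, seed=0):
    """Draw at most `per_group` examples from each group, so that no
    single group dominates the training set. This is a simple mitigation
    for sampling bias caused by an unrepresentative dataset."""
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(key(rec), []).append(rec)
    sample = []
    for members in groups.values():
        sample.extend(rng.sample(members, min(per_group, len(members))))
    return sample

# Hypothetical footage records: group "A" is heavily over-represented.
raw = [("A", i) for i in range(10)] + [("B", i) for i in range(3)]
balanced = stratified_sample(raw, key=lambda rec: rec[0], per_group=3)
print(Counter(group for group, _ in balanced))  # 3 records per group
```

In practice one would stratify to match the deployment population's actual proportions rather than force equal counts, but the idea is the same: the training sample, not the model architecture, is where sampling bias enters.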
Brittney
5 months ago
I'm leaning towards observer bias. The way the model evaluates the security camera footage seems to introduce a bias based on the ethnic backgrounds of the people in the footage.
upvoted 0 times
...
Elza
5 months ago
This has to be either sampling bias or measurement bias. The model is clearly not accurately representing the real-world data, so it's an issue with the data or how the model was trained.
upvoted 0 times
...
Jamal
5 months ago
Okay, I think this is a case of sampling bias. The training data for the model likely didn't represent the full diversity of the population, leading to this disproportionate flagging of a specific ethnic group.
upvoted 0 times
...
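Jamal's reasoning, that unrepresentative training data leads to disproportionate flagging, is something an auditor can check empirically before arguing about the cause. A minimal sketch (the `flag_rates` helper and the audit log are hypothetical, not from the exam material) that computes the per-group flag rate from the model's output:

```python
from collections import defaultdict

def flag_rates(records):
    """Compute the fraction of flagged events per demographic group.

    records: iterable of (group, flagged) pairs. A large gap between
    groups' flag rates is the symptom described in the question, and a
    first clue that the training sample under-represented some groups.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Hypothetical audit log of model decisions: (group, was_flagged)
log = [("A", True), ("A", False), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", True), ("B", False)]
print(flag_rates(log))  # {'A': 0.25, 'B': 0.75}
```

A disparity like the 0.25 vs. 0.75 above shows *that* the output is biased; distinguishing sampling bias from measurement or observer bias then requires inspecting how the training data was collected and labeled.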
Eloisa
5 months ago
This seems like a clear case of algorithmic bias. The model is disproportionately flagging a specific ethnic group, which suggests the training data or model design has some inherent bias.
upvoted 0 times
...
Margo
5 months ago
Hmm, I'm not sure which type of bias this is exactly. I'll need to think through the different types of bias and how they might apply here. Let me review my notes on that.
upvoted 0 times
...
Truman
1 year ago
This is why we can't just blindly trust AI models without thoroughly auditing them. Confirmation bias could also be a factor if the developers weren't actively looking for these kinds of issues.
upvoted 0 times
Sanjuana
1 year ago
C: It's important to constantly monitor and audit AI models to prevent such biases from causing harm.
upvoted 0 times
...
Harley
1 year ago
B: Yeah, the company should have ensured a more diverse dataset to avoid this issue.
upvoted 0 times
...
Socorro
1 year ago
A: The bias affecting the model output is sampling bias.
upvoted 0 times
...
Tiera
1 year ago
I'm going to have to go with option A, measurement bias. The way the data is being collected and processed by the model is clearly flawed, leading to these disproportionate results.
upvoted 0 times
...
Michel
1 year ago
Agreed, sampling bias seems like the most likely culprit here. The model is making inferences based on a skewed dataset, which is never a good idea, especially when it comes to sensitive topics like this.
upvoted 0 times
Sharika
1 year ago
A: Absolutely, bias in AI can have serious consequences, especially when it comes to something as important as security.
upvoted 0 times
...
Leila
1 year ago
B: Yeah, I agree. It's important to have a diverse and representative dataset to avoid these kinds of issues.
upvoted 0 times
...
Dannie
1 year ago
A: I think it's definitely sampling bias. The model is learning from a dataset that doesn't accurately represent the population.
upvoted 0 times
...
Cruz
1 year ago
I believe it could also be Observer bias, where the people evaluating the footage have preconceived notions about the specific ethnic group.
upvoted 0 times
...
Cammy
1 year ago
I agree with Theodora. The model is probably trained on a dataset that is not representative of the entire population.
upvoted 0 times
...
Lera
1 year ago
Hmm, this sounds like a classic case of algorithmic bias. I bet it's option B, sampling bias. The model was probably trained on a dataset that didn't accurately represent the entire population.
upvoted 0 times
Luann
1 year ago
Exactly, that's why it's important to have diverse and representative datasets for training AI models.
upvoted 0 times
...
Ilona
1 year ago
So, the model is just reflecting the biases present in the data it was trained on.
upvoted 0 times
...
Marcelle
1 year ago
Yeah, that makes sense. The dataset used to train the model must not have been diverse enough.
upvoted 0 times
...
Jacquline
1 year ago
I think you're right, it's probably sampling bias.
upvoted 0 times
...
Theodora
1 year ago
I think the bias affecting the model output is Sampling bias.
upvoted 0 times
...
