

Amazon Exam AIF-C01 Topic 1 Question 10 Discussion

Actual exam question for Amazon's AIF-C01 exam
Question #: 10
Topic #: 1

A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has discovered that the model disproportionately flags people who are members of a specific ethnic group.

Which type of bias is affecting the model output?

Suggested Answer: B
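
The thread below converges on option B, sampling bias: the model learned from footage in which one group is over-represented. A minimal sketch of the representation check the commenters describe, with entirely hypothetical group names and proportions:

```python
# Minimal sketch of a sampling-bias check, assuming each training clip
# can be labeled with the demographic group it depicts. All group names
# and proportions here are hypothetical, for illustration only.

# Assumed share of each group in the population the cameras observe.
population_share = {"group_a": 0.30, "group_b": 0.70}

# Assumed share of each group in the footage the model was trained on.
training_share = {"group_a": 0.75, "group_b": 0.25}

for group, pop in population_share.items():
    train = training_share[group]
    # A ratio far from 1.0 means the training set over- or
    # under-represents this group relative to the population.
    print(f"{group}: population {pop:.0%}, training {train:.0%}, "
          f"representation ratio {train / pop:.2f}")
```

A ratio well above or below 1.0 for any group is a sign the training sample does not reflect the population the cameras actually observe.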

Contribute your Thoughts:

Truman
2 months ago
This is why we can't just blindly trust AI models without thoroughly auditing them (a simple per-group audit is sketched after this thread). Confirmation bias could also be a factor if the developers weren't actively looking for these kinds of issues.
upvoted 0 times
Sanjuana
1 month ago
C: It's important to constantly monitor and audit AI models to prevent such biases from causing harm.
upvoted 0 times
Harley
1 month ago
B: Yeah, the company should have ensured a more diverse dataset to avoid this issue.
upvoted 0 times
Socorro
1 month ago
A: The bias affecting the model output is sampling bias.
upvoted 0 times
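Picking up on Truman's and Sanjuana's point about auditing, a first-pass check is simply to compare the model's flag rate across groups. A minimal sketch, again with hypothetical group names and counts; a real audit would use the model's logged predictions:

```python
# Hedged sketch of a per-group audit: compare the model's theft-flag
# rate across demographic groups. Counts below are made up.
flag_counts = {
    "group_a": {"flagged": 90, "total": 300},
    "group_b": {"flagged": 30, "total": 700},
}

rates = {g: c["flagged"] / c["total"] for g, c in flag_counts.items()}
reference = max(rates.values())

for group, rate in rates.items():
    # Disparate-impact style ratio against the most-flagged group;
    # values well below 1.0 indicate a gap worth investigating.
    print(f"{group}: flag rate {rate:.1%}, "
          f"ratio vs. highest {rate / reference:.2f}")
```
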
Tiera
2 months ago
I'm going to have to go with option A, measurement bias. The way the data is being collected and processed by the model is clearly flawed, leading to these disproportionate results.
upvoted 0 times
Michel
2 months ago
Agreed, sampling bias seems like the most likely culprit here. The model is making inferences based on a skewed dataset, which is never a good idea, especially when it comes to sensitive topics like this.
upvoted 0 times
Sharika
1 month ago
A: Absolutely, bias in AI can have serious consequences, especially when it comes to something as important as security.
upvoted 0 times
Leila
2 months ago
B: Yeah, I agree. It's important to have a diverse and representative dataset to avoid these kinds of issues.
upvoted 0 times
Dannie
2 months ago
A: I think it's definitely sampling bias. The model is learning from a dataset that doesn't accurately represent the population.
upvoted 0 times
Cruz
2 months ago
I believe it could also be Observer bias, where the people evaluating the footage have preconceived notions about the specific ethnic group.
upvoted 0 times
Cammy
3 months ago
I agree with Theodora. The model is probably trained on a dataset that is not representative of the entire population.
upvoted 0 times
Lera
3 months ago
Hmm, this sounds like a classic case of algorithmic bias. I bet it's option B, sampling bias. The model was probably trained on a dataset that didn't accurately represent the entire population.
upvoted 0 times
Luann
2 months ago
Exactly, that's why it's important to have diverse and representative datasets for training AI models.
upvoted 0 times
Ilona
2 months ago
So, the model is just reflecting the biases present in the data it was trained on.
upvoted 0 times
Marcelle
2 months ago
Yeah, that makes sense. The dataset used to train the model must not have been diverse enough.
upvoted 0 times
Jacquline
2 months ago
I think you're right, it's probably sampling bias.
upvoted 0 times
Theodora
3 months ago
I think the bias affecting the model output is Sampling bias.
upvoted 0 times
