
iSQI CT-AI Exam - Topic 2 Question 13 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 13
Topic #: 2

An image classification system is being trained for classifying faces of humans. The distribution of the data is 70% ethnicity A and 30% for ethnicities B, C and D. Based ONLY on the above information, which of the following options BEST describes the situation of this image classification system?

SELECT ONE OPTION

Suggested Answer: A

Based only on the information given, this scenario describes sample bias. Here's why:

Sample bias: Sample bias occurs when the training data is not representative of the population the system is meant to serve. With 70% of the face images drawn from ethnicity A and only 30% shared among ethnicities B, C, and D, the dataset over-represents one group. A model trained on it is likely to learn the features of ethnicity A well and perform noticeably worse on the under-represented ethnicities.
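As a rough illustration, the sketch below shows how this kind of skew can be surfaced before training by checking the label distribution. It is a minimal sketch: the label names and the even 10% split among B, C, and D are assumptions made for the example, not details given in the question.

```python
from collections import Counter

# Hypothetical per-image ethnicity labels. The even 10% split among
# B, C, and D is an assumption for illustration; the question only
# states that the three groups share 30% combined.
labels = ["A"] * 700 + ["B"] * 100 + ["C"] * 100 + ["D"] * 100

counts = Counter(labels)
total = sum(counts.values())

# Report each group's share and flag any group below an even share.
for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <-- under-represented" if share < 1 / len(counts) else ""
    print(f"ethnicity {group}: {n:4d} images ({share:.0%}){flag}")

# A simple imbalance ratio: largest group size over smallest.
print(f"imbalance ratio: {max(counts.values()) / min(counts.values()):.1f}")
```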

Why Not Other Options:

Expert system bias: This stems from the assumptions and rules encoded by the human experts who build a system. The question describes only the composition of the training data, not any expert-defined rules or decision logic.

Hyperparameter bias: Hyperparameter choices (learning rate, network depth, and so on) shape how a model trains, but nothing in the question concerns the training configuration.

Algorithmic bias: This arises from how the learning algorithm itself processes data. No information about the algorithm is given; the only stated problem is the skewed composition of the training data.
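The practical symptom of sample bias is uneven per-group performance, so a common post-training check is per-group accuracy on a held-out set. The sketch below uses made-up placeholder records in place of real model predictions:

```python
from collections import defaultdict

# Made-up (group, true_label, predicted_label) records standing in
# for real model output on a held-out test set.
records = [
    ("A", "face", "face"), ("A", "face", "face"), ("A", "face", "face"),
    ("B", "face", "face"), ("B", "face", "not_face"),
    ("C", "face", "not_face"),
    ("D", "face", "face"), ("D", "face", "not_face"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    hits[group] += int(truth == pred)

# Per-group accuracy makes the effect of sample bias visible:
# the over-represented group typically scores higher.
for group in sorted(totals):
    print(f"ethnicity {group}: accuracy = {hits[group] / totals[group]:.0%}")
```

If the over-represented group scores markedly higher, the usual remedies are collecting more data for the minority groups, resampling, or reweighting during training.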


Contribute your Thoughts:

Kayleigh
3 months ago
70% for one ethnicity is a clear sign of bias.
upvoted 0 times
...
Hector
3 months ago
Not sure if it's just sample bias, could be more complex.
upvoted 0 times
...
Amie
3 months ago
Wait, isn't it also a bit of algorithmic bias?
upvoted 0 times
...
Cyril
4 months ago
I agree, the distribution is skewed.
upvoted 0 times
...
Ardella
4 months ago
This is definitely sample bias.
upvoted 0 times
...
Kathrine
4 months ago
I thought algorithmic bias was more about how the algorithm processes data, not just the data distribution itself. I'm confused!
upvoted 0 times
...
Donte
4 months ago
This question feels familiar; I think we had a similar practice question about data distribution affecting model performance.
upvoted 0 times
...
Rashida
4 months ago
I'm not entirely sure, but I think expert system bias relates more to the decision-making process rather than data distribution.
upvoted 0 times
...
Lashawn
5 months ago
I remember we discussed sample bias in class, especially when the training data is skewed like this. It seems like the right choice.
upvoted 0 times
...
Ora
5 months ago
Okay, I think I've got it. Since the data is heavily biased towards one ethnicity, the model is likely to be biased towards that ethnicity as well. This sounds like a clear case of sample bias to me.
upvoted 0 times
...
Nelida
5 months ago
Hmm, I'm not sure. The question mentions the distribution of the data, but doesn't give any other details about the model or the training process. I'll have to weigh the options carefully before selecting an answer.
upvoted 0 times
...
Renay
5 months ago
I'm a bit confused here. Is this really sample bias, or could it be something else like algorithmic bias? I'll need to think this through carefully.
upvoted 0 times
...
Yuonne
5 months ago
This looks like a classic case of sample bias. The data is heavily skewed towards one ethnicity, which could lead to the model performing poorly on the other ethnicities.
upvoted 0 times
...
Merrilee
5 months ago
Okay, let's see. Policy label and policy signature seem like the two attributes we can fetch using an API. I'm pretty confident about those two choices.
upvoted 0 times
...
Janae
9 months ago
I'm going with option B. This is a textbook case of sample bias. The model is going to be heavily biased towards ethnicity A, and could even completely fail to recognize the other ethnicities.
upvoted 0 times
...
Stephen
9 months ago
Haha, I bet the model will be great at recognizing Smurf faces, but struggle with the rest of us. Classic algorithmic bias!
upvoted 0 times
Jamal
8 months ago
D) This is an example of algorithmic bias.
upvoted 0 times
...
Lon
8 months ago
C) This is an example of hyperparameter bias.
upvoted 0 times
...
Augustine
9 months ago
B) This is an example of sample bias.
upvoted 0 times
...
Mila
9 months ago
A) This is an example of expert system bias.
upvoted 0 times
...
...
Daniel
9 months ago
I disagree, this looks more like an issue of expert system bias. The developers of the system have likely built in certain assumptions about the prevalence of different ethnicities, leading to this imbalanced dataset.
upvoted 0 times
Rosita
8 months ago
C) I agree, it seems like an example of hyperparameter bias.
upvoted 0 times
...
Darrin
8 months ago
B) I think it's more of a sample bias issue.
upvoted 0 times
...
Ilda
8 months ago
A) This is an example of expert system bias.
upvoted 0 times
...
...
Gayla
10 months ago
I'm not sure, but I think it could also be D) This is an example of algorithmic bias. The system may not be trained properly to recognize faces from all ethnicities.
upvoted 0 times
...
Dona
11 months ago
I agree with Thaddeus. The data distribution is not representative of the actual population, so it's definitely sample bias.
upvoted 0 times
...
Gaston
11 months ago
This is definitely an example of sample bias. The training data is heavily skewed towards ethnicity A, which could lead to the model performing poorly on the less represented ethnicities.
upvoted 0 times
Nichelle
9 months ago
Definitely sample bias. The training data needs to be more diverse for better results.
upvoted 0 times
...
Leatha
10 months ago
I agree, sample bias is a big issue here. The model needs more balanced data.
upvoted 0 times
...
Carmen
10 months ago
Yeah, sample bias makes sense. The model might not generalize well to other groups.
upvoted 0 times
...
Erick
10 months ago
I think it's sample bias too. The data is not representative of all ethnicities.
upvoted 0 times
...
...
Thaddeus
11 months ago
I think the answer is B) This is an example of sample bias.
upvoted 0 times
...
