
iSQI Exam CT-AI Topic 4 Question 9 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 9
Topic #: 4
[All CT-AI Questions]

A beer company is trying to understand how much recognition its logo has in the market. It plans to do this by monitoring images on various social media platforms using a pre-trained neural network for logo detection. This particular model was trained to detect words and to match colors in social media images. The company's logo is a large word across the middle with a bold blue and magenta border.

Which associated risk is most likely to occur when using this pre-trained model?
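Whatever option one picks, the standard mitigation the commenters hint at ("I hope they've tested it with their own logo!") is to evaluate the pre-trained model on a labelled sample of the company's own images before deploying it. A minimal sketch in Python, where `detect_logo` is a hypothetical stand-in for the real model's inference call (here it naively flags any post whose text mentions the brand word, mimicking a word-matching model):

```python
# Acceptance check for a pre-trained logo detector on a labelled sample.
# `detect_logo` is a hypothetical stand-in for the real network's inference;
# "brewco" is an invented brand word used purely for illustration.

def detect_logo(image_meta: dict) -> bool:
    """Pretend inference: True if the brand word appears in the post text."""
    return "brewco" in image_meta.get("text", "").lower()

def evaluate(model, labelled_sample):
    """Compute precision and recall of `model` over (image_meta, truth) pairs."""
    tp = fp = fn = 0
    for image_meta, truth in labelled_sample:
        pred = model(image_meta)
        if pred and truth:
            tp += 1          # correctly found the logo
        elif pred and not truth:
            fp += 1          # false alarm
        elif not pred and truth:
            fn += 1          # missed the logo
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hand-labelled sample drawn from the company's own monitoring feed.
sample = [
    ({"text": "Cold BrewCo on tap tonight"}, True),       # logo + word: detected
    ({"text": "best brewco merch ever"}, True),           # logo + word: detected
    ({"text": "sunset at the beach"}, True),              # logo, no word: missed
    ({"text": "brewco spelled on a chalkboard"}, False),  # word, no logo: false alarm
    ({"text": "random street photo"}, False),             # neither: correctly ignored
]

precision, recall = evaluate(detect_logo, sample)
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.67 recall=0.67
```

A gap like the missed beach photo (logo visible but no text) is exactly the kind of defect a company cannot see without running this check, because the model's original training data and objectives are unknown to them.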

Suggested Answer: C


Contribute your Thoughts:

Rikki
24 days ago
I heard the model has a bit of a 'beer belly' - not sure it's the best fit for this job!
upvoted 0 times
Sylvie
2 days ago
B) Insufficient function; the model was not trained to check for colors or words
upvoted 0 times
...
Cherry
3 days ago
A) There is no risk, as the model has already been trained
upvoted 0 times
...
...
Callie
28 days ago
Maybe they should train the model on some 'beer goggles' to really understand their logo's recognition!
upvoted 0 times
Phyliss
3 days ago
B) Insufficient function; the model was not trained to check for colors or words
upvoted 0 times
...
Rasheeda
7 days ago
C) Improper data preparation
upvoted 0 times
...
Lashawna
19 days ago
A) There is no risk, as the model has already been trained
upvoted 0 times
...
...
Krissy
29 days ago
B) Insufficient function; the model was not trained to check for colors or words? That's a solid choice. This logo sounds pretty complex, so I doubt a generic model would do the trick.
upvoted 0 times
...
Raylene
1 month ago
A) There is no risk, as the model has already been trained? Really? I think I'll go with the 'inherited bias' option. You can never be too careful with pre-trained models!
upvoted 0 times
Isadora
3 days ago
But what if the model was trained properly and there are no biases?
upvoted 0 times
...
Santos
18 days ago
I agree, inherited bias is a real concern with pre-trained models.
upvoted 0 times
...
...
Lashawna
1 month ago
D) Inherited bias: the model could have inherited unknown defects. That's a good point - who knows what kind of biases the model has picked up during training?
upvoted 0 times
...
Cyndy
2 months ago
Hmm, I'm not sure if the model would work well for this task. Seems like it was trained to look for words and colors, but the logo has a pretty specific design. I hope they've tested it with their own logo!
upvoted 0 times
Cathern
27 days ago
B) Insufficient function; the model was not trained to check for colors or words
upvoted 0 times
...
Krystal
1 month ago
A) There is no risk, as the model has already been trained
upvoted 0 times
...
...
Dottie
2 months ago
But what about improper data preparation? That could also lead to issues.
upvoted 0 times
...
Gary
2 months ago
I agree with France. The model could have unknown defects.
upvoted 0 times
...
France
2 months ago
I think the biggest risk is inherited bias.
upvoted 0 times
...
