
iSQI CT-AI Exam - Topic 4 Question 9 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 9
Topic #: 4

A beer company is trying to understand how much recognition its logo has in the market. It plans to do this by monitoring images on various social media platforms using a pre-trained neural network for logo detection. This particular model was trained to look for words and to match colors in social media images. The company logo has a big word across the middle with a bold blue and magenta border.

Which associated risk is most likely to occur when using this pre-trained model?

A) There is no risk, as the model has already been trained
B) Insufficient function; the model was not trained to check for colors or words
C) Improper data preparation
D) Inherited bias: the model could have inherited unknown defects

Suggested Answer: C

Using a pre-trained model always carries risk, and in this scenario the most likely risk is improper data preparation.

There is no risk, as the model has already been trained (A): This is incorrect. A trained model can still fail when reused in a new context; reusing a pre-trained model introduces its own risks.

Insufficient function; the model was not trained to check for colors or words (B): This contradicts the scenario. The model was trained to look for words and to match colors, which is exactly what the company logo contains (a big word with a bold blue and magenta border).

Improper data preparation (C): This is the most likely risk. A pre-trained model expects its inputs to be prepared in the same way as its training data. If the company applies different data preparation (for example, different image scaling or color handling) to the social media images it monitors, the model's logo detection can degrade significantly.

Inherited bias: the model could have inherited unknown defects (D): Inherited defects and biases are a genuine risk of any pre-trained model, but nothing in the scenario points to them specifically, whereas the data preparation pipeline is under the company's direct control and is a common source of failure when reusing models.

Hence, the suggested answer is C, improper data preparation.


The ISTQB CT-AI Syllabus, in its coverage of pre-trained models and transfer learning, discusses the risks associated with reusing a pre-trained model, including insufficient function, improper data preparation, and inherited defects or biases.
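One practical way to see the risk in reusing a pre-trained image model, such as the one in this question, is that inference inputs must go through the same data preparation as the training data. Below is a minimal Python sketch; the normalization constants and function name are illustrative assumptions, not values from any real model or library.

```python
# Illustrative sketch: a pre-trained model is only as reliable as the match
# between its training-time preprocessing and the preprocessing applied at
# inference time. All constants here are assumptions for illustration, not
# values from any real model.

TRAIN_MEAN = (0.485, 0.456, 0.406)  # assumed per-channel mean used in training
TRAIN_STD = (0.229, 0.224, 0.225)   # assumed per-channel std used in training

def prepare_pixel(rgb):
    """Scale one 8-bit RGB pixel to the distribution the model was trained on."""
    return tuple(
        (channel / 255.0 - mean) / std
        for channel, mean, std in zip(rgb, TRAIN_MEAN, TRAIN_STD)
    )

# Feeding raw 0-255 pixel values to a model trained on normalized inputs is
# exactly the "improper data preparation" risk: the model still runs, but on
# inputs drawn from a distribution it never saw during training.
raw_pixel = (128, 64, 255)  # e.g. a magenta-ish pixel from a social media image
prepared_pixel = prepare_pixel(raw_pixel)
```

The same point applies to image resizing, cropping, and color-space conversion: every step of the original preparation pipeline has to be replicated at inference time.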

Contribute your Thoughts:

Lenny
3 months ago
No risk? That's a bold claim!
upvoted 0 times
...
Reuben
3 months ago
I disagree, I think the real issue is improper data prep.
upvoted 0 times
...
Matthew
3 months ago
Wait, the model was trained on words? That seems off.
upvoted 0 times
...
Irma
4 months ago
I think option D makes the most sense here.
upvoted 0 times
...
Earnestine
4 months ago
Sounds like a risky move, especially with inherited bias!
upvoted 0 times
...
Emily
4 months ago
I vaguely recall a practice question about data preparation issues, so C could also be a possibility, but I’m not confident.
upvoted 0 times
...
Dulce
4 months ago
I feel like the biggest risk is probably D, inherited bias. We talked about that in class and how it can affect outcomes.
upvoted 0 times
...
Eladia
4 months ago
I’m not entirely sure, but I think B could be a risk too since the model might not be specifically tailored for logo detection.
upvoted 0 times
...
Rolande
5 months ago
I remember discussing how pre-trained models can still have biases from their training data, so D might be a concern.
upvoted 0 times
...
Thad
5 months ago
I'm a bit confused here. The question is asking about the most likely risk, but the options don't seem to clearly point to one. I'll need to really analyze the details to figure out the best answer.
upvoted 0 times
...
Adrianna
5 months ago
I feel pretty confident about this one. The model has already been trained, so there shouldn't be any major risks, right? As long as the data is prepared properly, it should work well.
upvoted 0 times
...
Ciara
5 months ago
Hmm, the model was trained to look for words and colors, so I'm guessing the issue might be that it's not sufficient for just detecting the logo itself. I'll have to consider whether the model is really up to the task.
upvoted 0 times
...
Matthew
5 months ago
This seems like a tricky one. I'll need to think carefully about the risks of using a pre-trained model, especially since it's looking for specific elements like words and colors.
upvoted 0 times
...
Rikki
10 months ago
I heard the model has a bit of a 'beer belly' - not sure it's the best fit for this job!
upvoted 0 times
Dana
8 months ago
C) Improper data preparation
upvoted 0 times
...
Sylvie
9 months ago
B) Insufficient function; the model was not trained to check for colors or words
upvoted 0 times
...
Cherry
9 months ago
A) There is no risk, as the model has already been trained
upvoted 0 times
...
...
Callie
10 months ago
Maybe they should train the model on some 'beer goggles' to really understand their logo's recognition!
upvoted 0 times
Phyliss
9 months ago
B) Insufficient function; the model was not trained to check for colors or words
upvoted 0 times
...
Rasheeda
9 months ago
C) Improper data preparation
upvoted 0 times
...
Lashawna
9 months ago
A) There is no risk, as the model has already been trained
upvoted 0 times
...
...
Krissy
10 months ago
B) Insufficient function; the model was not trained to check for colors or words? That's a solid choice. This logo sounds pretty complex, so I doubt a generic model would do the trick.
upvoted 0 times
...
Raylene
10 months ago
A) There is no risk, as the model has already been trained? Really? I think I'll go with the 'inherited bias' option. You can never be too careful with pre-trained models!
upvoted 0 times
Alton
9 months ago
Even if it was trained well, there could still be some unknown defects that could cause issues.
upvoted 0 times
...
Isadora
9 months ago
But what if the model was trained properly and there are no biases?
upvoted 0 times
...
Santos
9 months ago
I agree, inherited bias is a real concern with pre-trained models.
upvoted 0 times
...
...
Lashawna
10 months ago
D) Inherited bias: the model could have inherited unknown defects. That's a good point - who knows what kind of biases the model has picked up during training?
upvoted 0 times
...
Cyndy
10 months ago
Hmm, I'm not sure if the model would work well for this task. Seems like it was trained to look for words and colors, but the logo has a pretty specific design. I hope they've tested it with their own logo!
upvoted 0 times
Cathern
10 months ago
B) Insufficient function; the model was not trained to check for colors or words
upvoted 0 times
...
Krystal
10 months ago
A) There is no risk, as the model has already been trained
upvoted 0 times
...
...
Dottie
11 months ago
But what about improper data preparation? That could also lead to issues.
upvoted 0 times
...
Gary
11 months ago
I agree with France. The model could have unknown defects.
upvoted 0 times
...
France
11 months ago
I think the biggest risk is inherited bias.
upvoted 0 times
...
