
iSQI CT-AI Exam - Topic 9 Question 22 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 22
Topic #: 9

A wildlife conservation group would like to use a neural network to classify images of different animals. The algorithm is going to be used on a social media platform to automatically pick out pictures of the chosen animal of the month. This month's animal is set to be a wolf. The test team has already observed that the algorithm could classify a picture of a dog as being a wolf because of the similar characteristics between dogs and wolves. To handle such instances, the team is planning to train the model with additional images of wolves and dogs so that the model is able to better differentiate between the two.

Which test method should you use to verify that the model has improved after the additional training?

A. Metamorphic testing
B. Adversarial testing
C. Pairwise testing
D. Back-to-back testing

Suggested Answer: D

Back-to-back testing is used to compare two different versions of an ML model, which is precisely what is needed in this scenario.

The model initially misclassified dogs as wolves due to feature similarities.

The test team retrains the model with additional images of dogs and wolves.

The best way to verify whether the additional training improved classification accuracy is to run the original model and the retrained model on the same test dataset and compare their outputs.
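The comparison described above can be sketched in a few lines. This is a minimal illustration, not the conservation group's actual system: `old_model` and `new_model` are hypothetical stand-ins for the original and retrained classifiers, and the test set is toy data.

```python
def old_model(image):
    # Hypothetical original model: misclassifies dogs as wolves.
    return "wolf" if image["species"] in ("wolf", "dog") else "other"

def new_model(image):
    # Hypothetical retrained model: distinguishes dogs from wolves.
    return "wolf" if image["species"] == "wolf" else "other"

def back_to_back(model_a, model_b, test_set):
    """Run both model versions on the SAME test set and return their accuracies."""
    def accuracy(model):
        return sum(model(x) == x["label"] for x in test_set) / len(test_set)
    return accuracy(model_a), accuracy(model_b)

# The same labelled test data is fed to both versions.
test_set = [
    {"species": "wolf", "label": "wolf"},
    {"species": "dog",  "label": "other"},
    {"species": "cat",  "label": "other"},
    {"species": "dog",  "label": "other"},
]

acc_old, acc_new = back_to_back(old_model, new_model, test_set)
print(acc_old, acc_new)  # the retrained version should score at least as well
```

The essential point is that both versions see identical inputs, so any difference in the scores is attributable to the retraining rather than to a change in test data.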

Why Other Options Are Incorrect:

A (Metamorphic Testing): Metamorphic testing generates follow-up test cases from existing ones via metamorphic relations; it checks a single model's consistency rather than comparing two model versions.

B (Adversarial Testing): Adversarial testing is used to check how robust a model is against maliciously perturbed inputs, not to verify training effectiveness.

C (Pairwise Testing): Pairwise testing is a combinatorial technique for reducing the number of test cases by focusing on key variable interactions, not for validating model improvements.
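To make the contrast with option A concrete, here is a minimal metamorphic-testing sketch. The relation asserted is that horizontally flipping an image must not change the predicted class; `classify` is a hypothetical toy stand-in for a real model, and the "image" is a small pixel grid.

```python
def classify(pixels):
    # Toy "model" for illustration: classifies by total brightness.
    return "wolf" if sum(sum(row) for row in pixels) > 10 else "other"

def hflip(pixels):
    # Metamorphic transformation: horizontal flip.
    return [list(reversed(row)) for row in pixels]

source = [[1, 5, 9], [0, 2, 0]]                  # source test case
follow_up = hflip(source)                        # follow-up case from the relation
assert classify(source) == classify(follow_up)   # the relation must hold
```

Note that this checks one model against itself under a transformation; it says nothing about whether a retrained version outperforms the original, which is why back-to-back testing fits this scenario better.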

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 9.3: Back-to-Back Testing)

'Back-to-back testing is used when an updated ML model needs to be compared against a previous version to confirm that it performs better or as expected'.

'The results of the newly trained model are compared with those of the prior version to ensure that changes did not negatively impact performance'.

Conclusion:

To verify that the model's performance improved after retraining, back-to-back testing is the most appropriate method as it compares both model versions. Hence, the correct answer is D.


Contribute your Thoughts:

Cecil
2 months ago
Metamorphic testing might not be the best fit for this situation.
Princess
2 months ago
I think back-to-back testing is the way to go here.
Markus
2 months ago
Sounds like a solid plan to train with more images!
Sunshine
3 months ago
Wait, can a dog really be mistaken for a wolf? That's surprising!
Alpha
3 months ago
I disagree, adversarial testing could help catch those mix-ups too!
Dana
3 months ago
Metamorphic testing might be useful, but I feel like it’s more for cases where the output is less predictable. Not sure if it fits here.
Mayra
3 months ago
Pairwise testing sounds interesting, but I wonder if it’s really necessary for just distinguishing between wolves and dogs.
Xochitl
4 months ago
I'm not entirely sure, but I think adversarial testing could help ensure the model isn't misclassifying images, especially with the dog-wolf confusion.
Elsa
4 months ago
I remember we discussed back-to-back testing in class, and it seems like a solid way to compare the old and new models directly.
Cordelia
4 months ago
I think the best approach here is definitely back-to-back testing. It's a straightforward way to measure the impact of the additional training and ensure the model is performing better at distinguishing wolves from dogs. The other options seem a bit overkill for this particular scenario.
Sharen
4 months ago
Ooh, this is a tricky one. I'm kind of leaning towards metamorphic testing since the domain seems a bit unclear and we might need to explore some more complex relationships between the images. But I could also see the value in adversarial testing to really stress test the model.
Regenia
4 months ago
I feel pretty confident that the right answer here is D - back-to-back testing. That way we can see a clear before and after comparison and verify that the additional training has actually improved the model's performance.
Lashawnda
5 months ago
Hmm, I'm a bit confused on this one. The question mentions that the team is trying to handle instances where the model classifies a dog as a wolf, so I'm wondering if adversarial testing might be a better option to really put the model through its paces and ensure it can differentiate between the two.
Penney
5 months ago
This seems like a tricky one. I'm not totally sure what the best approach would be, but I'm thinking back-to-back testing might be a good way to go since we can directly compare the performance of the model before and after the additional training.
Jodi
10 months ago
Option B, adversarial testing, might be overkill here. Unless they suspect the training data is somehow corrupted, I think back-to-back testing is the way to go.
Kris
10 months ago
Yeah, I think it's important to directly compare the model before and after the additional training to see if there's any improvement.
Dana
10 months ago
I agree, back-to-back testing seems like the most practical approach in this case.
Glory
11 months ago
That's a good point, Shenika. Maybe we should consider both back-to-back testing and adversarial testing for a more thorough verification.
Shenika
11 months ago
But wouldn't adversarial testing also be important to make sure no incorrect images were used in the training?
Delbert
11 months ago
Haha, I can just imagine the team trying to train the model to not confuse wolves and dogs. It's like teaching a toddler the difference between a lion and a house cat.
Keneth
10 months ago
A: Exactly, it's the best way to verify the improvement.
Alpha
10 months ago
B: Yeah, that way we can compare the model before and after the additional training.
Omer
11 months ago
A: We should use back-to-back testing to see if the model has improved.
Malinda
11 months ago
I agree with Glory, comparing the model before and after training is the best way to see if it has improved.
Glory
11 months ago
I think we should use back-to-back testing to verify the model's improvement.
Victor
11 months ago
I agree with Eugene. Back-to-back testing is the most straightforward approach to verify the improvement in the model's ability to differentiate between wolves and dogs.
Eugene
11 months ago
Option D seems like the way to go. Back-to-back testing will let you clearly see the impact of the additional training on the model's performance.
Trina
10 months ago
Let's go with that method then.
Winfred
10 months ago
Agreed, back-to-back testing will show us if the additional training made a difference.
Malcom
11 months ago
I think we should use option D for testing.
