Welcome to Pass4Success


iSQI Exam CT-AI Topic 9 Question 22 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 22
Topic #: 9

A wildlife conservation group would like to use a neural network to classify images of different animals. The algorithm is going to be used on a social media platform to automatically pick out pictures of the chosen animal of the month. This month's animal is set to be a wolf. The test team has already observed that the algorithm could classify a picture of a dog as being a wolf because of the similar characteristics between dogs and wolves. To handle such instances, the team is planning to train the model with additional images of wolves and dogs so that the model is able to better differentiate between the two.

What test method should you use to verify that the model has improved after the additional training?

Suggested Answer: D

Back-to-back testing is used to compare two different versions of an ML model, which is precisely what is needed in this scenario.

The model initially misclassified dogs as wolves due to feature similarities.

The test team retrains the model with additional images of dogs and wolves.

The best way to verify whether this additional training improved classification accuracy is to compare the original model's output with the newly trained model's output using the same test dataset.
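The comparison described above can be sketched in a few lines of Python. This is a minimal illustration of back-to-back testing, not the conservation group's actual pipeline: the two "models" are hypothetical stand-in functions (one that confuses dogs with wolves, one retrained to separate them), and the test set is a toy list of labelled records rather than real images.

```python
def model_v1(image):
    # Stand-in for the original classifier: misclassifies dogs as wolves.
    return "wolf" if image["species"] in ("wolf", "dog") else "other"

def model_v2(image):
    # Stand-in for the retrained classifier: separates dogs from wolves.
    return "wolf" if image["species"] == "wolf" else "other"

# The same labelled test dataset is used for BOTH model versions --
# this is the defining property of back-to-back testing.
test_set = [
    {"species": "wolf", "label": "wolf"},
    {"species": "dog",  "label": "other"},
    {"species": "cat",  "label": "other"},
    {"species": "wolf", "label": "wolf"},
]

def accuracy(model, data):
    return sum(model(x) == x["label"] for x in data) / len(data)

def back_to_back(old_model, new_model, data):
    """Run both versions on identical inputs and report where they diverge."""
    disagreements = [x for x in data if old_model(x) != new_model(x)]
    return accuracy(old_model, data), accuracy(new_model, data), disagreements

acc_old, acc_new, diffs = back_to_back(model_v1, model_v2, test_set)
print(f"v1 accuracy: {acc_old:.2f}, v2 accuracy: {acc_new:.2f}")
print(f"inputs where the versions disagree: {len(diffs)}")
```

Because both versions see identical inputs, any difference in output is attributable to the retraining, which is exactly the evidence the test team needs.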

Why Other Options Are Incorrect:

A (Metamorphic Testing): Metamorphic testing is useful for generating new test cases based on existing ones but does not directly compare different model versions.

B (Adversarial Testing): Adversarial testing is used to check how robust a model is against maliciously perturbed inputs, not to verify training effectiveness.

C (Pairwise Testing): Pairwise testing is a combinatorial technique for reducing the number of test cases by focusing on key variable interactions, not for validating model improvements.

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 9.3: Back-to-Back Testing)

"Back-to-back testing is used when an updated ML model needs to be compared against a previous version to confirm that it performs better or as expected."

"The results of the newly trained model are compared with those of the prior version to ensure that changes did not negatively impact performance."

Conclusion:

To verify that the model's performance improved after retraining, back-to-back testing is the most appropriate method because it directly compares the outputs of both model versions on the same test data. Hence, the correct answer is D.


Contribute your Thoughts:

Glory
1 day ago
That's a good point, Shenika. Maybe we should consider both back-to-back testing and adversarial testing for a more thorough verification.
upvoted 0 times
...
Shenika
2 days ago
But wouldn't adversarial testing also be important to make sure no incorrect images were used in the training?
upvoted 0 times
...
Delbert
2 days ago
Haha, I can just imagine the team trying to train the model to not confuse wolves and dogs. It's like teaching a toddler the difference between a lion and a house cat.
upvoted 0 times
...
Malinda
4 days ago
I agree with Glory, comparing the model before and after training is the best way to see if it has improved.
upvoted 0 times
...
Glory
8 days ago
I think we should use back-to-back testing to verify the model's improvement.
upvoted 0 times
...
Victor
10 days ago
I agree with Eugene. Back-to-back testing is the most straightforward approach to verify the improvement in the model's ability to differentiate between wolves and dogs.
upvoted 0 times
...
Eugene
14 days ago
Option D seems like the way to go. Back-to-back testing will let you clearly see the impact of the additional training on the model's performance.
upvoted 0 times
Malcom
2 minutes ago
I think we should use option D for testing.
upvoted 0 times
...
...
