A wildlife conservation group would like to use a neural network to classify images of different animals. The algorithm will be used on a social media platform to automatically pick out pictures of the chosen animal of the month. This month's animal is a wolf. The test team has already observed that the algorithm could misclassify a picture of a dog as a wolf because of the similarities between dogs and wolves. To handle such instances, the team plans to train the model with additional images of wolves and dogs so that it can better differentiate between the two.
What test method should you use to verify that the model has improved after the additional training?
Back-to-back testing is used to compare two different versions of an ML model, which is precisely what is needed in this scenario.
The model initially misclassified dogs as wolves due to feature similarities.
The test team retrains the model with additional images of dogs and wolves.
The best way to verify whether this additional training improved classification accuracy is to compare the original model's output with the newly trained model's output using the same test dataset.
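For illustration, here is a minimal sketch of such a back-to-back comparison. The data, models, and thresholds are stand-ins invented for the example (the team's actual image pipeline is not shown in the question); the essential point is that the original and the retrained model are evaluated on the identical test set and their results are compared.

```python
# Minimal back-to-back testing sketch (illustrative only, not the team's actual pipeline).
# Both model versions are evaluated on the SAME held-out test set, and their
# accuracies and disagreements are compared to confirm the retraining helped.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for image feature vectors (label 1 = "wolf", 0 = "dog").
X = rng.normal(size=(2000, 32))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# "Original" model: trained on a small subset (mimics the weaker first version).
model_v1 = RandomForestClassifier(random_state=0).fit(X_train[:200], y_train[:200])

# "Retrained" model: trained on the full set, i.e. with the additional dog/wolf images.
model_v2 = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Back-to-back comparison on the identical test data.
pred_v1 = model_v1.predict(X_test)
pred_v2 = model_v2.predict(X_test)

acc_v1 = accuracy_score(y_test, pred_v1)
acc_v2 = accuracy_score(y_test, pred_v2)
disagreements = int(np.sum(pred_v1 != pred_v2))

print(f"v1 accuracy: {acc_v1:.3f}")
print(f"v2 accuracy: {acc_v2:.3f}")
print(f"inputs where the two versions disagree: {disagreements}")
assert acc_v2 >= acc_v1, "Retrained model should not perform worse than the original"
```

In practice the inputs where the two versions disagree are worth reviewing individually, since they show exactly where the retraining changed the model's behaviour.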
Why Other Options Are Incorrect:
A (Metamorphic Testing): Metamorphic testing is useful for generating follow-up test cases from existing ones via known input-output relations, but it does not directly compare two model versions (see the sketch after this list).
B (Adversarial Testing): Adversarial testing is used to check how robust a model is against maliciously perturbed inputs, not to verify training effectiveness.
C (Pairwise Testing): Pairwise testing is a combinatorial technique for reducing the number of test cases by focusing on key variable interactions, not for validating model improvements.
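To make the contrast with option A concrete, here is a minimal metamorphic-testing sketch. It checks a single model against a relation between related inputs (a horizontally flipped image should keep its label) rather than comparing two model versions. The `classify` function below is a hypothetical stand-in invented for the example.

```python
# Minimal metamorphic-testing sketch (illustrative; the classifier is a stand-in).
# A metamorphic relation is checked against ONE model: flipping an image
# left-to-right should not change the predicted label. No second model version
# is involved, which is why this does not answer the question above.
import numpy as np

def classify(image: np.ndarray) -> str:
    # Hypothetical stand-in classifier: decides "wolf" vs "dog" from mean brightness.
    return "wolf" if image.mean() > 0.5 else "dog"

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(10)]  # fake grayscale test images

for img in images:
    original_label = classify(img)
    flipped_label = classify(np.fliplr(img))  # follow-up test case derived from the original
    assert original_label == flipped_label, "Metamorphic relation violated: flip changed the label"

print("All metamorphic checks passed for this (stand-in) classifier.")
```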
Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:
ISTQB CT-AI Syllabus (Section 9.3: Back-to-Back Testing)
"Back-to-back testing is used when an updated ML model needs to be compared against a previous version to confirm that it performs better or as expected."
"The results of the newly trained model are compared with those of the prior version to ensure that changes did not negatively impact performance."
Conclusion:
To verify that the model's performance improved after retraining, back-to-back testing is the most appropriate method, as it compares the outputs of both model versions on the same data. Hence, the correct answer is D.