In general, are models that perform their tasks more accurately also more robust against adversarial attacks?
Adversarial attacks are malicious attempts to fool or manipulate a machine learning model by adding small perturbations to its input: changes that are imperceptible to humans but can cause significant changes in the model's output. In general, models that perform their tasks more accurately tend to be less robust against adversarial attacks, because they tend to have higher confidence in their predictions and are therefore more sensitive to small changes in the input data.
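To make the "small perturbation" idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic attack of this kind, written in PyTorch. The `model`, the inputs `x` (assumed to lie in [0, 1]), the labels `y`, and the step size `epsilon` are hypothetical placeholders for illustration, not anything taken from the sources below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft adversarial examples with the Fast Gradient Sign Method.

    Each input is nudged by `epsilon` in the direction that most
    increases the model's loss: a change too small to notice per
    pixel, yet often enough to flip the model's prediction.
    """
    # Work on a fresh leaf tensor so gradients flow to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back
    # into the assumed valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: compare predictions on clean vs. perturbed input.
# preds_clean = model(x).argmax(dim=1)
# preds_adv = model(fgsm_attack(model, x, y)).argmax(dim=1)
```

Even a single gradient step like this often suffices to change the predicted class while the perturbed input still looks unchanged to a human, which is exactly the sensitivity described above.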
References: Adversarial machine learning (Wikipedia); Why Are Machine Learning Models Susceptible to Adversarial Attacks? (Anirudh Jain, Towards Data Science)