Within the last few years, deep learning models have seen increasing use across computer vision domains and have attained high accuracy on several visual identification tasks. However, when faced with real-world data containing visible anomalies, their accuracy can drop significantly. In addition, adversarial examples, inputs with deliberate perturbations that induce misclassification, have further hindered their performance. The training methods used to mitigate adversarial examples have also been found to degrade a model's robustness on clean data as well as on common noise and distortions, which may be an undesirable trade-off. Adversarial robustness is a model's ability to resist intentionally deceptive inputs, while corruption robustness is its resilience to everyday noise and distortions. In our work, we adopt a technique that improves a neural network's ability to generalize by training on augmented images of the dataset, evaluate its efficacy against adversarial examples with several empirical metrics, and compare its performance with traditional adversarial training techniques.
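To make the two notions of robustness concrete, the following minimal PyTorch sketch (our own illustration, not code from this work) shows one standard way adversarial examples are generated, via the fast gradient sign method (FGSM), alongside a simple label-preserving augmentation pipeline of the kind used in augmentation-based training. The perturbation budget, image size, and specific transforms are illustrative assumptions, not parameters taken from the study.

```python
# Illustrative sketch only: FGSM adversarial perturbation vs. everyday-style
# image augmentation. Model, dataset, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torchvision.transforms as T

def fgsm_example(model, x, y, eps=8 / 255):
    """Create an adversarial example: a small, intentional perturbation
    of input x (in the direction that increases the loss) that can
    induce misclassification. eps is an assumed perturbation budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the gradient, then clamp to valid pixel range.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Augmentation-based training instead applies random, label-preserving
# distortions (crops, flips, color shifts) that mimic everyday variation,
# targeting corruption robustness rather than worst-case inputs.
augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])
```

Traditional adversarial training would feed examples like `fgsm_example(...)` back into the training loss, whereas the augmentation approach evaluated here trains on outputs of a pipeline like `augment`; the trade-off discussed above is between these two sources of training-time perturbation.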