Synthesizing Adversarial Examples for Neural Networks

As machine learning is integrated into more and more systems, such as autonomous vehicles or medical devices, these systems also become entry points for attacks. Many state-of-the-art neural networks have been proven vulnerable to adversarial examples. These failures of machine learning models demonstrate that even simple algorithms can behave very differently from what their designers intend. To close this gap between designer intent and algorithm behavior, there is a pressing need to prevent adversarial examples and thereby improve the credibility of the model. This study focuses on synthesizing adversarial examples using two different white-box attacks: the Fast Gradient Sign Method (FGSM) and the Expectation Over Transformation (EOT) method.
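For concreteness, the following is a minimal sketch of the two attacks in PyTorch. It is not the paper's implementation; all names (fgsm, eot_grad, eps, transforms) are illustrative assumptions, and the sketch assumes a differentiable image classifier with inputs in [0, 1]. FGSM takes a single step in the direction of the sign of the input gradient, x_adv = x + eps * sign(grad_x J(theta, x, y)); EOT instead averages the gradient over randomly sampled transformations t ~ T so the perturbation remains effective under, e.g., rotation or rescaling.

    import random
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        # Fast Gradient Sign Method: one signed-gradient step that
        # increases the classification loss on the true label y.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    def eot_grad(model, x, y_target, transforms, n=10):
        # Expectation Over Transformation: estimate the gradient of the
        # expected target-class loss over transformations t ~ T by
        # averaging over n randomly sampled differentiable transforms.
        x = x.clone().detach().requires_grad_(True)
        loss = 0.0
        for _ in range(n):
            t = random.choice(transforms)
            loss = loss + F.cross_entropy(model(t(x)), y_target)
        (loss / n).backward()
        return x.grad.detach()

In this sketch, eot_grad only returns the averaged gradient; an attacker would take repeated small steps with it (as in iterative gradient attacks) to drive the classifier toward y_target, whereas fgsm is a single-step attack.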