
    Efficient Defenses Against Adversarial Attacks

    Following the recent adoption of deep neural networks (DNNs) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with the deliberate intention of undermining a system. In the case of DNNs, the limited understanding of their inner workings has prevented the development of efficient defenses. In this paper, we propose a new defense method based on practical observations that is easy to integrate into models and performs better than state-of-the-art defenses. Our proposed solution reinforces the structure of a DNN, making its predictions more stable and less likely to be fooled by adversarial samples. We conduct an extensive experimental study demonstrating the efficiency of our method against multiple attacks and comparing it to numerous defenses, in both white-box and black-box setups. Additionally, our method adds almost no overhead to the training procedure while maintaining the prediction performance of the original model on clean samples.
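
    For readers unfamiliar with how adversarial samples are produced, the sketch below shows one standard crafting procedure, the Fast Gradient Sign Method (FGSM), in PyTorch. The abstract does not specify which attacks are used; FGSM here is only an illustrative assumption, and the function name, epsilon value, and [0, 1] pixel range are chosen for the example.

        import torch
        import torch.nn.functional as F

        def fgsm_adversarial_sample(model, x, label, epsilon=0.03):
            """Illustrative FGSM attack (not the paper's specific setup).

            Perturbs the input in the direction that most increases the
            classification loss, bounded by epsilon per pixel.
            """
            x = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x), label)
            loss.backward()
            # One signed-gradient step, then clamp back to the valid pixel range.
            x_adv = x + epsilon * x.grad.sign()
            return x_adv.clamp(0.0, 1.0).detach()

    A defense such as the one described above would be evaluated by measuring the model's accuracy on samples produced this way, with and without the defense in place.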

    The Study of Activation Functions in Deep Learning for Pedestrian Detection and Tracking

    Pedestrian detection and tracking remains a prominent research topic due to its importance in video surveillance, human-machine interaction, and tracking analysis. At present, pedestrian detection is still an open problem because of the many challenges of image representation in outdoor and indoor scenes. In recent years, deep learning, and in particular Convolutional Neural Networks (CNNs), has become the state of the art in terms of accuracy in many computer vision tasks, although unsupervised learning of CNNs remains an open issue. In this paper, we study feature extraction using a special activation function. Most CNNs share the same architecture, in which each convolutional layer is followed by a nonlinear activation layer. The Rectified Linear Unit (ReLU) is the most widely used activation function, as a fast alternative to the sigmoid function. We propose a bounded randomized leaky ReLU in which the slope of the linear part covering the highest input values is tuned during the learning stage, and this linear part can be directed not only upward but also downward using a variable bias for its starting point. The bounded randomized leaky ReLU was tested on the Caltech Pedestrian Dataset with promising results.
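
    The abstract only sketches the proposed activation, so the following NumPy snippet is a minimal illustration of one plausible reading, not the authors' implementation. The function name, the slope range for the randomized negative leak, the cutoff value `bound`, and the parameters `pos_slope` and `bias` are all assumptions made for the example.

        import numpy as np

        def bounded_randomized_leaky_relu(x, pos_slope=0.5, bias=0.0,
                                          bound=6.0, training=True,
                                          rng=np.random.default_rng()):
            """Sketch of a bounded randomized leaky ReLU (assumed form).

            - Negative inputs get a small slope drawn at random during
              training, as in a randomized leaky ReLU.
            - Inputs above `bound` follow a separate linear part whose slope
              (`pos_slope`, possibly negative, i.e. directed downward) and
              starting offset (`bias`) would be learned in the real model.
            """
            # Randomized leak for the negative part while training,
            # a fixed small slope at inference time.
            neg_slope = rng.uniform(1 / 8, 1 / 3) if training else 0.2
            out = np.where(x < 0, neg_slope * x, x)
            # Above the bound, switch to the tunable linear part that starts
            # at (bound + bias) and grows or decays with pos_slope.
            out = np.where(x > bound, bound + bias + pos_slope * (x - bound), out)
            return out

    Under this reading, setting `pos_slope` to a negative value directs the upper linear part downward, which is the behaviour the abstract describes for the highest input values.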