
    Robustness of Adversarial Attacks in Sound Event Classification

    An adversarial attack is a method for generating perturbations to the input of a machine learning model so that the model's output becomes incorrect. The perturbed inputs are known as adversarial examples. In this paper, we investigate the robustness of adversarial examples to simple input transformations such as mp3 compression, resampling, white noise and reverb in the task of sound event classification. By performing this analysis, we aim to provide insight into the strengths and weaknesses of current adversarial attack algorithms, as well as a baseline for defenses against adversarial attacks. Our work shows that adversarial attacks are not robust to simple input transformations. White noise is the most consistent method for defending against adversarial attacks, with a success rate of 73.72% averaged across all models and attack algorithms.
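    A minimal sketch of one such transformation-based check, assuming a generic classifier interface (model.predict) and an illustrative SNR value; this is not the paper's implementation. Here a defense "succeeds" when the white-noised adversarial example is classified as the clean label again.

    import numpy as np

    def add_white_noise(waveform: np.ndarray, snr_db: float = 30.0) -> np.ndarray:
        """Add white Gaussian noise to a waveform at the given SNR (dB)."""
        signal_power = np.mean(waveform ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        noise = np.random.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
        return waveform + noise

    def defense_succeeds(model, adversarial: np.ndarray, clean_label: int) -> bool:
        """The defense counts as successful if the transformed adversarial
        example is classified as the original (clean) label again."""
        transformed = add_white_noise(adversarial)
        return model.predict(transformed) == clean_label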

    Infra-red Pupil Detection for Use in a Face Recognition System

    This paper presents a new method of eye localisation and face segmentation for use in a face recognition system. By using two near-infrared light sources, we show that the face can be coarsely segmented and the eyes can be accurately located, increasing the accuracy of face localisation and improving the overall speed of the system. The system is able to locate both eyes within 25% of the eye-to-eye distance in over 96% of test cases.
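    A minimal sketch of the quoted localisation criterion, assuming (x, y) pixel coordinates for predicted and ground-truth eye centres; the function and argument names are illustrative, not the paper's code.

    import math

    def eye_within_tolerance(pred, truth, other_eye_truth, fraction=0.25):
        """A detected eye counts as correct when its distance from the
        ground-truth eye centre is within `fraction` (here 25%) of the
        ground-truth eye-to-eye distance."""
        inter_eye = math.dist(truth, other_eye_truth)  # ground-truth eye-to-eye distance
        error = math.dist(pred, truth)                 # localisation error for this eye
        return error <= fraction * inter_eye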

    Anomalous behaviour in loss-gradient based interpretability methods

    Loss-gradients are used to interpret the decision-making process of deep learning models. In this work, we evaluate loss-gradient based attribution methods by occluding parts of the input and comparing the performance of the occluded input to the original input. We observe that, under certain conditions, the occluded input performs better than the original across the test dataset. Similar behaviour is observed in both sound and image recognition tasks. We explore different loss-gradient attribution methods, occlusion levels and replacement values to explain this phenomenon of performance improvement under occlusion. Comment: Accepted at ICLR RobustML workshop 202
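    A hedged sketch of the occlusion experiment described above, assuming a PyTorch classifier; the occlusion level and replacement value are illustrative parameters, not the paper's exact settings.

    import torch
    import torch.nn.functional as F

    def occlude_by_loss_gradient(model, x, y, occlusion_level=0.1, replacement=0.0):
        """Occlude the most-attributed fraction of the input, where the
        attribution map is the absolute loss-gradient w.r.t. the input."""
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]

        # Rank input elements by attribution magnitude, per sample.
        attribution = grad.abs().flatten(start_dim=1)
        k = int(occlusion_level * attribution.shape[1])
        top_idx = attribution.topk(k, dim=1).indices

        # Replace the top-attributed elements with the chosen replacement value,
        # then compare model performance on the occluded vs. original input.
        occluded = x.detach().flatten(start_dim=1).clone()
        occluded.scatter_(1, top_idx, replacement)
        return occluded.view_as(x)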

    Bader’s theory of atoms in molecules (AIM) and its applications to chemical bonding

    In this perspective article, the basic theory and applications of the "Quantum Theory of Atoms in Molecules" are presented, with examples from different categories of weak and hydrogen-bonded molecular systems.

    Curriculum based dropout discriminator for domain adaptation

    Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem, using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator suffices to obtain the distribution based discriminator. Specifically, we propose a curriculum based dropout discriminator that gradually increases the variance of the sample based distribution; the corresponding reverse gradients are used to align the source and target feature representations. Detailed results and a thorough ablation analysis show that our model outperforms the state of the art. Comment: Accepted at BMVC 2019, Project Page: https://delta-lab-iitk.github.io/CD3A
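    A hedged sketch of the two core components named above: a gradient-reversal function for adversarial feature alignment and a discriminator whose dropout stays active so that repeated forward passes form a Monte Carlo ensemble. Treating the curriculum as a gradually increasing dropout rate is an assumption about one way to realise the idea, not the authors' released code.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negates (and scales) gradients in the
        backward pass so the feature extractor is trained adversarially."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class MCDropoutDiscriminator(nn.Module):
        def __init__(self, feat_dim=256, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(),
                nn.Dropout(p=0.1),              # rate is raised by the curriculum
                nn.Linear(hidden, 1),
            )

        def forward(self, features, lambd=1.0, n_samples=4, dropout_p=0.1):
            self.net[2].p = dropout_p           # curriculum-controlled dropout rate
            reversed_feats = GradReverse.apply(features, lambd)
            # With the module in train mode, dropout stays active, so each
            # forward pass is one Monte Carlo sample from the discriminator
            # ensemble; their mean is the domain prediction.
            samples = [torch.sigmoid(self.net(reversed_feats)) for _ in range(n_samples)]
            return torch.stack(samples).mean(dim=0)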