
    Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks

    Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper challenges the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a dataset consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks, previously underestimated by the research community. These conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural networks' latent representations, (4) an analysis of the networks' decision boundaries and (5) the use of theories on the equivalence of L2 and L∞ perturbation norms.
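As an illustration of the bounded perturbations involved (a hedged sketch, not the paper's code): an FGSM-style step produces a perturbation whose L∞ norm equals the budget eps, while its L2 norm grows with the dimension, which is the geometric fact behind point (5). The loss gradient below is a random stand-in; a real attack would backpropagate through the trained DNN.

```python
import numpy as np

def linf_perturb(x, loss_grad, eps):
    """FGSM-style step: move eps in the sign direction of the loss gradient."""
    delta = eps * np.sign(loss_grad)   # ||delta||_inf == eps (for nonzero grads)
    return x + delta, delta

rng = np.random.default_rng(0)
x = rng.normal(size=(8,))
g = rng.normal(size=(8,))              # stand-in for dL/dx
x_adv, delta = linf_perturb(x, g, eps=0.03)

print(np.max(np.abs(delta)))           # 0.03: the L-infinity budget
print(np.linalg.norm(delta))           # 0.03*sqrt(8): a dense sign perturbation
                                       # has a much larger L2 norm
```

The second print shows why L∞ and L2 threat models are comparable only after rescaling by sqrt(d): an L∞ ball of radius eps contains points whose L2 norm is eps*sqrt(d).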

    Using Proximity Graph Cut for Fast and Robust Instance-Based Classification in Large Datasets

    K-nearest neighbours (kNN) is a very popular instance-based classifier due to its simplicity and good empirical performance. However, large-scale datasets are a big problem for building fast and compact neighbourhood-based classifiers. This work presents the design and implementation of a classification algorithm with index data structures, which allows us to build fast and scalable solutions for large multidimensional datasets. We propose a novel approach that uses a navigable small-world (NSW) proximity graph representation of large-scale datasets. Our approach shows a 2-4 times classification speedup for both average and 99th percentile time, with asymptotically close classification accuracy compared to the 1-NN method. We observe two orders of magnitude better classification time in cases when the method uses swap memory. We show that the NSW graph used in our method outperforms other proximity graphs in classification accuracy. Our results suggest that the algorithm can be used in large-scale applications for fast and robust classification, especially when the search index is already constructed for the data.
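The routing primitive behind such graph-based classification can be sketched as follows (a hedged illustration, not the authors' implementation: the graph here is a brute-force k-NN graph, whereas a real NSW index is built incrementally with long-range links that give its favourable search scaling). The query is labelled with the class of the node where greedy routing stops.

```python
import numpy as np

def build_knn_graph(points, k=3):
    """Brute-force k-NN graph: each node links to its k nearest neighbours."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def greedy_search(points, graph, query, start=0):
    """Walk to the neighbour closest to the query until no neighbour improves."""
    cur = start
    while True:
        neigh = graph[cur]
        best = neigh[np.argmin(np.linalg.norm(points[neigh] - query, axis=1))]
        if np.linalg.norm(points[best] - query) >= np.linalg.norm(points[cur] - query):
            return cur                     # local minimum = candidate 1-NN
        cur = best

rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 2))
labels = (pts[:, 0] > 0).astype(int)       # toy two-class problem
q = np.array([0.5, -0.2])
nn = greedy_search(pts, build_knn_graph(pts, k=8), q)
print(labels[nn])                          # predicted class of the query
```

Greedy routing can stop at a local minimum of the distance, which is why NSW-style methods typically restart from several entry points and keep a small candidate beam.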

    Are Microphone Signals Alone Sufficient for Joint Microphones and Sources Localization?

    Joint microphones and sources localization can be achieved by using both time of arrival (TOA) and time difference of arrival (TDOA) measurements, even in scenarios where both microphones and sources are asynchronous due to unknown emission time of human voices or sources and unknown recording start time of independent microphones. However, TOA measurements require both microphone signals and the waveform of source signals while TDOA measurements can be obtained using microphone signals alone. In this letter, we explore the sufficiency of using only microphone signals for joint microphones and sources localization by presenting two mapping functions for both TOA and TDOA formulas. Our proposed mapping functions demonstrate that the transformations of TOA and TDOA formulas can be the same, indicating that microphone signals alone are sufficient for joint microphones and sources localization without knowledge of the waveform of source signals. We have validated our proposed mapping functions through both mathematical proof and experimental results.
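The classical identity underlying this sufficiency argument can be checked numerically (a hedged sketch; the letter's actual mapping functions are more general). A measured arrival time mixes propagation delay with the unknown emission time; differencing the arrival times at two microphones cancels the emission time, which is why TDOA is available from microphone signals alone:

```python
import numpy as np

c = 343.0                                   # speed of sound, m/s
mics = np.array([[0.0, 0.0], [1.0, 0.0]])   # two microphone positions
src = np.array([0.3, 0.7])                  # one source position
emission_time = 0.123                       # unknown in practice

# Arrival time at each microphone: geometry plus the unknown emission time.
toa = np.linalg.norm(mics - src, axis=1) / c + emission_time

# Differencing the two arrival times cancels emission_time entirely.
tdoa_measured = toa[0] - toa[1]
tdoa_geometric = (np.linalg.norm(mics[0] - src)
                  - np.linalg.norm(mics[1] - src)) / c
print(np.isclose(tdoa_measured, tdoa_geometric))   # True
```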

    Low Rank Properties for Estimating Microphones Start Time and Sources Emission Time

    The absence of timing information about the microphones' recording start times and the sources' emission times presents a challenge in several applications, including joint microphones and sources localization. Compared with traditional optimization methods that try to estimate the unknown timing information directly, the low rank property (LRP) contains an additional low rank structure that facilitates a linear constraint on the unknown timing information, enabling globally optimal solutions for the unknown timing information with suitable initialization. However, the initialization of the unknown timing information is random, resulting in local minima in its estimation. In this paper, we propose a combined low rank approximation method to alleviate the effect of random initialization on the estimation of the unknown timing information. We define three new variants of LRP, supported by proofs, that allow the unknown timing information to benefit from more low rank structure information. Then, by utilizing the low rank structure information from both LRP and the proposed variants, four linear constraints on the unknown timing information are presented. Finally, we use the proposed combined low rank approximation algorithm to obtain globally optimal solutions for the unknown timing information through the four available linear constraints. Experimental results demonstrate superior performance of our method compared to state-of-the-art approaches in terms of recovery rate (the number of successful initializations for any configuration), convergence rate (the number of successfully recovered configurations), and estimation errors of the unknown timing information.
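For readers unfamiliar with low rank approximation itself (a generic sketch, not the paper's LRP variants): by the Eckart-Young theorem, the truncated SVD gives the best rank-r approximation in Frobenius norm, which is the kind of structure such linear constraints exploit to denoise and recover the underlying matrix.

```python
import numpy as np

def low_rank_approx(A, r):
    """Best rank-r approximation of A in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 6))   # exactly rank 2
A_noisy = A + 1e-3 * rng.normal(size=A.shape)           # perturbed observation

A2 = low_rank_approx(A_noisy, 2)
print(np.linalg.norm(A2 - A))    # small: rank-2 truncation suppresses the noise
```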

    Role of Intermittent Self Catheterization after Cauda Equina Syndrome Surgery.

    Background: To determine the effectiveness and safety of intermittent self catheterization in cauda equina patients who have lost bladder control. Methods: In this prospective study, patients with symptoms and signs of cauda equina syndrome due to lumbar disc herniation, confirmed by relevant MRI, were included. Emergency surgery was performed, and postoperatively these patients were taught the technique of intermittent self catheterization. After full aseptic measures, patients were asked to sit on a chair and identify the meatus. The catheter was slowly inserted into the bladder until urine output was obtained. Pressure was applied on the lower abdomen to help empty the bladder. The Nelaton catheter was then removed and kept in a bottle of clean water. After a couple of attempts, patients learnt to pass the catheter, and were asked and helped to do this 3 to 4 times a day. A patient was discharged from the hospital only when he/she was confident enough to self-catheterize. Initially, patients were kept on biweekly follow-up and later on a monthly basis. Results: The majority (86%) continued intermittent self catheterization, but 14%, elderly patients, experienced insertion difficulty and discontinued it. Ten patients (24%) had bacteriuria during the procedure. Epididymitis was seen in 2%. There were no urethral complications, suggesting that the self-lubricating Nelaton catheters are safe and less traumatic. Conclusion: Intermittent self catheterization is a safe, effective treatment and is associated with improved quality of life in cauda equina syndrome patients.

    Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers

    © 2018 Ahmad et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with state-of-the-art methods.
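The selection criterion can be sketched as follows (a hedged illustration, not the authors' exact FALF: class memberships here come from a softmax over negative distances to per-class means, a stand-in for a trained classifier's output). Samples with the highest fuzziness, i.e. the most ambiguous memberships, are the ones nearest a class boundary:

```python
import numpy as np

def fuzziness(memberships):
    """Per-sample fuzziness: 0 for crisp memberships, maximal when uniform."""
    m = np.clip(memberships, 1e-12, 1.0)
    return -np.sum(m * np.log(m), axis=1)

def select_candidates(X, class_means, n):
    """Pick the n fuzziest samples as candidates for the training set."""
    d = np.linalg.norm(X[:, None] - class_means[None], axis=-1)
    m = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)  # soft memberships
    return np.argsort(fuzziness(m))[-n:]                    # fuzziest last

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (50, 2)),   # class 0 cloud
               rng.normal(2, 1, (50, 2))])   # class 1 cloud
means = np.array([[-2.0, 0.0], [2.0, 0.0]])
picked = select_candidates(X, means, n=5)
# the picked points lie close to the x = 0 boundary between the two classes
```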

    Abstraction-Based Outlier Detection for Image Data

    © 2021, Springer Nature Switzerland AG. Data plays an important role in all stages of training and usage of machine learning algorithms. Outliers are samples in data that are generated by a "different mechanism" and belong to unexpected patterns that do not conform to normal behaviour. Outlier detection techniques try to deal with such undesirable events. Deep learning has had exceptional success over classical methods in computer vision, and in recent years a number of works have employed the representation learning ability of deep autoencoders or Generative Adversarial Networks for outlier detection. These methods are essentially based on plugging representation techniques into outlier detection methods, or directly employ the reconstruction error as an outlier score. However, the error distributions of inliers and outliers may still overlap significantly, which can be associated with variation of samples inside a class, cases with high outlier ratios, etc. In these cases, simply thresholding reconstruction errors may lead to misclassification. Although the learnt representation may be effective in capturing the common features of the normal data, it is not necessarily effective in distinguishing outliers from inliers. We present a method that constructs new features using a convolutional variational autoencoder (VAE) and generates abstractions based on these features. For anomaly detection we tested two scenarios: utilizing the VAE itself, as well as using the abstractions to train an additional architecture. Results are presented in the form of AUC-ROC on four benchmark datasets.
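The reconstruction-error baseline that such abstraction-based methods aim to improve on can be sketched in a few lines (a hedged illustration: PCA stands in for the convolutional VAE so the example stays self-contained). Inliers near a low-dimensional subspace reconstruct well; a point far from it gets a large error, and thresholding that error is the simple scheme that breaks down when the inlier and outlier error distributions overlap:

```python
import numpy as np

def recon_error(X, X_train, r=1):
    """Reconstruction error of X under a rank-r PCA model fit on X_train."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]                    # projector onto the top-r subspace
    residual = (X - mu) - (X - mu) @ P       # part the model cannot reconstruct
    return np.linalg.norm(residual, axis=1)  # per-sample outlier score

rng = np.random.default_rng(4)
t = rng.normal(size=(200, 1))
inliers = t @ np.array([[1.0, 0.5]]) + 0.01 * rng.normal(size=(200, 2))
outlier = np.array([[0.0, 3.0]])             # far off the inlier line

scores = recon_error(np.vstack([inliers[:5], outlier]), inliers)
print(scores[-1] > scores[:5].max())         # True: the outlier scores highest
```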
