
    Prototypical Networks for Few-shot Learning

    We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
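
    As a concrete illustration of the classification rule described above, the NumPy sketch below computes each class prototype as the mean of its embedded support examples and classifies queries by a softmax over negative squared Euclidean distances to the prototypes. The embedding network is assumed to exist elsewhere; the episode sizes, dimensions and random data are illustrative only.

        # Minimal sketch of the prototypical-network classification rule (NumPy only).
        # The embedding function is assumed to be given; inputs here are already "embedded".
        import numpy as np

        def prototypes(support_embeddings, support_labels, num_classes):
            """Prototype of each class = mean of its embedded support examples."""
            return np.stack([support_embeddings[support_labels == c].mean(axis=0)
                             for c in range(num_classes)])

        def classify(query_embeddings, protos):
            """Softmax over negative squared Euclidean distances to the prototypes."""
            d2 = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
            logits = -d2
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            p = np.exp(logits)
            return p / p.sum(axis=1, keepdims=True)       # class probabilities

        # Toy 3-way, 2-shot episode with 4-dimensional "embeddings".
        rng = np.random.default_rng(0)
        support = rng.normal(size=(6, 4))
        labels = np.array([0, 0, 1, 1, 2, 2])
        queries = rng.normal(size=(3, 4))
        probs = classify(queries, prototypes(support, labels, num_classes=3))
        print(probs.argmax(axis=1))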

    Classification of Multiwavelength Transients with Machine Learning

    With the advent of powerful telescopes such as the Square Kilometre Array and the Vera C. Rubin Observatory, we are entering an era of multiwavelength transient astronomy that will lead to a dramatic increase in data volume. Machine learning techniques are well suited to address this data challenge and rapidly classify newly detected transients. We present a multiwavelength classification algorithm consisting of three steps: (1) interpolation and augmentation of the data using Gaussian processes; (2) feature extraction using wavelets; and (3) classification with random forests. Augmentation provides improved performance at test time by balancing the classes and adding diversity to the training set. In the first application of machine learning to the classification of real radio transient data, we apply our technique to the Green Bank Interferometer and other radio light curves. We find we are able to accurately classify most of the 11 classes of radio variables and transients after just eight hours of observations, achieving an overall test accuracy of 78 percent. We fully investigate the impact of the small sample size of 82 publicly available light curves and use data augmentation techniques to mitigate the effect. We also show that, on a significantly larger simulated representative training set, the algorithm achieves an overall accuracy of 97 percent, illustrating that the method is likely to provide excellent performance on future surveys. Finally, we demonstrate the effectiveness of simultaneous multiwavelength observations by showing how incorporating just one optical data point into the analysis improves the accuracy of the worst performing class by 19 percent.
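
    A minimal sketch of the three-step pipeline (Gaussian-process interpolation, wavelet features, random forest) is given below, assuming scikit-learn for the GP and forest and the pywt package for the wavelet step; the toy light curves, time grid, kernel and wavelet choice are illustrative assumptions rather than the paper's configuration.

        # Hedged sketch: GP interpolation onto a regular grid, wavelet features, random forest.
        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def gp_interpolate(times, fluxes, grid):
            """Step 1: regress an irregularly sampled light curve onto a regular time grid."""
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
            gp.fit(times.reshape(-1, 1), fluxes)
            return gp.predict(grid.reshape(-1, 1))

        def wavelet_features(signal, wavelet="db2", level=2):
            """Step 2: concatenate multilevel discrete wavelet coefficients."""
            return np.concatenate(pywt.wavedec(signal, wavelet, level=level))

        # Toy data: two classes of noisy sinusoidal "light curves" sampled at random times.
        rng = np.random.default_rng(1)
        grid = np.linspace(0.0, 8.0, 64)
        X, y = [], []
        for label, freq in [(0, 0.5), (1, 2.0)]:
            for _ in range(20):
                t = np.sort(rng.uniform(0.0, 8.0, size=30))
                f = np.sin(freq * t) + 0.1 * rng.normal(size=t.size)
                X.append(wavelet_features(gp_interpolate(t, f, grid)))
                y.append(label)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)   # Step 3
        clf.fit(np.array(X), np.array(y))
        print(clf.score(np.array(X), np.array(y)))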

    Deep Transfer Learning: A new deep learning glitch classification method for advanced LIGO

    The exquisite sensitivity of the advanced LIGO detectors has enabled the detection of multiple gravitational wave signals. The sophisticated design of these detectors mitigates the effect of most types of noise. However, advanced LIGO data streams are contaminated by numerous artifacts known as glitches: non-Gaussian noise transients with complex morphologies. Given their high rate of occurrence, glitches can lead to false coincident detections, obscure and even mimic gravitational wave signals. Therefore, successfully characterizing and removing glitches from advanced LIGO data is of utmost importance. Here, we present the first application of Deep Transfer Learning for glitch classification, showing that knowledge from deep learning algorithms trained for real-world object recognition can be transferred for classifying glitches in time-series based on their spectrogram images. Using the Gravity Spy dataset, containing hand-labeled, multi-duration spectrograms obtained from real LIGO data, we demonstrate that this method enables optimal use of very deep convolutional neural networks for classification given small training datasets, significantly reduces the time for training the networks, and achieves state-of-the-art accuracy above 98.8%, with perfect precision-recall on 8 out of 22 classes. Furthermore, new types of glitches can be classified accurately given few labeled examples with this technique. Once trained via transfer learning, we show that the convolutional neural networks can be truncated and used as excellent feature extractors for unsupervised clustering methods to identify new classes based on their morphology, without any labeled examples. Therefore, this provides a new framework for dynamic glitch classification for gravitational wave detectors, which are expected to encounter new types of noise as they undergo gradual improvements to attain design sensitivity.
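
    The transfer-learning recipe can be sketched as follows: start from an ImageNet-pretrained CNN, replace the classifier head with one sized to the glitch classes, fine-tune on labelled spectrograms, and finally truncate the head so the network serves as a feature extractor for unsupervised clustering. The sketch below assumes PyTorch and a recent torchvision (for the string-valued weights argument); the class count is taken from the abstract and the spectrogram data pipeline is omitted.

        # Hedged sketch of transfer learning on spectrogram images with a pretrained CNN.
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_GLITCH_CLASSES = 22   # the abstract mentions 22 classes

        # Start from an ImageNet-pretrained network (weights download on first use)
        # and replace only the classifier head.
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Linear(backbone.fc.in_features, NUM_GLITCH_CLASSES)

        # Fine-tune on labelled spectrograms (dataloader omitted in this sketch).
        optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        def train_step(images, labels):
            optimizer.zero_grad()
            loss = criterion(backbone(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()

        # After training, drop the classifier head to obtain a feature extractor
        # whose outputs can be fed to an unsupervised clustering method.
        feature_extractor = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
        with torch.no_grad():
            feats = feature_extractor(torch.randn(4, 3, 224, 224))   # dummy spectrogram batch
        print(feats.shape)   # (4, 512) feature vectors for clustering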

    Semi-supervised Learning of Fetal Anatomy from Ultrasound

    Semi-supervised learning methods have achieved excellent performance on standard benchmark datasets using very few labelled images. Anatomy classification in fetal 2D ultrasound is an ideal problem setting to test whether these results translate to non-ideal data. Our results indicate that inclusion of a challenging background class can be detrimental and that semi-supervised learning mostly benefits classes that are already distinct, sometimes at the expense of more similar classes.

    Texture Segmentation by Evidence Gathering

    A new approach to texture segmentation is presented which uses Local Binary Pattern data to provide evidence from which pixels can be classified into texture classes. The proposed algorithm, which we contend to be the first use of evidence gathering in the field of texture classification, uses Generalised Hough Transform style R-tables as unique descriptors for each texture class, and an accumulator is used to store votes for each texture class. Tests on the Brodatz database and Berkeley Segmentation Dataset have shown that our algorithm provides excellent results; an average of 86.9% was achieved over 50 tests on 27 Brodatz textures, compared with 80.3% achieved by segmentation by histogram comparison centred on each pixel. In addition, our results provide noticeably smoother texture boundaries and reduced noise within texture regions. The concept is also a "higher order" texture descriptor, whereby the arrangement of texture elements is used for classification as well as the frequency of occurrence featured in standard texture operators. This results in a unique descriptor for each texture class based on the structure of texture elements within the image, which leads to a segmentation of texture that is homogeneous in both boundary and area.
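
    A much-simplified sketch of the evidence-gathering idea is shown below, assuming scikit-image for the Local Binary Pattern computation: each texture class is summarized by the relative frequency of its LBP codes (standing in for the R-table), and every pixel accumulates votes for each class from the codes in its neighbourhood. Window size, LBP parameters and the toy textures are illustrative assumptions, not the paper's configuration.

        # Simplified evidence gathering with Local Binary Patterns (NumPy + scikit-image).
        import numpy as np
        from skimage.feature import local_binary_pattern

        P, R = 8, 1
        N_BINS = P + 2   # number of codes produced by the 'uniform' LBP variant

        def class_table(texture_image):
            """Per-class code-frequency table built from a training texture patch."""
            codes = local_binary_pattern(texture_image, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
            return hist

        def segment(image, tables, window=15):
            """Accumulate votes for each class over a window around every pixel."""
            codes = local_binary_pattern(image, P, R, method="uniform").astype(int)
            h, w = codes.shape
            half = window // 2
            labels = np.zeros((h, w), dtype=int)
            for i in range(h):
                for j in range(w):
                    win = codes[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1]
                    hist, _ = np.histogram(win, bins=N_BINS, range=(0, N_BINS), density=True)
                    votes = np.array([t @ hist for t in tables])   # accumulator per class
                    labels[i, j] = int(votes.argmax())
            return labels

        # Toy usage: two synthetic "textures" (smooth ramp vs. noise) and a half-and-half test image.
        rng = np.random.default_rng(2)
        smooth = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)
        noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        tables = [class_table(smooth), class_table(noisy)]
        test = np.hstack([smooth[:, :32], noisy[:, :32]])
        print(np.bincount(segment(test, tables).ravel()))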

    Moving Towards Open Set Incremental Learning: Readily Discovering New Authors

    The classification of textual data often yields important information. Most classifiers work in a closed-world setting: the classifier is trained on a known corpus and then tested on unseen examples that belong to one of the classes seen during training. Despite the usefulness of this design, there is often a need to classify unseen examples that do not belong to any of the classes on which the classifier was trained. This paper describes the open set scenario, in which examples from previously unseen classes must be handled at test time. It further examines an enhanced open set classification process in which a deep neural network discovers new classes by clustering the examples identified as belonging to unknown classes, followed by retraining the classifier with the newly recognized classes. Through this process the classifier becomes an incremental learning model that continuously finds, and learns from, novel classes of data identified automatically. This paper also develops a new metric that measures multiple attributes of clustering open set data. Multiple experiments across two author attribution data sets demonstrate the creation of an incremental model that produces excellent results.
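
    The open-set-to-incremental loop can be sketched with off-the-shelf components, as below: reject test examples whose maximum predicted probability falls under a threshold, cluster the rejected examples into candidate new classes, and retrain on the enlarged label set. The classifier, rejection threshold and cluster count are illustrative assumptions, not the paper's architecture.

        # Hedged sketch of open set rejection, discovery by clustering, and retraining.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        THRESHOLD = 0.7       # below this maximum probability an example is treated as unknown
        N_NEW_CLUSTERS = 2    # assumed number of novel classes to discover

        def open_set_incremental(X_train, y_train, X_stream):
            clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

            # Open set step: accept confident predictions, reject the rest as unknown.
            proba = clf.predict_proba(X_stream)
            rejected = X_stream[proba.max(axis=1) < THRESHOLD]

            # Discovery step: cluster the rejected examples into candidate new classes.
            next_label = y_train.max() + 1
            new_labels = KMeans(n_clusters=N_NEW_CLUSTERS, n_init=10,
                                random_state=0).fit_predict(rejected) + next_label

            # Incremental step: retrain on the union of old data and newly labelled clusters.
            X_all = np.vstack([X_train, rejected])
            y_all = np.concatenate([y_train, new_labels])
            return LogisticRegression(max_iter=1000).fit(X_all, y_all)

        # Toy usage: two known classes plus a stream containing an unseen, ambiguous cluster.
        rng = np.random.default_rng(3)
        X_train = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])
        y_train = np.array([0] * 50 + [1] * 50)
        X_stream = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 0.3, (20, 5))])
        print(open_set_incremental(X_train, y_train, X_stream).classes_)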

    Liquid chromatographic approach for the discrimination and classification of cava samples based on the phenolic composition using chemometric methods

    Phenolic profiles obtained by liquid chromatography with UV/vis detection were exploited here to classify cava samples from the protected designation of origin Cava. Wine samples belonging to various classes that differed in grape varieties, blends and fermentation processes were studied using profiling and fingerprinting approaches. Concentrations of relevant phenolic acids and chromatograms registered at 310 nm were first examined by Principal Component Analysis (PCA) to extract information on the cava classes. It was found that various hydroxybenzoic and hydroxycinnamic acids, such as gallic, gentisic, caffeic or caftaric acids, were up- or down-expressed depending on the wine varieties. Additionally, Partial Least Squares Discriminant Analysis (PLS-DA) was applied to classify the cava samples according to varietal origins and blends. The classification models were established using well-known wines as the calibration standards, and the models were subsequently applied to assign unknown samples to their corresponding classes. Excellent classification rates were obtained, demonstrating the potential of the proposed approach for characterization and authentication purposes.
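
    A compact sketch of the chemometric workflow, assuming scikit-learn: exploratory PCA on the chromatographic profiles, then PLS-DA implemented as PLS regression onto one-hot class labels with prediction by arg-max. The toy "chromatograms", component counts and class structure are illustrative assumptions.

        # Hedged sketch of PCA exploration followed by PLS-DA classification.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        # Toy "chromatograms": 30 samples x 200 retention-time points, three wine classes.
        X = rng.normal(size=(30, 200))
        y = np.repeat([0, 1, 2], 10)
        X[y == 1, 40:60] += 2.0     # fake class-specific phenolic peaks
        X[y == 2, 120:140] += 2.0

        # Exploratory step: PCA scores for a first look at class structure.
        scores = PCA(n_components=2).fit_transform(X)
        print("PCA score matrix:", scores.shape)

        # PLS-DA: regress one-hot class membership on the profiles, predict by arg-max.
        Y = np.eye(3)[y]
        plsda = PLSRegression(n_components=2).fit(X, Y)
        y_pred = plsda.predict(X).argmax(axis=1)
        print("training classification rate:", (y_pred == y).mean())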

    von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification

    A number of pattern recognition tasks, e.g., face verification, can be boiled down to classification or clustering of unit-length directional feature vectors whose distance can be computed simply from their angle. In this paper, we propose the von Mises-Fisher (vMF) mixture model as the theoretical foundation for effective deep learning of such directional features, and derive a novel vMF Mixture Loss and its corresponding vMF deep features. The proposed vMF feature learning achieves the characteristics of discriminative learning, i.e., compacting instances of the same class while increasing the distance between instances from different classes. Moreover, it subsumes a number of popular loss functions as well as an effective method in deep learning, namely normalization. We conduct extensive experiments on face verification using four challenging face datasets: LFW, YouTube Faces, CACD and IJB-A. Results show the effectiveness and excellent generalization ability of the proposed approach, as it achieves state-of-the-art results on the LFW, YouTube Faces and CACD datasets and competitive results on the IJB-A dataset.
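
    A hedged PyTorch sketch of the central ingredient, a vMF-style classification loss, is given below: features and learned class directions are L2-normalised so that each logit is a concentration-scaled cosine similarity. The concentration value, dimensions and the simplified loss form are assumptions; the paper's full vMF Mixture Loss contains further detail.

        # vMF-style loss: concentration-scaled cosine similarity between unit vectors.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class VMFStyleLoss(nn.Module):
            def __init__(self, feature_dim, num_classes, kappa=16.0):
                super().__init__()
                self.kappa = kappa                                     # vMF concentration
                self.centres = nn.Parameter(torch.randn(num_classes, feature_dim))

            def forward(self, features, labels):
                z = F.normalize(features, dim=1)       # unit-length directional features
                mu = F.normalize(self.centres, dim=1)  # unit-length class mean directions
                logits = self.kappa * (z @ mu.t())     # scaled cosine similarities
                return F.cross_entropy(logits, labels)

        # Toy usage with random 128-d "face embeddings" for 10 identities.
        loss_fn = VMFStyleLoss(feature_dim=128, num_classes=10)
        feats = torch.randn(32, 128, requires_grad=True)
        labels = torch.randint(0, 10, (32,))
        loss = loss_fn(feats, labels)
        loss.backward()
        print(float(loss))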

    ROI Regularization for Semi-supervised and Supervised Learning

    We propose ROI regularization (ROIreg) as a semi-supervised learning method for image classification. ROIreg focuses on the maximum probability of the posterior probability distribution g(x) obtained when an unlabeled data sample x is input into a convolutional neural network (CNN). ROIreg divides the pixel set of x into multiple blocks and evaluates, for each block, its contribution to the maximum probability. A masked data sample x_ROI is generated by replacing blocks with relatively small contributions with random images. ROIreg then trains the CNN so that g(x_ROI) changes as little as possible from g(x); it can therefore be seen as refining the classification ability the CNN already has. By contrast, Virtual Adversarial Training (VAT), an excellent semi-supervised learning method, generates a data sample x_VAT by perturbing x in the direction in which g(x) changes most, and trains the CNN so that g(x_VAT) changes as little as possible from g(x); VAT can therefore be seen as a method that addresses the CNN's weaknesses. Thus, ROIreg and VAT have complementary training effects. In fact, the combination of VAT and ROIreg improves on the results obtained when using VAT or ROIreg alone. This combination also improves the state of the art on "SVHN with and without data augmentation" and "CIFAR-10 without data augmentation". We also propose ROI augmentation (ROIaug) as a way to apply ROIreg to data augmentation in supervised learning; the evaluation function used there, however, differs from the standard cross-entropy. ROIaug improves the performance of supervised learning for both SVHN and CIFAR-10. Finally, we investigate the performance degradation of VAT and VAT+ROIreg when data samples not belonging to the classification classes are included in the unlabeled data.
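
    The ROIreg procedure can be sketched roughly as follows in PyTorch: estimate each block's contribution to the maximum class probability (here via input gradients as a proxy), replace the least-contributing blocks with random noise, and penalise the divergence between g(x) and g(x_ROI). Block size, mask ratio and the gradient-based contribution measure are assumptions rather than the paper's exact procedure.

        # Hedged sketch of an ROIreg-style consistency loss on unlabelled images.
        import torch
        import torch.nn.functional as F

        def roireg_loss(model, x, block=8, mask_ratio=0.25):
            # Contribution of each pixel to the maximum probability, via input gradients.
            x = x.clone().requires_grad_(True)
            probs = F.softmax(model(x), dim=1)
            probs.max(dim=1).values.sum().backward()
            saliency = x.grad.abs().sum(dim=1, keepdim=True)      # (N, 1, H, W)

            # Aggregate the saliency per block and find the least-contributing blocks.
            block_scores = F.avg_pool2d(saliency, block)          # (N, 1, H/b, W/b)
            n, _, bh, bw = block_scores.shape
            flat = block_scores.view(n, -1)
            k = max(1, int(mask_ratio * flat.shape[1]))
            _, low_idx = flat.topk(k, dim=1, largest=False)
            mask = torch.zeros_like(flat).scatter_(1, low_idx, 1.0).view(n, 1, bh, bw)
            mask = F.interpolate(mask, scale_factor=block, mode="nearest")

            # Replace the masked blocks with random images and ask for an unchanged prediction.
            x_roi = x.detach() * (1 - mask) + torch.rand_like(x) * mask
            p = F.softmax(model(x.detach()), dim=1).detach()      # g(x), treated as the target
            log_q = F.log_softmax(model(x_roi), dim=1)            # g(x_ROI)
            return F.kl_div(log_q, p, reduction="batchmean")

        # Toy usage with a tiny CNN on random 32x32 "unlabelled" images.
        model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
                                    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                                    torch.nn.Linear(8, 10))
        print(float(roireg_loss(model, torch.rand(4, 3, 32, 32))))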

    Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification

    Deep learning has shown outstanding performance in several applications including image classification. However, deep classifiers are known to be highly vulnerable to adversarial attacks, in that a minor perturbation of the input can easily lead to an error. Providing robustness to adversarial attacks is a very challenging task, especially in problems involving a large number of classes, as it typically comes at the expense of a decrease in accuracy. In this work, we propose the Gaussian class-conditional simplex (GCCS) loss: a novel approach for training deep robust multiclass classifiers that provides adversarial robustness while at the same time achieving or even surpassing the classification accuracy of state-of-the-art methods. Unlike other frameworks, the proposed method learns a mapping of the input classes onto target distributions in a latent space such that the classes are linearly separable. Instead of maximizing the likelihood of target labels for individual samples, our objective function pushes the network to produce feature distributions yielding high inter-class separation. The mean values of the distributions are centered on the vertices of a simplex such that each class is at the same distance from every other class. We show that the regularization of the latent space based on our approach yields excellent classification accuracy and inherently provides robustness to multiple adversarial attacks, both targeted and untargeted, outperforming state-of-the-art approaches over challenging datasets.
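
    The simplex-target idea can be illustrated with a short PyTorch sketch: fix equidistant class centres at the vertices of a regular simplex and pull each latent feature toward its own centre through a softmax over negative squared distances, which encourages well-separated class-conditional clusters. The centre construction, scale and loss form are simplifying assumptions, not the exact GCCS loss.

        # Hedged sketch of a simplex-centred latent-space loss.
        import torch
        import torch.nn.functional as F

        def simplex_vertices(num_classes, scale=4.0):
            """K equidistant points: scaled, mean-centred standard basis vectors of R^K."""
            v = torch.eye(num_classes)
            return scale * (v - v.mean(dim=0, keepdim=True))

        def gccs_style_loss(latent, labels, vertices):
            # Squared Euclidean distance from each latent vector to every class centre.
            d2 = torch.cdist(latent, vertices) ** 2
            # Treating -d2 as logits pulls features to their own centre and apart from the rest.
            return F.cross_entropy(-d2, labels)

        # Toy usage: a linear "encoder" mapping 20-d inputs into a K-dimensional latent space.
        K = 5
        vertices = simplex_vertices(K)
        encoder = torch.nn.Linear(20, K)
        x = torch.randn(64, 20)
        labels = torch.randint(0, K, (64,))
        loss = gccs_style_loss(encoder(x), labels, vertices)
        loss.backward()
        print(float(loss))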