649 research outputs found

    Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features

    The one-class support vector machine (OC-SVM) has long been one of the most effective anomaly detection methods, extensively adopted in both research and industrial applications. The biggest issue for OC-SVM remains its limited ability to operate on large, high-dimensional datasets due to optimization complexity. These problems can be mitigated by dimensionality reduction techniques such as manifold learning or autoencoders; however, previous work often treats representation learning and anomaly prediction separately. In this paper, we propose the autoencoder-based one-class support vector machine (AE-1SVM), which brings OC-SVM, with the aid of random Fourier features to approximate the radial basis kernel, into the deep learning context by combining it with a representation learning architecture and jointly exploiting stochastic gradient descent for end-to-end training. Interestingly, this also opens up the possibility of using gradient-based attribution methods to explain the decision making in anomaly detection, which has long been challenging as a result of the implicit mapping between the input space and the kernel space. To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection. We evaluate our method on a wide range of unsupervised anomaly detection tasks, in which our end-to-end training architecture performs significantly better than previous work that uses separate training.
    Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 201
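    A minimal sketch of the random-Fourier-feature approximation of the radial basis kernel that the abstract describes; the dimensions, kernel width, and data below are illustrative placeholders, not the paper's settings.

```python
# Minimal sketch (not the authors' code): random Fourier features (RFF)
# approximating the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)),
# the building block AE-1SVM relies on to make OC-SVM trainable with SGD.
import numpy as np

rng = np.random.default_rng(0)
n, d, D, sigma = 200, 10, 2000, 1.0   # samples, input dim, RFF dim, kernel width (all illustrative)
X = rng.normal(size=(n, d))

# Sample the random projection once: W ~ N(0, 1/sigma^2), b ~ U[0, 2*pi].
W = rng.normal(scale=1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(X):
    """Map inputs to the random Fourier feature space."""
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

Z = rff(X)
K_approx = Z @ Z.T                                  # linear kernel in RFF space
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / (2.0 * sigma ** 2))    # exact RBF kernel

print("max approximation error:", np.abs(K_exact - K_approx).max())
```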

    Geometric deep learning for shape analysis: extending deep learning techniques to non-Euclidean manifolds

    The past decade of computer vision research has witnessed the re-emergence of artificial neural networks (ANN), and in particular convolutional neural network (CNN) techniques, which make it possible to learn powerful feature representations from large collections of data. Nowadays these techniques are better known under the umbrella term deep learning and have achieved breakthrough performance in a wide range of image analysis applications such as image classification, segmentation, and annotation. Nevertheless, when attempting to apply deep learning paradigms to 3D shapes one has to face fundamental differences between images and geometric objects. The main difference between images and 3D shapes is the non-Euclidean nature of the latter. This implies that basic operations, such as linear combination or convolution, that are taken for granted in the Euclidean case are not even well defined on non-Euclidean domains. This is the major obstacle that has so far precluded the successful application of deep learning methods to non-Euclidean geometric data. The goal of this thesis is to overcome this obstacle by extending deep learning techniques (including, but not limited to, CNNs) to non-Euclidean domains. We present different approaches providing such an extension and test their effectiveness in the context of shape similarity and correspondence applications. The proposed approaches are evaluated on several challenging experiments, achieving state-of-the-art results that significantly outperform other methods. This thesis makes several original contributions. First, it pioneers the generalization of CNNs to discrete manifolds. Second, it provides an alternative formulation of the spectral convolution operation in terms of the windowed Fourier transform, overcoming the drawbacks of the Fourier one. Third, it introduces a spatial-domain formulation of the convolution operation using patch operators and several ways of constructing them (geodesic, anisotropic diffusion, mixture of Gaussians). Fourth, at the moment of publication the proposed approaches achieved state-of-the-art results in different computer graphics and vision applications such as shape descriptors and correspondence.
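    A minimal sketch of the spectral convolution operation the thesis starts from and later reformulates; a toy path graph stands in here for the Laplacian of a discrete manifold, and the low-pass filter is an arbitrary illustrative choice.

```python
# Minimal sketch (toy graph as a stand-in for a discrete manifold): spectral
# convolution of a vertex signal in the eigenbasis of the graph Laplacian,
# i.e. Phi * g(Lambda) * Phi^T * x.
import numpy as np

# 4-node path graph standing in for a triangle mesh's Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A                 # combinatorial graph Laplacian

evals, Phi = np.linalg.eigh(L)            # graph Fourier basis
x = np.array([1.0, 0.0, 0.0, 0.0])        # a signal on the vertices

def spectral_conv(x, filt):
    """Filter a vertex signal in the Laplacian eigenbasis."""
    return Phi @ (filt(evals) * (Phi.T @ x))

y = spectral_conv(x, lambda lam: np.exp(-0.5 * lam))   # heat-kernel-like low-pass filter
print(y)
```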

    Multispectral image classification from axiomatic locally finite spaces-based segmentation

    Geographical object-based image analysis (GEOBIA) usually starts by defining coarse geometric space elements, i.e. image-objects, grouping nearby pixels based on (a, b)-connected graphs as neighbourhood definitions. In such an approach, however, the topological axioms needed to ensure a correct representation of connectedness relationships cannot be satisfied. Thus, the conventional definition of image-object boundaries is ambiguous, because one-dimensional contours are represented by two-dimensional pixels. In this paper, segmentation is conducted using a novel approach based on axiomatic locally finite spaces (ALFS, provided by Cartesian complexes) and their linked oriented matroids. For testing, the ALFS-based image segments were classified using the support vector machine (SVM) algorithm, with directional filter responses as an additional channel. Segmentation follows a multi-scale approach that includes multi-scale texture and spectral affinity analysis in the boundary definition. The proposed approach was evaluated against a conventional pixel representation on a small subset of the GEOBIA2016 benchmark dataset. Results show that classification accuracy increases in comparison to conventional pixel-based segmentation.
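    A minimal sketch of the classification step described above: segments represented by spectral features plus a directional-filter-response channel, classified with an SVM. The feature layout, labels, and data are hypothetical placeholders, not the paper's pipeline.

```python
# Minimal sketch (hypothetical features and labels, not the paper's pipeline):
# classifying image segments with an SVM, stacking a directional filter
# response as an additional channel next to per-segment spectral values.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_segments, n_bands = 300, 4
spectral = rng.normal(size=(n_segments, n_bands))   # placeholder mean band values per segment
directional = rng.normal(size=(n_segments, 1))      # placeholder directional filter response channel
X = np.hstack([spectral, directional])
y = rng.integers(0, 3, size=n_segments)             # placeholder class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```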

    A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection

    Background: Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With such data dimensionality and sample sizes, the risk of over-fitting and selection bias is pervasive. The development of bioinformatics methods based on unsupervised feature extraction can therefore lead to general tools applicable to several fields of predictive proteomics.
    Results: We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces, applied to high-resolution spectra derived from the analysis of serum, and then use support vector machines for classification. In particular, we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962.
    Conclusion: We improved previously known results on this problem with the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm
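    A minimal sketch of the multi-scale idea behind such a method: smoothing a spectrum in a Gaussian scale space and counting the local maxima (peaks) that survive at coarser scales. The synthetic spectrum and the scale values are illustrative only, not the paper's data or parameters.

```python
# Minimal sketch (illustrative, not the paper's pipeline): a Gaussian scale
# space over a 1-D spectrum, tracking how many local maxima (peaks) survive
# as the scale increases.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

rng = np.random.default_rng(0)
mz = np.linspace(0, 1, 2048)
spectrum = np.exp(-((mz - 0.3) ** 2) / 1e-4) + 0.5 * np.exp(-((mz - 0.7) ** 2) / 1e-4)
spectrum += 0.05 * rng.normal(size=mz.size)      # synthetic noisy spectrum

for sigma in (1, 4, 16, 64):                     # coarser and coarser scales
    smoothed = gaussian_filter1d(spectrum, sigma)
    peaks = argrelmax(smoothed)[0]
    print(f"sigma={sigma:3d}: {peaks.size} local maxima survive")
```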

    Idealized computational models for auditory receptive fields

    This paper presents a theory by which idealized models of auditory receptive fields can be derived in a principled, axiomatic manner from a set of structural properties that enable invariance of receptive field responses under natural sound transformations and ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows a new way of deriving the Gabor and Gammatone filters, as well as a novel family of generalized Gammatone filters with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields over a spectrogram, the framework leads to two canonical families of spectro-temporal receptive fields, defined in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or the combination of a time-causal generalized Gammatone filter over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can either be separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families follow uniquely from the assumptions. It is demonstrated how the presented framework allows for the computation of basic auditory features for audio processing, and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
    Comment: 55 pages, 22 figures, 3 tables
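    A minimal sketch of the standard Gammatone impulse response, one of the filter families the axiomatic framework recovers; the sampling rate and filter parameters below are illustrative, and the paper's generalized Gammatone family is not reproduced here.

```python
# Minimal sketch (standard textbook form, not the paper's generalized family):
# the impulse response of a Gammatone filter,
#     g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t).
import numpy as np

fs = 16000                        # sampling rate in Hz (illustrative)
t = np.arange(0, 0.05, 1.0 / fs)  # 50 ms of time samples
n, f, b = 4, 1000.0, 125.0        # filter order, centre frequency, bandwidth (illustrative)

g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t)
g /= np.abs(g).max()              # normalise the peak amplitude

print("impulse response length:", g.size)
```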

    Novelty detection for semantic place categorization

    Integrated Master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Verified lifting of stencil computations

    This paper demonstrates a novel combination of program synthesis and verification to lift stencil computations from low-level Fortran code to a high-level summary expressed in a predicate language. The technique is sound and mostly automated, and leverages counterexample-guided inductive synthesis (CEGIS) to find provably correct translations. Lifting existing code to a high-performance description language has a number of benefits, including maintainability and performance portability. For example, our experiments show that the lifted summaries can enable domain-specific compilers to do a better job of parallelization than an off-the-shelf compiler working on the original code, and can even support fully automatic migration to hardware accelerators such as GPUs. We have implemented verified lifting in a system called STNG and have evaluated it using microbenchmarks, mini-apps, and real-world applications. We demonstrate the benefits of verified lifting by first automatically summarizing Fortran source code into a high-level predicate language and subsequently translating the lifted summaries into Halide, with the translated code achieving median performance speedups of 4.1X, and up to 24X for non-trivial stencils, compared to the original implementation.
    United States. Department of Energy. Office of Science (Award DE-SC0008923)
    United States. Department of Energy. Office of Science (Award DE-SC0005288)
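    A minimal sketch of what lifting a stencil means in practice: a low-level element-wise loop next to the high-level whole-array summary a lifting tool would produce, with a numerical check standing in for formal verification. The stencil coefficients and function names are hypothetical, not STNG output.

```python
# Minimal sketch (illustrative of the idea, not STNG itself): a Fortran-style
# 3-point stencil loop and the high-level summary of the same computation,
# plus an equivalence check on random inputs as a stand-in for verification.
import numpy as np

def stencil_low_level(a):
    """Element-wise loop, the kind of code verified lifting starts from."""
    out = a.copy()
    for i in range(1, len(a) - 1):
        out[i] = 0.25 * a[i - 1] + 0.5 * a[i] + 0.25 * a[i + 1]
    return out

def stencil_summary(a):
    """High-level, whole-array summary of the same computation."""
    out = a.copy()
    out[1:-1] = 0.25 * a[:-2] + 0.5 * a[1:-1] + 0.25 * a[2:]
    return out

a = np.random.default_rng(0).normal(size=1000)
assert np.allclose(stencil_low_level(a), stencil_summary(a))   # behaviours agree
print("low-level loop and lifted summary agree")
```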

    Learning Neural Point Processes with Latent Graphs

    Neural point processes (NPPs) employ neural networks to capture the complicated dynamics of asynchronous event sequences. Existing NPPs feed all history events into the neural network, assuming that all event types contribute to the prediction of the target type. However, this assumption can be problematic because in reality some event types do not contribute to the predictions of another type. To correct this defect, we learn to omit, in the formulation of the NPP, those types of events that do not contribute to the prediction of a given target type. Towards this end, we simultaneously consider the tasks of (1) finding the event types that contribute to predictions of the target types and (2) learning an NPP model from event sequences. For the former, we formulate a latent graph, with event types as vertices and non-zero contributing relationships as directed edges, and propose a probabilistic graph generator from which we sample a latent graph. For the latter, the sampled graph can be readily used as a plug-in to modify an existing NPP model. Because these two tasks are nested, we propose to optimize the model parameters through bilevel programming, and develop an efficient solution based on truncated gradient back-propagation. Experimental results on both synthetic and real-world datasets show improved performance over state-of-the-art baselines. This work removes the disturbance of non-contributing event types with the aid of a validation procedure, similar to the practice of mitigating overfitting when training machine learning models.
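    A minimal sketch of the latent-graph idea described above: sample a directed graph over event types and use it to mask which history events are visible when predicting a target type. The edge probabilities, event history, and names below are hypothetical, not the paper's model.

```python
# Minimal sketch (illustrative, not the paper's model): sample a latent graph
# over event types and mask the history events allowed to influence the
# prediction of one target type.
import numpy as np

rng = np.random.default_rng(0)
n_types = 4
edge_probs = rng.uniform(size=(n_types, n_types))         # placeholder edge probabilities
graph = rng.uniform(size=edge_probs.shape) < edge_probs   # sampled directed adjacency: source -> target

# A toy event history: (timestamp, event type) pairs.
history = [(0.5, 0), (1.2, 2), (2.0, 1), (2.7, 3)]
target_type = 1

# Keep only history events whose type has an edge into the target type.
visible = [(t, k) for (t, k) in history if graph[k, target_type]]
print("events visible to the predictor of type", target_type, ":", visible)
```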