14,292 research outputs found
Robustness Verification of Support Vector Machines
We study the problem of formally verifying the robustness to adversarial
examples of support vector machines (SVMs), a major machine learning model for
classification and regression tasks. Following a recent stream of works on
formal robustness verification of (deep) neural networks, our approach relies
on a sound abstract version of a given SVM classifier to be used for checking
its robustness. This methodology is parametric on a given numerical abstraction
of real values and, analogously to the case of neural networks, needs neither
abstract least upper bounds nor widening operators on this abstraction. The
standard interval domain provides a simple instantiation of our abstraction
technique, which we enhance with the domain of reduced affine forms, an
efficient abstraction of the zonotope abstract domain. This robustness
verification technique has been fully implemented and experimentally evaluated
on SVMs based on linear and nonlinear (polynomial and radial basis function)
kernels, which have been trained on the popular MNIST dataset of images and on
the recent and more challenging Fashion-MNIST dataset. The experimental results
of our prototype SVM robustness verifier appear to be encouraging: the
automated verification is fast and scalable, and it proves robustness for a
high percentage of the MNIST test set, in particular when compared with the
analogous provable robustness of neural networks.
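The interval-domain instantiation described above can be sketched for the simplest case, a linear SVM: propagate per-feature intervals of the L-infinity ball exactly through the decision function w·x + b and check that the output bounds keep the sign of the prediction. This is a minimal illustration under that assumption, not the authors' implementation (which also handles polynomial and RBF kernels and reduced affine forms).

```python
import numpy as np

def linear_svm_interval_robust(w, b, x, eps):
    """Check robustness of a linear SVM on the L-inf ball of radius eps
    around x, by exact interval propagation through w.x + b."""
    lo, hi = x - eps, x + eps
    # Pick, per weight, the interval endpoint minimizing/maximizing w_i * x_i.
    lower = b + np.sum(np.where(w >= 0, w * lo, w * hi))
    upper = b + np.sum(np.where(w >= 0, w * hi, w * lo))
    pred = np.sign(w @ x + b)
    # Robust iff every point in the ball keeps the sign of the prediction.
    return bool(lower > 0) if pred > 0 else bool(upper < 0)
```

For a linear classifier the interval bounds are exact; for nonlinear kernels the abstraction over-approximates, which is where the reduced affine forms pay off.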
Classification of Radiology Reports Using Neural Attention Models
The electronic health record (EHR) contains a large amount of
multi-dimensional and unstructured clinical data of significant operational and
research value. Distinguished from previous studies, our approach embraces a
double-annotated dataset and moves away from opaque "black-box" models toward
comprehensible deep learning models. In this paper, we present a novel neural
attention mechanism that not only classifies clinically important findings but
also highlights the terms that support each classification.
Specifically, convolutional neural networks (CNN) with attention analysis are
used to classify radiology head computed tomography reports based on five
categories that radiologists would account for in assessing acute and
communicable findings in daily practice. The experiments show that our CNN
attention models outperform non-neural models, especially when trained on a
larger dataset. Our attention analysis demonstrates the intuition behind the
classifier's decision by generating a heatmap that highlights attended terms
used by the CNN model; this is valuable when potential downstream medical
decisions are to be performed by human experts or the classifier information is
to be used in cohort construction such as for epidemiological studies.
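The heatmap step can be illustrated with a minimal, generic sketch (the paper's actual model is a CNN with an attention layer; the per-token relevance scores below are assumed inputs, not that model's real outputs): normalize the scores with a softmax and surface the most attended terms, which is the information a heatmap visualizes.

```python
import numpy as np

def attention_heatmap(token_scores, tokens, top_k=3):
    """Softmax-normalize per-token relevance scores and return the
    top-k attended terms with their attention weights."""
    e = np.exp(token_scores - token_scores.max())  # numerically stable softmax
    weights = e / e.sum()
    order = np.argsort(weights)[::-1][:top_k]      # highest weight first
    return [(tokens[i], float(weights[i])) for i in order]
```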
Sparse Linear Models applied to Power Quality Disturbance Classification
Power quality (PQ) analysis describes the non-pure electric signals that are
usually present in electric power systems. The automatic recognition of PQ
disturbances can be seen as a pattern recognition problem, in which different
types of waveform distortion are differentiated based on their features.
Similar to other quasi-stationary signals, PQ disturbances can be decomposed
into time-frequency dependent components by using time-frequency or time-scale
transforms, also known as dictionaries. These dictionaries are used in the
feature extraction step in pattern recognition systems. Short-time Fourier,
Wavelets and Stockwell transforms are some of the most common dictionaries used
in the PQ community, aiming to achieve a better signal representation. To the
best of our knowledge, previous works about PQ disturbance classification have
been restricted to the use of one among several available dictionaries. Taking
advantage of the theory behind sparse linear models (SLM), we introduce a
sparse method for PQ representation, starting from overcomplete dictionaries.
In particular, we apply Group Lasso. We employ different types of
time-frequency (or time-scale) dictionaries to characterize the PQ
disturbances, and evaluate their performance under different pattern
recognition algorithms. We show that the SLM reduces PQ classification
complexity by promoting sparse basis selection while improving classification
accuracy.
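The mechanism by which Group Lasso deselects whole dictionaries can be sketched with its proximal step, block soft-thresholding: any group of coefficients (e.g. all atoms of one transform) whose joint norm falls below the regularization threshold is zeroed out entirely. This is a generic illustration of the penalty, not the authors' PQ pipeline.

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the Group Lasso penalty: scale each group
    of coefficients toward zero; groups with norm <= lam are zeroed,
    i.e. that dictionary is dropped from the representation."""
    out = np.zeros_like(beta, dtype=float)
    for g in groups:
        v = beta[g]
        norm = np.linalg.norm(v)
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v  # shrink the surviving group
    return out
```

Groups whose coefficients survive are scaled down jointly, so sparsity acts at the level of whole dictionaries rather than individual atoms.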
Learning sound representations using trainable COPE feature extractors
Sound analysis research has mainly been focused on speech and music
processing. The deployed methodologies are not suitable for analysis of sounds
with varying background noise, in many cases with very low signal-to-noise
ratio (SNR). In this paper, we present a method for the detection of patterns
of interest in audio signals. We propose novel trainable feature extractors,
which we call COPE (Combination of Peaks of Energy). The structure of a COPE
feature extractor is determined using a single prototype sound pattern in an
automatic configuration process, which is a type of representation learning. We
construct a set of COPE feature extractors, configured on a number of training
patterns. Then we take their responses to build feature vectors that we use in
combination with a classifier to detect and classify patterns of interest in
audio signals. We carried out experiments on four public data sets: MIVIA audio
events, MIVIA road events, ESC-10 and TU Dortmund data sets. The results that
we achieved (recognition rate equal to 91.71% on the MIVIA audio events, 94% on
the MIVIA road events, 81.25% on the ESC-10 and 94.27% on the TU Dortmund)
demonstrate the effectiveness of the proposed method and are higher than the
ones obtained by other existing approaches. The COPE feature extractors have
high robustness to variations of SNR. Real-time performance is achieved even
when a large number of feature values is computed.
Comment: Accepted for publication in Pattern Recognition
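The "peaks of energy" a COPE extractor combines can be sketched in a simplified form: compute a frame-wise energy envelope of the signal and keep its prominent local maxima. The framing parameters and threshold below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def energy_peaks(signal, frame_len=256, hop=128, rel_thresh=0.5):
    """Frame-wise energy envelope and its prominent local maxima, a
    simplified stand-in for the energy peaks a COPE extractor combines."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    energy = np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                       for i in range(n)])
    # Local maxima above a fraction of the global maximum energy.
    peaks = [i for i in range(1, n - 1)
             if energy[i] > energy[i - 1] and energy[i] >= energy[i + 1]
             and energy[i] >= rel_thresh * energy.max()]
    return energy, peaks
```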
Feature extraction based on bio-inspired model for robust emotion recognition
Emotional state identification is an important issue in achieving more natural speech interactive systems. Ideally, these systems should also be able to work in real environments, in which some kind of noise is generally present. Several bio-inspired representations have been applied to artificial systems for speech processing under noise conditions. In this work, an auditory signal representation is used to obtain a novel bio-inspired set of features for emotional speech signals. These characteristics, together with other spectral and prosodic features, are used for emotion recognition under noise conditions. Neural models were trained as classifiers and the results were compared to the well-known mel-frequency cepstral coefficients. Results show that using the proposed representations it is possible to significantly improve the robustness of an emotion recognition system. The results were also validated in a speaker-independent scheme and with two emotional speech corpora.
Albornoz, Enrique Marcelo; Milone, Diego Humberto; Rufiner, Hugo Leonardo (CONICET, Universidad Nacional del Litoral, Argentina)