Multimodal Subspace Support Vector Data Description
In this paper, we propose a novel method for projecting data from multiple
modalities to a new subspace optimized for one-class classification. The
proposed method iteratively transforms the data from the original feature space
of each modality to a new common feature space along with finding a joint
compact description of data coming from all the modalities. For data in each
modality, we define a separate transformation to map the data from the
corresponding feature space to the new optimized subspace by exploiting the
available information from the class of interest only. We also propose
different regularization strategies for the proposed method and provide both
linear and non-linear formulations. The proposed Multimodal Subspace Support
Vector Data Description outperforms all the competing methods using data from a
single modality or fusing data from all modalities in four out of five
datasets.
Comment: 26 pages manuscript (6 tables, 2 figures), 24 pages supplementary
material (27 tables, 10 figures). The manuscript and supplementary material
are combined as a single .pdf (50 pages) file.
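The alternating "map, then describe" loop can be sketched in plain NumPy. This toy is illustrative only: the data shapes are assumed, the joint description is reduced to a shared mean, and the regularization the paper adds (which prevents the trivial collapse of the projections) is omitted. It shows only the structure of per-modality projections into one common subspace updated alongside a joint compactness objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target-class data in two modalities with different dimensionalities.
X1 = rng.normal(size=(100, 8))   # modality 1: 100 samples, 8 features
X2 = rng.normal(size=(100, 5))   # modality 2: 100 samples, 5 features
d = 3                            # dimensionality of the common subspace

# One projection matrix per modality, mapping into the shared subspace.
W1 = rng.normal(scale=0.1, size=(8, d))
W2 = rng.normal(scale=0.1, size=(5, d))

def loss_and_grads(X1, X2, W1, W2):
    Z = np.vstack([X1 @ W1, X2 @ W2])   # all modalities in the common space
    c = Z.mean(axis=0)                  # joint "description": a shared center
    diff = Z - c
    n = len(Z)
    loss = (diff ** 2).sum() / n        # mean squared distance to the center
    # Gradients w.r.t. each projection, treating the center as fixed
    # (the alternating mapping/description structure of the method).
    G = 2.0 * diff / n
    g1 = X1.T @ G[:len(X1)]
    g2 = X2.T @ G[len(X1):]
    return loss, g1, g2

lr = 0.01
losses = []
for _ in range(50):
    loss, g1, g2 = loss_and_grads(X1, X2, W1, W2)
    losses.append(loss)
    W1 -= lr * g1
    W2 -= lr * g2

print(losses[0], losses[-1])  # joint description grows more compact
```
Without the paper's regularization terms, this objective eventually drives the projections toward zero, which is exactly why the abstract emphasizes the different regularization strategies.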
Subspace Support Vector Data Description
This paper proposes a novel method for solving one-class classification
problems. The proposed approach, namely Subspace Support Vector Data
Description, maps the data to a subspace that is optimized for one-class
classification. In that feature space, the optimal hypersphere enclosing the
target class is then determined. The method iteratively optimizes the data
mapping along with data description in order to define a compact class
representation in a low-dimensional feature space. We provide both linear and
non-linear mappings for the proposed method. Experiments on 14 publicly
available datasets indicate that the proposed Subspace Support Vector Data
Description provides better performance compared to baselines and other
recently proposed one-class classification methods.
Comment: 6 pages, submitted/accepted, ICPR 201
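As a generic point of reference, the enclosing-hypersphere idea at the core of SVDD can be tried with scikit-learn's OneClassSVM, which with an RBF kernel is closely related to SVDD's minimum-enclosing-hypersphere formulation. This is not the paper's subspace method; the data and parameter values here are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# Target-class samples: a single Gaussian blob.
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# One-class SVM with RBF kernel; for Gaussian kernels this is closely
# related to SVDD's compact-hypersphere description of the target class.
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_train)

X_inliers = rng.normal(size=(50, 2))             # more target-class data
X_outliers = rng.normal(loc=6.0, size=(50, 2))   # far from the description

print((model.predict(X_inliers) == 1).mean())    # mostly accepted
print((model.predict(X_outliers) == -1).mean())  # mostly rejected
```
The subspace step the paper adds would, in effect, learn the feature space in which this hypersphere is fitted rather than taking it as given.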
Peak Criterion for Choosing Gaussian Kernel Bandwidth in Support Vector Data Description
Support Vector Data Description (SVDD) is a machine-learning technique used
for single class classification and outlier detection. SVDD formulation with
kernel function provides a flexible boundary around data. The value of kernel
function parameters affects the nature of the data boundary. For example, it is
observed that with a Gaussian kernel, as the value of kernel bandwidth is
lowered, the data boundary changes from spherical to wiggly. The spherical data
boundary leads to underfitting, and an extremely wiggly data boundary leads to
overfitting. In this paper, we propose an empirical criterion for obtaining good
values of the Gaussian kernel bandwidth parameter. This criterion provides a
smooth boundary that captures the essential geometric features of the data.
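The spherical-to-wiggly effect of the bandwidth is easy to reproduce with a generic RBF one-class SVM (a close relative of SVDD). Note that scikit-learn parameterizes the Gaussian kernel by `gamma`, which grows as the bandwidth shrinks; the specific values below are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # target-class data

# Large bandwidth (small gamma): near-spherical, smooth boundary.
smooth = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.1).fit(X)

# Small bandwidth (large gamma): wiggly boundary hugging individual points.
wiggly = OneClassSVM(kernel="rbf", gamma=50.0, nu=0.1).fit(X)

# A wigglier (overfitted) boundary needs far more support vectors.
print(len(smooth.support_), len(wiggly.support_))
```
The support-vector count is one crude proxy for boundary complexity; the paper's peak criterion instead selects the bandwidth directly from the data.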
Newton Method-based Subspace Support Vector Data Description
In this paper, we present an adaptation of Newton's method for the
optimization of Subspace Support Vector Data Description (S-SVDD). The
objective of S-SVDD is to map the original data to a subspace optimized for
one-class classification, and the iterative optimization process of data
mapping and description in S-SVDD relies on gradient descent. However, gradient
descent only utilizes first-order information, which may lead to suboptimal
results. To address this limitation, we leverage Newton's method to enhance
data mapping and data description for an improved optimization of subspace
learning-based one-class classification. By incorporating second-order
(curvature) information, Newton's method offers a more efficient strategy for
subspace learning in one-class classification than gradient-based
optimization. The paper discusses the limitations of gradient descent and the
advantages of using Newton's method in subspace learning for one-class
classification tasks. We provide both linear and nonlinear formulations of
Newton's method-based optimization for S-SVDD. In our experiments, we explored
both the minimization and maximization strategies of the objective. The results
demonstrate that the proposed optimization strategy outperforms the
gradient-based S-SVDD in most cases.
Comment: 8 pages, 2 figures, 2 tables, 1 Algorithm. Accepted at IEEE Symposium
Series on Computational Intelligence 202
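The first-order versus second-order contrast can be seen on a toy quadratic (illustrative only, not the S-SVDD objective): gradient descent crawls along low-curvature directions, while a Newton step rescales the gradient by the inverse Hessian and lands on the minimizer of a quadratic in one step:

```python
import numpy as np

# A simple ill-conditioned quadratic: f(w) = 0.5 * w^T A w - b^T w.
A = np.diag([1.0, 100.0])       # Hessian with condition number 100
b = np.array([1.0, 1.0])
w_star = np.linalg.solve(A, b)  # the exact minimizer

def grad(w):
    return A @ w - b            # first-order information only

# Gradient descent: the step size is capped by the largest curvature,
# so progress along the flat direction is slow.
w_gd = np.zeros(2)
for _ in range(100):
    w_gd -= 0.01 * grad(w_gd)   # must stay below 2/100 to converge

# Newton's method: one Hessian-rescaled step reaches the minimizer.
w_newton = np.zeros(2)
w_newton -= np.linalg.solve(A, grad(w_newton))

print(np.linalg.norm(w_gd - w_star), np.linalg.norm(w_newton - w_star))
```
The S-SVDD objective is of course not quadratic, but the same curvature rescaling is what the proposed optimization exploits.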
Quantum support vector data description for anomaly detection
Anomaly detection is a critical problem in data analysis and pattern
recognition, finding applications in various domains. We introduce quantum
support vector data description (QSVDD), an unsupervised learning algorithm
designed for anomaly detection. QSVDD utilizes a shallow-depth quantum circuit
to learn a minimum-volume hypersphere that tightly encloses normal data,
tailored for the constraints of noisy intermediate-scale quantum (NISQ)
computing. Simulation results on the MNIST and Fashion MNIST image datasets
demonstrate that QSVDD outperforms both quantum autoencoder and deep
learning-based approaches under similar training conditions. Notably, QSVDD
offers the advantage of training an extremely small number of model parameters,
which grows logarithmically with the number of input qubits. This enables
efficient learning with a simple training landscape, presenting a compact
quantum machine learning model with strong performance for anomaly detection.
Comment: 14 pages, 5 figures
Fault detection in operating helicopter drive train components based on support vector data description
The objective of the paper is to develop a vibration-based automated procedure for the early detection of
mechanical degradation of helicopter drive train components using Health and Usage Monitoring Systems (HUMS) data. An anomaly-detection method is developed to quantify the degree of deviation of a component's mechanical state from its nominal condition. This method is based on an Anomaly Score (AS) formed by combining a set of statistical features correlated with specific damages, also known as Condition Indicators (CI); operational variability is thus implicitly included in the model through the CI correlation. The problem of fault detection is then recast as a one-class classification problem in the space spanned by a set of CI, with the aim of globally differentiating between normal and anomalous observations, related to healthy and supposedly faulty components, respectively. The procedure relies on an efficient one-class classification method that requires no assumption about the data distribution. The core of this approach is the Support Vector Data Description (SVDD), which allows an efficient data description without the need for a significant amount of statistical data. Several analyses have been carried out to validate the proposed procedure, using flight vibration data collected from an in-service H135 (formerly EC135) helicopter, for which micro-pitting damage on a gear was detected by HUMS and assessed through visual inspection. The capability of the proposed approach to provide a better trade-off between false alarm rates and missed detection rates, with respect to individual CI and to the AS obtained assuming jointly Gaussian-distributed CI, has also been analysed.
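A minimal sketch of an SVDD-style anomaly score over Condition-Indicator vectors, using scikit-learn's one-class SVM as a stand-in. The feature values, kernel settings, and the "degraded" drift below are invented for illustration; the paper's CI come from HUMS vibration data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)

# Synthetic "Condition Indicator" vectors for a healthy component
# (stand-ins only; the paper derives CI from vibration signals).
healthy_ci = rng.normal(loc=[1.0, 0.5, 2.0], scale=0.1, size=(300, 3))

# Describe the healthy region with an SVDD-like one-class model.
model = OneClassSVM(kernel="rbf", gamma=2.0, nu=0.02).fit(healthy_ci)

# Anomaly score: negated signed distance to the learned boundary,
# so larger values mean "more anomalous".
def anomaly_score(x):
    return -model.decision_function(np.atleast_2d(x))

new_healthy = np.array([1.02, 0.48, 2.05])
degraded = np.array([1.6, 1.2, 3.0])  # CI drift from incipient damage

print(anomaly_score(new_healthy), anomaly_score(degraded))
```
In an operational setting the score would be tracked over flights and thresholded to trade off false alarms against missed detections, which is exactly the comparison the paper carries out against individual CI and the jointly Gaussian AS.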