CentralNet: a Multilayer Approach for Multimodal Fusion
This paper proposes a novel multimodal fusion approach, aiming to produce the
best possible decisions by integrating information from multiple media.
While most of the past multimodal approaches either work by projecting the
features of different modalities into the same space, or by coordinating the
representations of each modality through the use of constraints, our approach
borrows from both visions. More specifically, assuming each modality can be
processed by a separate deep convolutional network able to take decisions
independently, we introduce a central network linking the modality-specific
networks. This central network not only provides a common feature embedding
but also regularizes the modality-specific networks through
the use of multi-task learning. The proposed approach is validated on 4
different computer vision tasks on which it consistently improves the accuracy
of existing multimodal fusion approaches.
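The central-fusion idea can be illustrated with a minimal NumPy forward-pass sketch. The layer sizes, the two modalities, the ReLU activations, and the fixed fusion weights below are illustrative assumptions, not the paper's exact architecture; in CentralNet the weights, including the fusion coefficients, are learned jointly with multi-task losses on both the modality-specific and central branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Two modalities, each with its own small network. The weights are random
# placeholders; in the paper's formulation they would be learned jointly.
x_audio = rng.normal(size=(8, 16))   # batch of 8, 16-dim audio features
x_video = rng.normal(size=(8, 32))   # batch of 8, 32-dim video features

W_audio = rng.normal(size=(16, 10)) * 0.1
W_video = rng.normal(size=(32, 10)) * 0.1
W_central = rng.normal(size=(10, 10)) * 0.1

h_audio = relu(x_audio @ W_audio)    # modality-specific hidden states
h_video = relu(x_video @ W_video)

# Central network: a weighted sum of the modality hidden states feeds a
# shared layer (the fusion weights are fixed here; learned in the paper).
alpha_audio, alpha_video = 0.5, 0.5
h_central = relu((alpha_audio * h_audio + alpha_video * h_video) @ W_central)

print(h_central.shape)  # (8, 10)
```

Each modality-specific branch can still produce its own decision from `h_audio` or `h_video`, which is what allows the multi-task regularization described above.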
Multimodal Subspace Support Vector Data Description
In this paper, we propose a novel method for projecting data from multiple
modalities to a new subspace optimized for one-class classification. The
proposed method iteratively transforms the data from the original feature space
of each modality to a new common feature space along with finding a joint
compact description of data coming from all the modalities. For data in each
modality, we define a separate transformation to map the data from the
corresponding feature space to the new optimized subspace by exploiting the
available information from the class of interest only. We also propose
different regularization strategies for the proposed method and provide both
linear and non-linear formulations. The proposed Multimodal Subspace Support
Vector Data Description outperforms all the competing methods using data from a
single modality or fusing data from all modalities in four out of five
datasets.
Comment: 26 pages manuscript (6 tables, 2 figures), 24 pages supplementary
material (27 tables, 10 figures). The manuscript and supplementary material
are combined as a single .pdf file (50 pages).
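The alternating optimization described in the abstract, transforming each modality into a common subspace while fitting a compact description of the pooled data, can be sketched in NumPy. This is a simplified linear sketch under stated assumptions: the centre is taken as the pooled mean rather than solved via the SVDD dual, the update is plain gradient descent on the summed squared distances, and the regularization strategies the paper proposes (which prevent the trivial collapse of the projections) are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-class training data from two modalities of different dimension.
X1 = rng.normal(loc=1.0, size=(50, 6))   # modality 1: 6-dimensional
X2 = rng.normal(loc=1.0, size=(50, 4))   # modality 2: 4-dimensional
d = 2                                    # dimension of the shared subspace

# One projection matrix per modality (random initialisation).
W1 = rng.normal(size=(6, d)) * 0.1
W2 = rng.normal(size=(4, d)) * 0.1

def mean_dist(W1, W2):
    Z = np.vstack([X1 @ W1, X2 @ W2])
    return np.linalg.norm(Z - Z.mean(axis=0), axis=1).mean()

dist_before = mean_dist(W1, W2)

lr = 0.01
for _ in range(100):
    Z = np.vstack([X1 @ W1, X2 @ W2])   # pooled projections of both modalities
    a = Z.mean(axis=0)                  # centre of the joint description
    # Gradient of the summed squared distances to the centre w.r.t. each W_m;
    # without regularization this drives the projections toward collapse,
    # which the paper's regularization strategies are designed to prevent.
    W1 -= lr * X1.T @ (X1 @ W1 - a) / len(X1)
    W2 -= lr * X2.T @ (X2 @ W2 - a) / len(X2)

dist_after = mean_dist(W1, W2)
```

After training, a test sample from either modality is projected with its own `W_m` and scored by its distance to the centre `a`, so the joint description is shared while the transformations remain modality-specific.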
Subspace Support Vector Data Description and Extensions
Machine learning deals with discovering the knowledge that governs a process: it provides techniques that enhance the capabilities of a system through the use of data. Typical machine learning techniques identify or predict different patterns in the data. In classification tasks, a machine learning model is trained on some training data to identify the unknown function that maps the input data to the output labels. The classification task becomes challenging if the data from some categories are either unavailable or so diverse that they cannot be modelled statistically. For example, when training a model for anomaly detection, it is usually difficult to collect anomalous data, while normal data is available in abundance. In such cases, One-Class Classification (OCC) techniques can be used, where the model is trained using data from one class only.
OCC algorithms are practical in situations where it is vital to identify one of the categories, but examples from that specific category are scarce. Numerous OCC techniques proposed in the literature model the data in the given feature space; however, such data can be high-dimensional or may not provide discriminative information for classification. To avoid the curse of dimensionality, standard dimensionality reduction techniques are commonly used as a preprocessing step in many machine learning algorithms. Principal Component Analysis (PCA) is a widely used example that transforms data into a subspace suitable for the task at hand while maintaining the meaningful features of the given dataset.
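As a concrete illustration of the kind of subspace projection discussed above, here is a minimal NumPy sketch of PCA via the covariance eigendecomposition. The dataset, its dimensions, and the choice of two retained components are illustrative assumptions; note that, unlike PCA, the methods proposed in this thesis learn the subspace jointly with the one-class data description rather than as a separate preprocessing step.

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 samples in 5 dimensions, with most variance along the first two axes.
X = rng.normal(size=(100, 5)) * np.array([5.0, 3.0, 0.5, 0.5, 0.5])

# PCA via the covariance eigendecomposition: centre, decompose, project.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions
Z = Xc @ components                      # data in the 2-D subspace

print(Z.shape)  # (100, 2)
```

The retained components capture the high-variance directions, so the 2-D representation preserves most of the dataset's structure while discarding the low-variance axes.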
This thesis provides a new paradigm that jointly optimizes a subspace and a data description for one-class classification via Support Vector Data Description (SVDD). We initiated the idea of subspace learning for one-class classification by proposing the novel Subspace Support Vector Data Description (SSVDD) method, which was further extended to Ellipsoidal Subspace Support Vector Data Description (ESSVDD). ESSVDD generalizes the hyperspherical data description of SSVDD to an ellipsoidal one and converges faster than SSVDD. When data is collected from multiple sources, it is important to train a joint model over all modalities. We therefore also proposed a multimodal approach, namely Multimodal Subspace Support Vector Data Description (MSSVDD), for transforming the data from multiple modalities into a common shared space for OCC. An important contribution of this thesis is a framework unifying the subspace learning methods for SVDD. The proposed Graph-Embedded Subspace Support Vector Data Description (GESSVDD) framework helps reveal novel insights into the previously proposed methods and allows deriving novel variants that incorporate different optimization goals.
The main focus of the thesis is on generic novel methods that can be adapted to different application domains. We experimented with standard datasets from domains such as robotics, healthcare, and economics, and achieved better performance than competing methods in most cases. We also proposed a taxa identification framework for rare benthic macroinvertebrates. The distribution of benthic macroinvertebrate taxa is typically very imbalanced: the number of training images for the rarest classes is too low to properly train deep learning-based methods, while these rarest classes can be central to biodiversity monitoring. We show that classic one-class classifiers in general, and the proposed methods in particular, can enhance the classification performance of a deep neural network on imbalanced datasets.