Multi-View Class Incremental Learning
Multi-view learning (MVL) has achieved great success in integrating information
from multiple perspectives of a dataset to improve downstream task performance.
To make MVL methods more practical in an open-ended environment, this paper
investigates a novel paradigm called multi-view class incremental learning
(MVCIL), where a single model incrementally classifies new classes from a
continual stream of views, requiring no access to earlier views of data.
However, MVCIL is challenged by the catastrophic forgetting of old information
and the interference with learning new concepts. To address this, we first
develop a randomization-based representation learning technique for feature
extraction that keeps each view in its own view-optimal working state while
the multiple views belonging to a class are presented sequentially. We then
integrate the extracted features one by one in the orthogonality fusion
subspace spanned by those features. Finally, we introduce selective weight
consolidation for learning-without-forgetting decision-making when
encountering new classes.
Extensive experiments on synthetic and real-world datasets validate the
effectiveness of our approach.
Comment: 34 pages, 4 figures. Under review.
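The abstract describes the fusion step only at a high level. As a purely illustrative sketch (not the authors' code), the following NumPy snippet shows one way to integrate sequentially presented views into an orthogonal fused subspace, assuming all views have already been embedded in a shared d-dimensional feature space; the function name and interface are hypothetical.

```python
import numpy as np

def orthogonal_fusion(view_features, tol=1e-10):
    # view_features: list of (n_samples, d) arrays, one per sequentially
    # presented view, all assumed to live in a shared d-dimensional space.
    # Returns a (d, k) orthonormal basis of the fused subspace.
    basis = None
    for F in view_features:
        directions = F.T                      # (d, n) feature directions of this view
        if basis is None:
            q, r = np.linalg.qr(directions)   # reduced QR of the first view
            keep = np.abs(np.diag(r)) > tol
            basis = q[:, keep]
            continue
        # Remove components already covered by earlier views, then
        # orthonormalise the residual and append it to the basis.
        residual = directions - basis @ (basis.T @ directions)
        q, r = np.linalg.qr(residual)
        keep = np.abs(np.diag(r)) > tol       # drop numerically dependent directions
        basis = np.hstack([basis, q[:, keep]])
    return basis
```

Each new view contributes only the directions not already covered by earlier views, which is the spirit of integrating views one by one in an orthogonality fusion subspace; the paper's actual procedure additionally relies on randomization-based feature extractors and selective weight consolidation, which this sketch omits.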
Multimodal Subspace Support Vector Data Description
In this paper, we propose a novel method for projecting data from multiple
modalities to a new subspace optimized for one-class classification. The
proposed method iteratively transforms the data from the original feature space
of each modality to a new common feature space while simultaneously finding a
joint compact description of the data coming from all the modalities. For data in each
modality, we define a separate transformation to map the data from the
corresponding feature space to the new optimized subspace by exploiting the
available information from the class of interest only. We also propose
different regularization strategies for the proposed method and provide both
linear and non-linear formulations. The proposed Multimodal Subspace Support
Vector Data Description outperforms all competing methods, whether they use
data from a single modality or fuse data from all modalities, on four out of
five datasets.
Comment: 26-page manuscript (6 tables, 2 figures), 24-page supplementary
material (27 tables, 10 figures). The manuscript and supplementary material
are combined as a single .pdf file (50 pages).
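As a rough, hypothetical illustration of the idea of jointly learning per-modality projections into a common subspace together with a compact description of the target class (not the proposed MS-SVDD algorithm itself), the NumPy sketch below applies simple gradient updates that pull the projected samples of every modality toward a shared centre; all names and hyperparameters are made up for the example.

```python
import numpy as np

def multimodal_subspace_sketch(modalities, d_out=2, lr=1e-3, n_iter=200, reg=1e-2, seed=0):
    # modalities: list of (n_samples, d_m) arrays from the target class only.
    # Learns one linear projection per modality into a shared d_out-dim
    # subspace and shrinks distances to a common centre.
    rng = np.random.default_rng(seed)
    Ws = [rng.standard_normal((X.shape[1], d_out)) * 0.01 for X in modalities]
    c = np.zeros(d_out)
    for _ in range(n_iter):
        # Project every modality and recompute the joint centre.
        Z = [X @ W for X, W in zip(modalities, Ws)]
        c = np.mean(np.vstack(Z), axis=0)
        # Gradient step per modality: pull projections toward the centre,
        # with a Frobenius-norm regulariser on the projection matrix.
        for m, (X, W) in enumerate(zip(modalities, Ws)):
            diff = X @ W - c                         # (n, d_out) residuals
            grad = 2 * X.T @ diff / len(X) + 2 * reg * W
            Ws[m] = W - lr * grad
    return Ws, c
```

The actual method alternates between updating the data description and the per-modality transformations, offers several regularization strategies, and has both linear and non-linear (kernel) formulations; the plain Frobenius-norm penalty and fixed learning rate here are illustrative simplifications that do not prevent trivial collapse the way the paper's constraints do.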
Correlative Channel-Aware Fusion for Multi-View Time Series Classification
Multi-view time series classification (MVTSC) aims to improve the performance
by fusing the distinctive temporal information from multiple views. Existing
methods mainly focus on fusing multi-view information at an early stage, e.g.,
by learning a common feature subspace among multiple views. However, these
early fusion methods may not fully exploit the unique temporal patterns of each
view in complicated time series. Moreover, the label correlations among
multiple views, which are critical for boosting performance, are usually
under-explored in the MVTSC problem. To address the aforementioned issues, we
propose a Correlative
Channel-Aware Fusion (C2AF) network. First, C2AF extracts comprehensive and
robust temporal patterns with a two-stream structured encoder for each view, and
captures the intra-view and inter-view label correlations with a graph-based
correlation matrix. Second, a channel-aware learnable fusion mechanism is
implemented through convolutional neural networks to further explore the global
correlative patterns. These two steps are trained end-to-end in the proposed
C2AF network. Extensive experimental results on three real-world datasets
demonstrate the superiority of our approach over the state-of-the-art methods.
A detailed ablation study is also provided to show the effectiveness of each
model component.
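As a minimal, hedged PyTorch sketch of the channel-aware fusion idea only (stacking per-view correlation-style score maps as channels and fusing them with small convolutions), and not the exact C2AF architecture or its two-stream encoders:

```python
import torch
import torch.nn as nn

class ChannelAwareFusion(nn.Module):
    # Illustrative module: each view contributes an (n_classes x n_classes)
    # correlation-style score map; the maps are stacked as channels and fused
    # with small convolutions before a final classification layer.
    def __init__(self, n_views, n_classes, hidden=16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(n_views, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1),
        )
        self.classify = nn.Linear(n_classes * n_classes, n_classes)

    def forward(self, view_maps):
        # view_maps: (batch, n_views, n_classes, n_classes)
        fused = self.fuse(view_maps)        # (batch, 1, n_classes, n_classes)
        flat = fused.flatten(start_dim=1)   # (batch, n_classes * n_classes)
        return self.classify(flat)          # (batch, n_classes) logits
```

For example, ChannelAwareFusion(n_views=3, n_classes=10)(torch.randn(8, 3, 10, 10)) returns an (8, 10) tensor of logits; the class names, layer sizes, and score-map construction here are assumptions made purely for illustration.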