Correlative Channel-Aware Fusion for Multi-View Time Series Classification
Multi-view time series classification (MVTSC) aims to improve classification
performance by fusing the distinctive temporal information from multiple views. Existing
methods mainly focus on fusing multi-view information at an early stage, e.g.,
by learning a common feature subspace among multiple views. However, these
early fusion methods may not fully exploit the unique temporal patterns of each
view in complicated time series. Moreover, the label correlations of multiple
views, which are critical for boosting performance, are usually under-explored for the
MVTSC problem. To address the aforementioned issues, we propose a Correlative
Channel-Aware Fusion (C2AF) network. First, C2AF extracts comprehensive and
robust temporal patterns by a two-stream structured encoder for each view, and
captures the intra-view and inter-view label correlations with a graph-based
correlation matrix. Second, a channel-aware learnable fusion mechanism is
implemented through convolutional neural networks to further explore the global
correlative patterns. These two steps are trained end-to-end in the proposed
C2AF network. Extensive experimental results on three real-world datasets
demonstrate the superiority of our approach over the state-of-the-art methods.
A detailed ablation study is also provided to show the effectiveness of each
model component.
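The two mechanisms named above can be illustrated with a toy sketch: a graph-style label correlation matrix built from label co-occurrence, and a channel-aware fusion that, like a 1x1 convolution over the view axis, reduces to a learnable weighted sum of per-view scores. All names and values below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def label_correlation(Y):
    """Y: (n_samples, n_labels) binary label matrix. Returns a
    normalized co-occurrence matrix, one simple form of a graph-based
    label correlation matrix (rows give P(label j | label i))."""
    co = Y.T @ Y                          # raw co-occurrence counts
    counts = np.clip(np.diag(co), 1, None)
    return co / counts[:, None]

def channel_fuse(view_scores, weights):
    """view_scores: (n_views, n_classes); weights: (n_views,).
    A 1x1-conv-style fusion: a weighted sum over the view channels."""
    return np.tensordot(weights, view_scores, axes=1)

# Toy data: four samples, three labels, two views.
Y = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
C = label_correlation(Y)

scores = np.array([[0.2, 0.5, 0.3],    # view 1 class scores
                   [0.1, 0.7, 0.2]])   # view 2 class scores
fused = channel_fuse(scores, np.array([0.6, 0.4]))
```

In the actual network the fusion weights would be learned end-to-end rather than fixed, and the correlation matrix would cover both intra-view and inter-view label pairs.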
Learning from Multiple Outlooks
We propose a novel problem formulation of learning a single task when the
data are provided in different feature spaces. Each such space is called an
outlook, and is assumed to contain both labeled and unlabeled data. The
objective is to take advantage of the data from all the outlooks to better
classify each of the outlooks. We devise an algorithm that computes optimal
affine mappings from different outlooks to a target outlook by matching moments
of the empirical distributions. We further derive a probabilistic
interpretation of the resulting algorithm and a sample complexity bound
indicating how many samples are needed to adequately find the mapping. We
report the results of extensive experiments on activity recognition tasks that
show the value of the proposed approach in boosting performance.Comment: with full proofs of theorems and all experiment
Multi-View Face Recognition From Single RGBD Models of the Faces
This work takes important steps towards solving the following problem of current interest: Assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
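Step (3), the weighted voting over viewpoints, can be sketched as follows: each captured view produces a vector of identity scores, views are weighted by an assumed reliability, and the identity with the highest weighted total wins. The weights and scores here are hypothetical toy values, not the paper's trained quantities.

```python
import numpy as np

def weighted_vote(scores, view_weights):
    """scores: (n_views, n_identities) per-view identity scores;
    view_weights: (n_views,) reliability weight per viewpoint.
    Returns the index of the identity with the highest weighted sum."""
    totals = view_weights @ scores
    return int(np.argmax(totals))

# Three views of one probe face scored against three gallery identities.
scores = np.array([[0.9, 0.4, 0.1],   # near-frontal view, most reliable
                   [0.3, 0.6, 0.5],   # profile view, least reliable
                   [0.7, 0.5, 0.2]])  # intermediate view
weights = np.array([0.5, 0.2, 0.3])

pred = weighted_vote(scores, weights)
```

Because the frontal view dominates the weighting, identity 0 accumulates the highest total even though one profile view disagrees, which is the kind of cross-viewpoint evidence integration the abstract describes.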