Common Representation Learning Using Step-based Correlation Multi-Modal CNN
Deep learning techniques have been successfully used in learning a common
representation for multi-view data, wherein the different modalities are
projected onto a common subspace. Broadly, the techniques used for common
representation learning fall into two categories: canonical correlation-based
approaches and autoencoder-based approaches. In
this paper, we investigate the performance of deep autoencoder based methods on
multi-view data. We propose a novel step-based correlation multi-modal CNN
(CorrMCNN) which reconstructs one view of the data given the other while
increasing the interaction between the representations at each hidden layer or
every intermediate step. Finally, we evaluate the performance of the proposed
model on two benchmark datasets - MNIST and XRMB. Through extensive
experiments, we find that the proposed model achieves better performance than
the current state-of-the-art techniques on joint common representation learning
and transfer learning tasks. Comment: Accepted at the Asian Conference on
Pattern Recognition (ACPR-2017).
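The correlation-maximizing interaction that the abstract describes at each intermediate step can be illustrated with a minimal sketch. The function name and exact form below are assumptions for illustration; the full CorrMCNN objective also includes reconstruction terms not shown here.

```python
import numpy as np

def correlation_loss(h1, h2, eps=1e-8):
    """Negative sum of per-dimension Pearson correlations between two
    hidden representations of shape (batch, dim). Minimizing this loss
    maximizes the correlation between the two views' representations,
    the core idea behind correlation-based multi-view objectives."""
    h1c = h1 - h1.mean(axis=0)
    h2c = h2 - h2.mean(axis=0)
    num = (h1c * h2c).sum(axis=0)
    den = np.sqrt((h1c ** 2).sum(axis=0) * (h2c ** 2).sum(axis=0)) + eps
    return -np.sum(num / den)
```

In a step-based variant, a term like this would be applied at every pair of corresponding hidden layers, not only at the final bottleneck.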
VIGAN: Missing View Imputation with Generative Adversarial Networks
In an era when big data are becoming the norm, there is less concern with the
quantity but more with the quality and completeness of the data. In many
disciplines, data are collected from heterogeneous sources, resulting in
multi-view or multi-modal datasets. The missing data problem has been
challenging to address in multi-view data analysis. In particular, when certain
samples are missing an entire view of the data, the result is the missing view
problem. Classic multiple imputation or matrix completion methods are hardly
effective here, since no information is available in the missing view on which
to base the imputation for such samples. The commonly used method of simply
removing samples with a
missing view can dramatically reduce sample size, thus diminishing the
statistical power of a subsequent analysis. In this paper, we propose a novel
approach for view imputation via generative adversarial networks (GANs), which
we name VIGAN. This approach first treats each view as a separate domain and
identifies domain-to-domain mappings via a GAN using randomly-sampled data from
each view, and then employs a multi-modal denoising autoencoder (DAE) to
reconstruct the missing view from the GAN outputs based on paired data across
the views. By optimizing the GAN and the DAE jointly, our model integrates
knowledge of the domain mappings and view correspondences to effectively
recover the missing view. Empirical results on benchmark datasets
validate the VIGAN approach by comparing against the state of the art. The
evaluation of VIGAN in a genetic study of substance use disorders further
proves the effectiveness and usability of this approach in the life sciences.
Comment: 10 pages, 8 figures, conference.
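VIGAN learns domain-to-domain mappings with a GAN and refines them with a multi-modal denoising autoencoder. As a conceptual stand-in only, the cross-view imputation step can be sketched with a linear least-squares map fitted on paired samples; the function names are hypothetical and this is not the paper's actual GAN/DAE model.

```python
import numpy as np

def fit_cross_view_map(X_a, X_b):
    """Least-squares linear map from view A to view B, fitted on paired
    samples (a linear stand-in for the learned cross-domain mapping)."""
    W, *_ = np.linalg.lstsq(X_a, X_b, rcond=None)
    return W

def impute_missing_view(X_a_new, W):
    """Impute the missing view B for samples that only have view A."""
    return X_a_new @ W
```

The point of the sketch is the data flow: the mapping is fitted where both views are observed, then applied to samples whose second view is entirely missing.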
Evidence Transfer for Improving Clustering Tasks Using External Categorical Evidence
In this paper we introduce evidence transfer for clustering, a deep learning
method that can incrementally manipulate the latent representations of an
autoencoder, according to external categorical evidence, in order to improve a
clustering outcome. We define evidence transfer as the process by which the
categorical outcome of an external, auxiliary task is exploited to improve a
primary task, in this case representation learning for clustering. Our proposed
method makes no assumptions regarding the categorical evidence presented, nor
the structure of the latent space. We compare our method against the baseline
solution by performing k-means clustering before and after its deployment.
Experiments with three different kinds of evidence show that our method
effectively manipulates the latent representations when provided with real
corresponding evidence, while remaining robust when presented with low-quality
evidence.
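One way the latent space of an autoencoder can be steered by external categorical evidence is to add a cross-entropy term that aligns a read-out of the latent code with the evidence labels. The sketch below is an assumption-laden illustration of that idea, not the paper's exact objective: the linear read-out `W_e` and the weighting `alpha` are invented for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def evidence_transfer_loss(x, x_rec, z, W_e, evidence, alpha=0.1):
    """Autoencoder reconstruction error plus a cross-entropy term that
    aligns a linear read-out of the latent code z with one-hot external
    categorical evidence. Minimizing the combined loss nudges the latent
    representations toward the structure of the auxiliary evidence."""
    rec = np.mean((x - x_rec) ** 2)
    p = softmax(z @ W_e)
    ce = -np.mean(np.sum(evidence * np.log(p + 1e-12), axis=1))
    return rec + alpha * ce
```

Because the evidence only enters through an additive, weighted term, low-quality evidence degrades gracefully rather than dominating the reconstruction objective, which matches the robustness behaviour the abstract reports.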
auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks
auDeep is a Python toolkit for deep unsupervised representation learning from
acoustic data. It is based on a recurrent sequence to sequence autoencoder
approach which can learn representations of time series data by taking into
account their temporal dynamics. We provide an extensive command line interface
in addition to a Python API for users and developers, both of which are
comprehensively documented and publicly available at
https://github.com/auDeep/auDeep. Experimental results indicate that auDeep
features are competitive with state-of-the-art audio classification
- …
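The encoder half of a sequence-to-sequence autoencoder like auDeep's compresses a variable-length time series into a fixed-length vector by running a recurrent network over it and keeping the final hidden state. The toy below replaces auDeep's multi-layer recurrent encoder with a single plain tanh cell; the function and weight names are assumptions for illustration.

```python
import numpy as np

def rnn_encode(x_seq, W_x, W_h, b):
    """Run a plain tanh RNN over a (T, d_in) sequence and return the
    final hidden state: the fixed-length summary that a seq2seq
    autoencoder's encoder would hand to its decoder."""
    h = np.zeros(W_h.shape[0])
    for x_t in x_seq:
        h = np.tanh(W_x @ x_t + W_h @ h + b)
    return h
```

Because the hidden state is updated step by step, the representation reflects the temporal dynamics of the input rather than just its marginal statistics, which is the property the toolkit exploits for acoustic data.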