Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models
The interpretation of complex high-dimensional data typically requires the
use of dimensionality reduction techniques to extract explanatory
low-dimensional representations. However, in many real-world problems these
representations may not be sufficient to aid interpretation on their own, and
it would be desirable to interpret the model in terms of the original features
themselves. Our goal is to characterise how feature-level variation depends on
latent low-dimensional representations, external covariates, and non-linear
interactions between the two. In this paper, we propose to achieve this through
a structured kernel decomposition in a hybrid Gaussian Process model which we
call the Covariate Gaussian Process Latent Variable Model (c-GPLVM). We
demonstrate the utility of our model on simulated examples and applications in
disease progression modelling from high-dimensional gene expression data in the
presence of additional phenotypes. In each setting we show how the c-GPLVM can
extract low-dimensional structures from high-dimensional data sets whilst
allowing a breakdown of feature-level variability that is not present in other
commonly used dimensionality reduction approaches.
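The structured kernel decomposition at the heart of the c-GPLVM can be sketched as an additive-plus-interaction combination of a latent-space kernel and a covariate kernel. The sketch below is illustrative only: the RBF components and the specific sum-plus-product form are assumptions for exposition, not the paper's exact kernel.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between 1-D input arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def structured_kernel(z, c):
    """Illustrative structured decomposition (hypothetical form):
    K = K_z (latent) + K_c (covariate) + K_z * K_c (non-linear interaction).
    z: latent coordinates, c: external covariate values (both 1-D arrays)."""
    Kz = rbf(z, z)          # variation explained by the latent representation
    Kc = rbf(c, c)          # variation explained by the external covariate
    return Kz + Kc + Kz * Kc  # interaction term couples the two sources
```

Because each additive term is itself a valid kernel, the decomposition lets the model attribute feature-level variance separately to latent structure, covariates, and their interaction.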
Stratified Transfer Learning for Cross-domain Activity Recognition
In activity recognition, it is often expensive and time-consuming to acquire
sufficient activity labels. To solve this problem, transfer learning leverages
the labeled samples from the source domain to annotate the target domain, which
has few or no labels. Existing approaches typically consider learning a
global domain shift while ignoring the intra-affinity between classes, which
will hinder the performance of the algorithms. In this paper, we propose a
novel and general cross-domain learning framework that can exploit the
intra-affinity of classes to perform intra-class knowledge transfer. The
proposed framework, referred to as Stratified Transfer Learning (STL), can
dramatically improve the classification accuracy for cross-domain activity
recognition. Specifically, STL first obtains pseudo labels for the target
domain via a majority-voting technique. Then, it performs intra-class knowledge
transfer iteratively to transform both domains into the same subspaces.
Finally, the labels of the target domain are obtained via a second annotation. To
evaluate the performance of STL, we conduct comprehensive experiments on three
large public activity recognition datasets~(i.e. OPPORTUNITY, PAMAP2, and UCI
DSADS), which demonstrate that STL significantly outperforms other
state-of-the-art methods w.r.t. classification accuracy (improvement of 7.68%).
Furthermore, we extensively investigate the performance of STL across different
degrees of similarity and activity levels between domains. We also
discuss the potential of STL in other pervasive computing applications to
provide empirical experience for future research.
Comment: 10 pages; accepted by IEEE PerCom 2018; full paper (camera-ready version).
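The first STL step, obtaining pseudo labels for the unlabeled target domain by majority voting, can be sketched as below. This is a minimal illustration assuming the votes come from several base classifiers trained on the source domain; the function name and input layout are hypothetical, not from the paper.

```python
import numpy as np

def majority_vote_pseudo_labels(predictions):
    """Assign each target sample the class most base classifiers agree on.
    predictions: array-like of shape (n_classifiers, n_samples) holding
    integer class ids predicted by each source-trained classifier."""
    preds = np.asarray(predictions)
    n_samples = preds.shape[1]
    labels = np.empty(n_samples, dtype=int)
    for j in range(n_samples):
        # Count votes for each class on sample j and keep the most frequent.
        vals, counts = np.unique(preds[:, j], return_counts=True)
        labels[j] = vals[np.argmax(counts)]
    return labels
```

In STL these pseudo labels would then seed the iterative intra-class knowledge transfer that maps both domains into shared subspaces.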