Deep Dimension Reduction for Supervised Representation Learning
The success of deep supervised learning depends on its automatic data
representation abilities. Among all the characteristics of an ideal
representation for high-dimensional complex data, information preservation, low
dimensionality and disentanglement are the most essential ones. In this work,
we propose a deep dimension reduction (DDR) approach to achieving a good data
representation with these characteristics for supervised learning. At the
population level, we formulate the ideal representation learning task as
finding a nonlinear dimension reduction map that minimizes the sum of losses
characterizing conditional independence and disentanglement. We estimate the
target map at the sample level nonparametrically with deep neural networks. We
derive a bound on the excess risk of the deep nonparametric estimator. The
proposed method is validated through comprehensive numerical experiments and
real-data analysis in regression and classification settings.
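The abstract does not specify the exact form of the disentanglement loss, but a common stand-in for disentanglement is decorrelation of the learned coordinates. The sketch below (an assumption, not the paper's actual loss) penalizes the squared off-diagonal entries of the representation's covariance matrix, which vanishes exactly when the coordinates are uncorrelated:

```python
import numpy as np

def disentanglement_penalty(Z):
    """Sum of squared off-diagonal entries of the sample covariance of Z.
    Zero iff the representation's coordinates are (empirically) uncorrelated;
    a decorrelation-style surrogate for the disentanglement term."""
    Zc = Z - Z.mean(axis=0)             # center each coordinate
    C = Zc.T @ Zc / len(Z)              # sample covariance matrix
    off = C - np.diag(np.diag(C))       # zero out the diagonal
    return float((off ** 2).sum())

rng = np.random.default_rng(0)
Z_indep = rng.normal(size=(1000, 4))    # nearly uncorrelated coordinates
A = rng.normal(size=(4, 4))
Z_mixed = Z_indep @ A                   # linearly mixed, hence correlated
```

In a full training loop, such a penalty would be added to a supervised fit term (the conditional-independence part of the objective) and minimized jointly over the parameters of the dimension-reduction network.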
SAFS: A Deep Feature Selection Approach for Precision Medicine
In this paper, we propose a new deep feature selection method based on a deep
architecture. Our method uses stacked auto-encoders to learn feature
representations at a higher level of abstraction. We developed and applied a novel feature learning
approach to a specific precision medicine problem, which focuses on assessing
and prioritizing risk factors for hypertension (HTN) in a vulnerable
demographic subgroup (African-American). Our approach is to use deep learning
to identify significant risk factors affecting left ventricular mass indexed to
body surface area (LVMI) as an indicator of heart damage risk. The results show
that our feature learning and representation approach leads to better results
than competing approaches.
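The abstract does not detail the auto-encoder architecture or the risk-factor scoring rule, so the following is a minimal single-layer sketch under stated assumptions: a tiny auto-encoder trained with plain gradient descent on synthetic data, with input features afterwards ranked by the norm of their encoder weights as a crude importance score:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, 0] *= 3.0                    # give feature 0 extra variance

n_in, n_hid = X.shape[1], 3
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)      # encoder: compressed representation
    return H, H @ W2 + b2         # decoder: reconstruction

_, X_hat = forward(X)
loss_start = float(((X_hat - X) ** 2).mean())

lr = 0.02
for _ in range(500):
    H, X_hat = forward(X)
    err = (X_hat - X) / len(X)            # gradient of mean squared error
    dW2 = H.T @ err; db2 = err.sum(axis=0)
    dH = err @ W2.T * (1 - H ** 2)        # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, X_hat = forward(X)
loss_end = float(((X_hat - X) ** 2).mean())

# rank input features by encoder weight magnitude (a hypothetical scoring
# rule for illustration; the paper's actual prioritization may differ)
importance = np.linalg.norm(W1, axis=1)
ranking = np.argsort(importance)[::-1]
```

Stacking further encoder layers on top of `H` would yield the deeper representation the paper describes.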
On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems
The use of orthogonal projections on high-dimensional input and target data
in learning frameworks is studied. First, we investigate the relations between
two standard objectives in dimension reduction, preservation of variance and of
pairwise relative distances. Investigations of their asymptotic correlation, as
well as numerical experiments, show that a projection usually does not satisfy
both objectives at once. In a standard classification problem, we determine
projections on the input data that balance the objectives and compare
subsequent results. Next, we extend our application of orthogonal projections
to deep learning tasks and introduce a general framework of augmented target
loss functions. These loss functions integrate additional information via
transformations and projections of the target data. In two supervised learning
problems, clinical image segmentation and music information classification, the
application of our proposed augmented target loss functions increases
accuracy.
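The abstract describes the augmented target loss only in general terms, so the sketch below is an assumed concrete instance: a base mean-squared-error term plus a weighted error term computed on a linear transformation `T` of the targets (here `T` is a toy random projection standing in for the paper's task-specific transformations):

```python
import numpy as np

def mse(a, b):
    return float(((a - b) ** 2).mean())

def augmented_target_loss(y_pred, y_true, T, lam=0.5):
    """Base MSE plus a weighted MSE between projected targets.
    T encodes additional target-space information; lam controls its weight.
    (Illustrative form only; the paper's transformations are task-specific.)"""
    return mse(y_pred, y_true) + lam * mse(y_pred @ T, y_true @ T)

rng = np.random.default_rng(0)
y_true = rng.normal(size=(16, 4))
y_pred = y_true + 0.1 * rng.normal(size=(16, 4))
T = rng.normal(size=(4, 2))   # toy projection of the target space
```

With `lam=0` the augmented loss reduces to the plain MSE, and the extra term can only add to the loss, so it acts as a structured regularizer on the predictions.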