Principal manifolds and graphs in practice: from molecular biology to dynamical systems
We present several applications of non-linear data modeling, using principal
manifolds and principal graphs constructed using the metaphor of elasticity
(elastic principal graph approach). These approaches are generalizations of
Kohonen's self-organizing maps, a class of artificial neural networks. On
several examples we show the advantages of using non-linear objects for data
approximation in comparison with linear ones. We propose four numerical
criteria for comparing linear and non-linear mappings of datasets into
lower-dimensional spaces. The examples are taken from comparative political
science, from the analysis of high-throughput data in molecular biology, and
from the analysis of dynamical systems.
Comment: 12 pages, 9 figures
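As a rough illustration of the linear-versus-non-linear comparison described above, the following sketch contrasts the reconstruction error of a one-dimensional linear PCA approximation with a non-linear one. Kernel PCA with an RBF kernel stands in for the elastic principal graph purely for illustration; the dataset, kernel, and gamma value are assumptions, not the authors' setup.

# Hedged sketch: compare a linear and a non-linear 1-D approximation of
# 3-D data. Kernel PCA is a stand-in for the paper's elastic principal
# graphs, which are not reproduced here.
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA, KernelPCA

X, _ = make_s_curve(n_samples=1000, noise=0.05, random_state=0)

# Linear: project onto the first principal component and reconstruct.
pca = PCA(n_components=1)
X_lin = pca.inverse_transform(pca.fit_transform(X))

# Non-linear: 1-D kernel PCA with a learned approximate pre-image.
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=0.5,
                 fit_inverse_transform=True)
X_nonlin = kpca.inverse_transform(kpca.fit_transform(X))

# Mean squared reconstruction error, one possible numerical criterion.
print("linear MSE:    ", np.mean((X - X_lin) ** 2))
print("non-linear MSE:", np.mean((X - X_nonlin) ** 2))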
Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction
Graph Signal Processing (GSP) is a promising framework for analyzing
multi-dimensional neuroimaging datasets while taking into account both the
spatial and functional dependencies between brain signals. In the present work,
we apply dimensionality reduction techniques based on graph representations of
the brain to decode brain activity from real and simulated fMRI datasets. We
introduce seven graphs obtained from a) geometric structure and/or b)
functional connectivity between brain areas at rest, and compare them when
performing dimension reduction for classification. We show that mixed graphs
using both a) and b) offer the best performance. We also show that graph
sampling methods perform better than classical dimension reduction methods,
including Principal Component Analysis (PCA) and Independent Component Analysis
(ICA).
Comment: 5 pages, GlobalSIP 201
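A minimal sketch of graph-based dimension reduction in this spirit: build a graph from functional connectivity, take the graph Fourier basis from the Laplacian eigenvectors, and keep only the lowest graph frequencies. The synthetic data and the correlation threshold below are illustrative assumptions; the paper's seven graphs are not reproduced here.

# Hedged sketch: project signals onto the low-frequency eigenvectors of
# a graph Laplacian built from functional connectivity (illustrative
# synthetic data, not fMRI).
import numpy as np

rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 50))   # 200 samples x 50 brain areas

# Functional-connectivity graph: thresholded absolute correlation.
corr = np.corrcoef(signals, rowvar=False)
A = (np.abs(corr) > 0.1).astype(float)
np.fill_diagonal(A, 0.0)

# Combinatorial graph Laplacian and its eigendecomposition.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)

# Graph Fourier transform, truncated to the k lowest graph frequencies.
k = 10
reduced = signals @ eigvecs[:, :k]         # 200 samples x 10 features
print(reduced.shape)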
On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems
The use of orthogonal projections on high-dimensional input and target data
in learning frameworks is studied. First, we investigate the relations between
two standard objectives in dimension reduction, preservation of variance and of
pairwise relative distances. Investigations of their asymptotic correlation, as
well as numerical experiments, show that a projection usually does not satisfy
both objectives at once. In a standard classification problem, we determine
projections on the input data that balance the objectives and compare
subsequent results. Next, we extend our application of orthogonal projections
to deep learning tasks and introduce a general framework of augmented target
loss functions. These loss functions integrate additional information via
transformations and projections of the target data. In two supervised learning
problems, clinical image segmentation and music information classification, the
application of our proposed augmented target loss functions increases the
accuracy.
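Two minimal sketches of the ideas above. First, the variance-versus-distance trade-off: the snippet computes both criteria, retained variance and mean pairwise relative-distance distortion, for a PCA projection and a Gaussian random projection. The data and dimensions are assumptions, not the paper's experiments.

# Hedged sketch: evaluate two projection objectives, retained variance
# and pairwise relative-distance distortion, on synthetic data.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 100))

for name, model in [("PCA", PCA(n_components=20)),
                    ("random", GaussianRandomProjection(n_components=20))]:
    Z = model.fit_transform(X)
    var_kept = Z.var(axis=0).sum() / X.var(axis=0).sum()
    distortion = np.abs(pdist(Z) / pdist(X) - 1).mean()
    print(name, "retained variance:", round(var_kept, 3),
          "distance distortion:", round(distortion, 3))

Second, an augmented target loss in the sense described above: a standard loss term plus a weighted term on transformed targets and predictions. The transform T and the weight lam are placeholder assumptions; the paper's transformations are task-specific.

# Hedged sketch of an augmented target loss: MSE plus a weighted MSE on
# linearly transformed targets and predictions.
import numpy as np

def augmented_target_loss(y_true, y_pred, T, lam=0.5):
    base = np.mean((y_true - y_pred) ** 2)
    aug = np.mean((y_true @ T.T - y_pred @ T.T) ** 2)
    return base + lam * aug

rng = np.random.default_rng(0)
y_true = rng.standard_normal((32, 8))       # batch of 8-dim targets
y_pred = y_true + 0.1 * rng.standard_normal((32, 8))
T = rng.standard_normal((4, 8))             # projection of the target space
print(augmented_target_loss(y_true, y_pred, T))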