Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms
Brain networks in fMRI are typically identified using spatial independent
component analysis (ICA), yet mathematical constraints such as sparse coding
and positivity both provide alternative, biologically plausible frameworks for
generating brain networks. Non-negative Matrix Factorization (NMF) would
suppress negative BOLD signal by enforcing positivity. Spatial sparse coding
algorithms (L1 Regularized Learning and K-SVD) would impose local
specialization and a discouragement of multitasking, where the total observed
activity in a single voxel originates from a restricted number of possible
brain networks.
The assumptions of independence, positivity, and sparsity for encoding
task-related brain networks are compared: the brain networks obtained under
each constraint are used as basis functions to encode the observed
functional activity at a given time point. These encodings are decoded using
machine learning to compare both the algorithms and their assumptions, using
the time series weights to predict whether a subject is viewing a video,
listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects.
For classifying cognitive activity, the sparse coding algorithm of L1
Regularized Learning consistently outperformed 4 variations of ICA across
different numbers of networks and noise levels (p < 0.001). The NMF algorithms,
which suppressed negative BOLD signal, had the poorest accuracy. Within each
algorithm, encodings using sparser spatial networks (containing more
zero-valued voxels) had higher classification accuracy (p < 0.001). The success
of the sparse coding algorithms suggests that methods which enforce sparse
coding, discourage multitasking, and promote local specialization may better
capture the underlying source processes than those, such as ICA, which allow
inexhaustible local processes.
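The three constraints contrasted in this abstract (independence, positivity, sparsity) can be illustrated with off-the-shelf matrix factorizations. The sketch below uses scikit-learn's FastICA, NMF, and DictionaryLearning as generic stand-ins on surrogate data; the paper's specific algorithms (its ICA variants, L1 Regularized Learning, K-SVD) and the fMRI data are not reproduced here, and the "time points x voxels" matrix is purely synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF, FastICA, DictionaryLearning

rng = np.random.default_rng(0)
# surrogate data: 100 "time points" x 50 "voxels" (made non-negative for NMF)
X = np.abs(rng.standard_normal((100, 50)))
n_networks = 5

# independence constraint: ICA
ica_weights = FastICA(n_components=n_networks, random_state=0).fit_transform(X)

# positivity constraint: NMF suppresses negative signal by construction
nmf_weights = NMF(n_components=n_networks, init="nndsvd", max_iter=500,
                  random_state=0).fit_transform(X)

# sparsity constraint: l1-penalised dictionary learning zeroes out voxels
dl = DictionaryLearning(n_components=n_networks, alpha=1.0, random_state=0)
sparse_weights = dl.fit_transform(X)

# fraction of (near-)zero voxels in the learned spatial "networks"
sparsity = float(np.mean(np.isclose(dl.components_, 0.0, atol=1e-6)))
print(ica_weights.shape, nmf_weights.shape, sparse_weights.shape, sparsity)
```

In the paper's pipeline, the per-time-point weights (the `*_weights` matrices above) would then be fed to a classifier to predict the stimulus condition.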
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
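The core idea described above, representing data as linear combinations of a few dictionary elements, can be sketched in a few lines. This toy example (not from the monograph) synthesizes a signal from 3 atoms of a random dictionary and recovers a sparse code with scikit-learn's `sparse_encode`:

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(1)

# dictionary of 20 unit-norm atoms in R^8
D = rng.standard_normal((20, 8))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# a signal synthesised from just 3 atoms
true_codes = np.zeros(20)
true_codes[[2, 7, 11]] = [1.5, -2.0, 0.8]
x = true_codes @ D

# l1-penalised coding: most of the 20 coefficients are driven to zero
codes = sparse_encode(x.reshape(1, -1), D, algorithm="lasso_lars", alpha=0.05)
print(codes.shape, np.count_nonzero(codes))
```

Dictionary *learning*, the monograph's focus, additionally adapts `D` itself to a collection of signals rather than fixing it in advance.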
Robust Independent Component Analysis via Minimum Divergence Estimation
Independent component analysis (ICA) has been shown to be useful in many
applications. However, most ICA methods are sensitive to data contamination and
outliers. In this article we introduce a general minimum U-divergence framework
for ICA, which covers some standard ICA methods as special cases. Within the
U-family we further focus on the gamma-divergence due to its desirable property
of super robustness, which gives the proposed method gamma-ICA. Statistical
properties and technical conditions for the consistency of gamma-ICA are
rigorously studied. In the limiting case, it leads to a necessary and
sufficient condition for the consistency of MLE-ICA. This necessary and
sufficient condition is weaker than the condition known in the literature.
Since the parameter of interest in ICA is an orthogonal matrix, a geometrical
algorithm based on gradient flows on the special orthogonal group is introduced
to implement gamma-ICA. Furthermore, a data-driven selection for the gamma
value, which is critical to the performance of gamma-ICA, is developed. The
performance, especially the robustness, of gamma-ICA in comparison with
standard ICA methods is demonstrated through experimental studies using
simulated data and image data.
Comment: 7 figures
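The contamination setting this paper targets is easy to reproduce: mix independent sources, corrupt a small fraction of samples with gross outliers, and unmix. Gamma-ICA itself is not available in scikit-learn, so the sketch below only sets up the scenario and runs standard (non-robust) FastICA as the baseline that the paper's experiments compare against; the sources, mixing matrix, and outlier scale are all invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 2000
t = np.linspace(0, 8, n)

# two independent sources, linearly mixed
S = np.c_[np.sin(2.0 * t), np.sign(np.cos(3.0 * t))]
A = np.array([[1.0, 0.5], [0.5, 1.0]])  # mixing matrix
X = S @ A.T

# contaminate 1% of the samples with gross outliers
X_noisy = X.copy()
idx = rng.choice(n, size=20, replace=False)
X_noisy[idx] += 25.0 * rng.standard_normal((20, 2))

# standard ICA baseline; a robust variant would down-weight the outliers
S_hat = FastICA(n_components=2, random_state=0).fit_transform(X_noisy)
print(S_hat.shape)
```

A robust estimator such as gamma-ICA aims to keep the recovered `S_hat` close to the true sources even as the outlier fraction or magnitude grows.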