Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas such as pattern recognition and information retrieval. It
factorizes a matrix into a product of two low-rank nonnegative matrices,
yielding a parts-based, linear representation of nonnegative data.
Recently, Graph regularized NMF (GrNMF) was proposed to find a compact
representation, one that uncovers the hidden semantics while simultaneously
respecting the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel approach that employs Multiple Kernel Learning
to refine the graph structure so that it reflects both the factorization of
the matrix and the new data space. GrNMF is improved by utilizing the graph
refined by the kernel learning, and a novel kernel learning method is then
introduced under the GrNMF framework. Our approach shows encouraging results
in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF,
and SVD.
Comment: This paper has been withdrawn by the author due to the terrible
writing
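As background for the abstract above, the basic NMF factorization it builds on can be sketched with the classic Lee-Seung multiplicative updates. This is a minimal generic NMF solver, not the paper's kernel-refined or graph-regularized variant; all names and sizes are illustrative:

```python
import numpy as np

def nmf(X, rank, n_iter=200, eps=1e-9):
    """Factorize nonnegative X (m x n) into W (m x rank) and H (rank x n)
    by multiplicative updates minimizing the Frobenius loss ||X - WH||_F^2.
    Both factors stay elementwise nonnegative throughout."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative update for H, then W; eps guards against division by zero.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

GrNMF would add a graph-Laplacian penalty on H to this objective; the multiplicative updates change accordingly, but the nonnegative two-factor structure shown here is the same.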
Fiber Orientation Estimation Guided by a Deep Network
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for
noninvasively imaging the brain's white matter tracts. The fiber orientation
(FO) is a key feature computed from dMRI for fiber tract reconstruction.
Because the number of FOs in a voxel is usually small, dictionary-based sparse
reconstruction has been used to estimate FOs with a relatively small number of
diffusion gradients. However, accurate FO estimation in regions with complex FO
configurations in the presence of noise can still be challenging. In this work
we explore the use of a deep network for FO estimation in a dictionary-based
framework and propose an algorithm named Fiber Orientation Reconstruction
guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a
smaller dictionary encoding coarse basis FOs to represent the diffusion
signals. To estimate the mixture fractions of the dictionary atoms (and thus
coarse FOs), a deep network is designed specifically for solving the sparse
reconstruction problem. Here, the smaller dictionary is used to reduce the
computational cost of training. Second, the coarse FOs inform the final FO
estimation, where a larger dictionary encoding dense basis FOs is used and a
weighted l1-norm regularized least squares problem is solved to encourage FOs
that are consistent with the network output. FORDN was evaluated and compared
with state-of-the-art algorithms that estimate FOs using sparse reconstruction
on simulated and real dMRI data, and the results demonstrate the benefit of
using a deep network for FO estimation.
Comment: A shorter version is accepted by MICCAI 201
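The second FORDN step solves a weighted l1-norm regularized least squares problem. A minimal sketch of such a solver, using ISTA (a generic proximal-gradient method; the dictionary, weights, and problem sizes below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

def weighted_l1_ls(D, y, w, n_iter=500):
    """ISTA for min_f 0.5 * ||D f - y||_2^2 + sum_i w_i * |f_i|.
    In a FORDN-like scheme, atoms consistent with the network's coarse
    FO output would receive smaller weights w_i (weaker penalty)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the data-term gradient
    f = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ f - y)          # gradient step on the least-squares term
        z = f - g / L
        # Elementwise soft-thresholding: the proximal operator of the weighted l1 norm.
        f = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return f
```

Larger weights drive the corresponding mixture fractions to zero, which is how the coarse network output can steer the final sparse FO estimate toward consistent dictionary atoms.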
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1