A Tensor-Based Dictionary Learning Approach to Tomographic Image Reconstruction
We consider tomographic reconstruction using priors in the form of a
dictionary learned from training images. The reconstruction has two stages:
first we construct a tensor dictionary prior from our training data, and then
we pose the reconstruction problem in terms of recovering the expansion
coefficients in that dictionary. Our approach differs from past approaches in
that a) we use a third-order tensor representation for our images and b) we
recast the reconstruction problem using the tensor formulation. The dictionary
learning problem is presented as a non-negative tensor factorization problem
with sparsity constraints. The reconstruction problem is formulated in a convex
optimization framework by looking for a solution with a sparse representation
in the tensor dictionary. Numerical results show that our tensor formulation
leads to very sparse representations of both the training images and the
reconstructions, owing to the dictionary's ability to represent repeated
features compactly.
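As a concrete illustration of the second stage, here is a minimal sketch of recovering sparse expansion coefficients in a given dictionary, in a plain matrix (not tensor) setting with a synthetic projection matrix; all sizes, names, and the ISTA solver are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: recover sparse coefficients x of an image in dictionary D from
# tomographic-style measurements b = A @ D @ x via l1-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 80, 120, 200           # measurements, pixels, dictionary atoms (assumed)
A = rng.standard_normal((m, n))  # stand-in for the tomographic projection matrix
D = rng.standard_normal((n, k))  # stand-in for a learned dictionary
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms

x_true = np.zeros(k)
x_true[rng.choice(k, 8, replace=False)] = rng.standard_normal(8)  # sparse coefficients
b = A @ D @ x_true               # noiseless measurements

def ista(A, D, b, lam=0.05, iters=500):
    """ISTA for min_x 0.5*||A D x - b||^2 + lam*||x||_1."""
    M = A @ D
    L = np.linalg.norm(M, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        grad = M.T @ (M @ x - b)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, D, b)
print("support recovered:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
print("image-domain error:", np.linalg.norm(D @ x_hat - D @ x_true))
```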
Provable Bounds for Learning Some Deep Representations
We give algorithms with provable guarantees that learn a class of deep nets
in the generative model view popularized by Hinton and others. Our generative
model is an n-node multilayer neural net that has degree at most n^gamma for
some gamma < 1, and each edge has a random edge weight in [-1, 1]. Our
algorithm learns almost all networks in this class with polynomial
running time. The sample complexity is quadratic or cubic depending upon the
details of the model.
The algorithm uses layerwise learning. It is based upon a novel idea of
observing correlations among features and using these to infer the underlying
edge structure via a global graph recovery procedure. The analysis of the
algorithm reveals interesting structure of neural networks with random edge
weights.
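The core idea of inferring edges from correlations can be illustrated in a toy setting (our sketch, not the paper's algorithm): with sparse random connectivity and sparse hidden activations, observed units that share a hidden parent are noticeably correlated, so thresholding pairwise correlations exposes the graph. All sizes and the threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_obs, deg = 30, 100, 3      # each observed unit has `deg` random parents

# Random bipartite graph: G[i, j] = 1 if hidden unit j feeds observed unit i.
G = np.zeros((n_obs, n_hidden))
for i in range(n_obs):
    G[i, rng.choice(n_hidden, deg, replace=False)] = 1.0
W = G * rng.uniform(-1, 1, G.shape)    # random edge weights in [-1, 1]

# Sparse binary hidden samples drive the observed layer.
H = (rng.random((5000, n_hidden)) < 0.05).astype(float)
Y = H @ W.T

# Pairs of observed units sharing a parent stand out in the correlation matrix.
C = np.corrcoef(Y.T)
share_parent = (G @ G.T) > 0
guessed = np.abs(C) > 0.1
np.fill_diagonal(share_parent, False)
np.fill_diagonal(guessed, False)
print("fraction of pairs classified correctly:", (guessed == share_parent).mean())
```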
Astronomical Data Analysis and Sparsity: from Wavelets to Compressed Sensing
Wavelets have been used extensively for several years now in astronomy for
many purposes, ranging from data filtering and deconvolution, to star and
galaxy detection or cosmic ray removal. More recent sparse representations such
as ridgelets or curvelets have also been proposed for the detection of
anisotropic features such as cosmic strings in the cosmic microwave background.
We review in this paper a range of methods based on sparsity that have been
proposed for astronomical data analysis. We also discuss the impact of
Compressed Sensing, the new sampling theory, on astronomy: collecting the data,
transferring them to Earth, and reconstructing images from incomplete
measurements.
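A minimal sketch of the compressed-sensing idea mentioned above (illustrative, not the paper's pipeline): recover a sparse signal from far fewer random measurements than samples by basis pursuit, written as a linear program. All sizes and the Gaussian sensing matrix are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, s = 200, 60, 5                              # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ x                                       # incomplete measurements

# Basis pursuit min ||x||_1 s.t. Phi x = y as an LP:
# split x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.abs(x_hat - x).max())
```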
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
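As a toy example of representation learning (our sketch, not from the review): a linear auto-encoder trained by gradient descent learns a low-dimensional code that, for squared loss, spans the leading principal subspace of the data. Dimensions, step size, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Data living on a 2-dimensional linear subspace of R^10.
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))

W_enc = 0.1 * rng.standard_normal((10, 2))   # encoder: h = x @ W_enc
W_dec = 0.1 * rng.standard_normal((2, 10))   # decoder: x_hat = h @ W_dec

lr = 0.01
for _ in range(5000):
    H = X @ W_enc
    R = H @ W_dec - X                        # reconstruction residual
    grad_dec = H.T @ R / len(X)              # gradient of 0.5*mean squared error
    grad_enc = X.T @ (R @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("reconstruction MSE:", np.mean((X @ W_enc @ W_dec - X) ** 2))
```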
Exact Recovery Conditions for Sparse Representations with Partial Support Information
We address the exact recovery of a k-sparse vector in the noiseless setting
when some partial information on the support is available. This partial
information takes the form of either a subset of the true support or an
approximate subset including wrong atoms as well. We derive a new sufficient
and worst-case necessary (in some sense) condition for the success of some
procedures based on lp-relaxation, Orthogonal Matching Pursuit (OMP) and
Orthogonal Least Squares (OLS). Our result is based on the coherence "mu" of
the dictionary and relaxes the well-known condition mu<1/(2k-1) ensuring the
recovery of any k-sparse vector in the non-informed setup. It reads
mu<1/(2k-g+b-1) when the informed support is composed of g good atoms and b
wrong atoms. We emphasize that our condition is complementary to some
restricted-isometry based conditions by showing that none of them implies the
other.
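The two coherence bounds are easy to compare numerically. The sketch below (the random dictionary and the values of k, g, b are assumptions) computes the mutual coherence mu of a unit-norm dictionary and evaluates both the classical bound 1/(2k-1) and the informed bound 1/(2k-g+b-1) stated above.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)             # unit-norm atoms

Gram = np.abs(A.T @ A)
np.fill_diagonal(Gram, 0.0)
mu = Gram.max()                            # mutual coherence of the dictionary

k, g, b = 4, 2, 1                          # sparsity, good atoms, wrong atoms
print(f"mu = {mu:.3f}")
print("classical bound 1/(2k-1)    =", 1 / (2 * k - 1))
print("informed bound 1/(2k-g+b-1) =", 1 / (2 * k - g + b - 1))
print("classical condition holds:", mu < 1 / (2 * k - 1))
print("informed  condition holds:", mu < 1 / (2 * k - g + b - 1))
```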
Because this mutual coherence condition is common to all procedures, we carry
out a finer analysis based on the Null Space Property (NSP) and the Exact
Recovery Condition (ERC). Connections are established regarding the
characterization of lp-relaxation procedures and OMP in the informed setup.
First, we emphasize that the truncated NSP enjoys an ordering property when p
is decreased. Second, the partial ERC for OMP (ERC-OMP) implies in turn the
truncated NSP for the informed l1 problem, and the truncated NSP for p<1.
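For concreteness, here is a hedged sketch of OMP in the informed setup (our illustration, not the paper's code): atoms from the known partial support are selected and projected out first, then standard OMP proceeds greedily. Handling wrong atoms in the informed support would require a larger selection budget, which this sketch omits.

```python
import numpy as np

def informed_omp(A, y, k, known_support=()):
    """Greedy recovery of a k-sparse x with A x = y, seeded with known atoms."""
    support = list(known_support)
    coef = np.zeros(0)
    residual = y.copy()
    if support:                              # project onto the known atoms first
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    while len(support) < k:
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)                     # unit-norm atoms
true_support = np.sort(rng.choice(100, 6, replace=False))
x_true = np.zeros(100)
x_true[true_support] = rng.standard_normal(6)
y = A @ x_true

# Informed setup: g = 2 good atoms of the support are known in advance.
x_hat = informed_omp(A, y, k=6, known_support=true_support[:2].tolist())
print("true support:", true_support.tolist())
print("recovered   :", sorted(np.nonzero(np.abs(x_hat) > 1e-8)[0].tolist()))
```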