Robust 1-Bit Compressed Sensing via Hinge Loss Minimization
This work theoretically studies the problem of estimating a structured
high-dimensional signal from noisy 1-bit Gaussian
measurements. Our recovery approach is based on a simple convex program which
uses the hinge loss function as the data fidelity term. While such a risk
minimization strategy is very natural for learning binary output models, such as
in classification, its capacity to estimate a specific signal vector is largely
unexplored. A major difficulty is that the hinge loss is just piecewise linear,
so that its "curvature energy" is concentrated in a single point. This is
substantially different from other popular loss functions considered in signal
estimation, e.g., the square or logistic loss, which are at least locally
strongly convex. It is therefore somewhat unexpected that we can still prove
very similar types of recovery guarantees for the hinge loss estimator, even in
the presence of strong noise. More specifically, our non-asymptotic error
bounds show that stable and robust reconstruction of the signal can be achieved
with the optimal oversampling rate in terms of the number of measurements.
Moreover, we permit a wide class of structural assumptions on the ground truth
signal, in the sense that it can belong to an arbitrary bounded convex set. The
proofs of our main results
rely on some recent advances in statistical learning theory due to Mendelson.
In particular, we invoke an adapted version of Mendelson's small ball method
that allows us to establish a quadratic lower bound on the error of the first
order Taylor approximation of the empirical hinge loss function.
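A minimal sketch of one way to solve such a hinge loss program: since the objective is piecewise linear, projected subgradient descent applies directly. This is not the authors' algorithm; the choice of the constraint set as a scaled l1-ball (one admissible bounded convex set), the step schedule, and all parameters below are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto the l1-ball of the given radius
    (sorting-based algorithm of Duchi et al.)."""
    if np.linalg.norm(v, 1) <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]          # magnitudes sorted in descending order
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def hinge_loss_recovery(A, y, radius, n_iter=2000, step=0.1):
    """Projected subgradient descent for
        min_x (1/m) * sum_i max(0, 1 - y_i * <a_i, x>)  s.t.  ||x||_1 <= radius,
    i.e. the empirical hinge loss with the constraint set taken as an l1-ball."""
    m, n = A.shape
    x = np.zeros(n)
    for t in range(1, n_iter + 1):
        margins = y * (A @ x)
        # a subgradient of the hinge loss: -y_i * a_i on samples with margin < 1
        g = -(A.T @ (y * (margins < 1))) / m
        x = project_l1_ball(x - (step / np.sqrt(t)) * g, radius)
    return x
```

For instance, with A having i.i.d. standard Gaussian rows and y = np.sign(A @ x0 + noise) for a sparse ground truth x0, the returned iterate approximates the hinge loss minimizer over the chosen ball; the step size and iteration count are untuned defaults.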
Dependent Nonparametric Bayesian Group Dictionary Learning for Online Reconstruction of Dynamic MR Images
In this paper, we introduce a dictionary learning based approach applied to
the problem of real-time reconstruction of MR image sequences that are highly
undersampled in k-space. Unlike traditional dictionary learning, our method
integrates both global and patch-wise (local) sparsity information and
incorporates a priori information into the reconstruction process. Moreover,
we use a Dependent Hierarchical Beta-process as the prior for the group-based
dictionary learning, which adaptively infers the dictionary size and the
sparsity of each patch, and ensures that similar patches are represented by
similar dictionary atoms. An efficient numerical algorithm based on
the alternating direction method of multipliers (ADMM) is also presented.
Through extensive experiments we show that the proposed method achieves
superior reconstruction quality compared to other state-of-the-art
dictionary-learning-based methods.
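The abstract names ADMM as its numerical workhorse. The full model, with dependent hierarchical beta-process priors over group dictionaries, does not fit a short snippet, so the sketch below only illustrates the kind of ADMM splitting such reconstructions build on, applied to a generic l1-regularized sparse-coding subproblem; the dictionary D, data b, and parameters lam and rho are hypothetical placeholders.

```python
import numpy as np

def admm_sparse_code(D, b, lam=0.1, rho=1.0, n_iter=100):
    """ADMM for the generic sparse-coding subproblem
        min_z 0.5 * ||D z - b||^2 + lam * ||z||_1,
    via the splitting z = w with scaled dual variable u."""
    n = D.shape[1]
    # the z-update solves the same ridge system every iteration,
    # so factor (D^T D + rho*I) once up front
    L = np.linalg.cholesky(D.T @ D + rho * np.eye(n))
    Dtb = D.T @ b
    z, w, u = (np.zeros(n) for _ in range(3))
    for _ in range(n_iter):
        rhs = Dtb + rho * (w - u)
        z = np.linalg.solve(L.T, np.linalg.solve(L, rhs))        # ridge update
        v = z + u
        w = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + z - w                                            # dual ascent on z = w
    return w
```

Each iteration alternates an exact ridge-type z-update, a closed-form soft-thresholding w-update, and a dual update enforcing the consensus constraint z = w; this is the standard ADMM pattern, while the paper's actual updates involve its beta-process model and differ in detail.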
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1