Learning Co-Sparse Analysis Operators with Separable Structures
In the co-sparse analysis model, a set of filters is applied to a signal from
the signal class of interest, yielding sparse filter responses. As such, it
may serve as a prior in inverse problems, or for structural analysis of signals
that are known to belong to the signal class. The more the model is adapted to
the class, the more reliable it is for these purposes. The task of learning
such operators for a given class is therefore a crucial problem. In many
applications, it is also required that the filter responses are obtained in a
timely manner, which can be achieved by filters with a separable structure. Not
only can operators of this sort be efficiently used for computing the filter
responses, but they also have the advantage that fewer training samples are
required to obtain a reliable estimate of the operator. The first contribution
of this work is to give theoretical evidence for this claim by providing an
upper bound for the sample complexity of the learning process. The second is a
stochastic gradient descent (SGD) method designed to learn an analysis operator
with separable structures, which includes a novel and efficient step size
selection rule. Numerical experiments are provided that link the sample
complexity to the convergence speed of the SGD algorithm.
Comment: 11 pages, double column, 4 figures, 3 tables
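As an illustrative sketch of the idea (not the paper's exact algorithm or its step-size selection rule), the following code learns a separable analysis operator by SGD. The separable structure means the operator acts on an n-by-n patch X as A X Bᵀ with small factors A and B, so filter responses are cheap to compute. The smooth sparsity surrogate, the diminishing learning-rate schedule, and the row normalization are all simplifying assumptions for this example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsity_penalty_grad(r, nu=0.1):
    """Gradient of the smooth sparsity surrogate sum(log(1 + r^2/nu))."""
    return 2.0 * r / (nu + r**2)

def sgd_separable_operator(patches, m=10, n=8, n_iter=2000, lr0=0.1):
    """Learn a separable analysis operator by SGD (illustrative sketch).

    patches: (n*n, N) array of vectorized n-by-n signal patches.
    A, B: m-by-n factors; the full operator acts as A @ X @ B.T on a patch X,
    which is equivalent to applying kron(A, B) to the vectorized patch.
    """
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((m, n))
    for t in range(n_iter):
        X = patches[:, rng.integers(patches.shape[1])].reshape(n, n)
        R = A @ X @ B.T                      # separable filter responses
        G = sparsity_penalty_grad(R)
        gA = G @ B @ X.T                     # chain rule: d penalty / dA
        gB = G.T @ A @ X                     # chain rule: d penalty / dB
        lr = lr0 / (1.0 + 0.01 * t)          # simple diminishing step size
        A -= lr * gA
        B -= lr * gB
        # keep rows on the unit sphere to avoid the trivial zero operator
        A /= np.linalg.norm(A, axis=1, keepdims=True)
        B /= np.linalg.norm(B, axis=1, keepdims=True)
    return A, B
```

The separable factorization is what drives both benefits claimed in the abstract: applying A and B costs O(mn·n) per patch instead of O(m²n²) for a full operator, and the operator has only 2mn free parameters instead of m²n², which is the intuition behind the lower sample complexity.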
Convolutional Dictionary Learning: Acceleration and Convergence
Convolutional dictionary learning (CDL or sparsifying CDL) has many
applications in image processing and computer vision. There has been growing
interest in developing efficient algorithms for CDL, mostly relying on the
augmented Lagrangian (AL) method or the variant alternating direction method of
multipliers (ADMM). When their parameters are properly tuned, AL methods have
shown fast convergence in CDL. However, the parameter tuning process is not
trivial due to its data dependence and, in practice, the convergence of AL
methods depends on the AL parameters for nonconvex CDL problems. To mitigate
these problems, this paper proposes a new, practically feasible, and convergent
Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The
BPG-M-based CDL is investigated with different block updating schemes and
majorization matrix designs, and further accelerated by incorporating some
momentum coefficient formulas and restarting techniques. All of the methods
investigated incorporate a boundary artifacts removal (or, more generally,
sampling) operator in the learning model. Numerical experiments show that,
without needing any parameter tuning process, the proposed BPG-M approach
converges more stably to desirable solutions of lower objective values than the
existing state-of-the-art ADMM algorithm and its memory-efficient variant do.
Compared to the ADMM approaches, the BPG-M method using a multi-block updating
scheme is particularly useful for single-threaded CDL on large datasets, due to
its lower memory requirement and the absence of polynomial computational
complexity. Image denoising experiments show that, for relatively strong
additive white Gaussian noise, the filters learned by BPG-M-based CDL
outperform those trained by the ADMM approach.
Comment: 21 pages, 7 figures, submitted to IEEE Transactions on Image
Processing
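To make the core idea concrete, here is a minimal sketch of one majorized proximal-gradient update, applied to the sparse-coding subproblem that appears inside CDL. The diagonal majorizer diag(|DᵀD|·1) is one simple design satisfying M ⪰ DᵀD (by diagonal dominance); with it, each step minimizes a separable surrogate exactly, so no step-size or AL-parameter tuning is needed. The problem sizes and the λ value are illustrative choices, not the paper's setup:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def majorized_prox_grad(D, y, lam=0.1, n_iter=100):
    """Majorized proximal-gradient iterations for
        min_z  0.5 * ||y - D z||^2 + lam * ||z||_1.

    Uses the diagonal majorizer M = diag(|D^T D| @ 1), which dominates
    D^T D, so each surrogate step is solved in closed form with no
    step-size tuning.
    """
    M = np.abs(D.T @ D) @ np.ones(D.shape[1])   # diagonal majorizer entries
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        z = soft(z - grad / M, lam / M)         # separable prox step
    return z
```

BPG-M cycles updates of this form over blocks of variables (filters and sparse codes), with the abstract's momentum coefficients and restarting added on top for acceleration; this sketch shows only the basic majorized step for a single block.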
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
System Level Synthesis
This article surveys the System Level Synthesis framework, which presents a
novel perspective on constrained robust and optimal controller synthesis for
linear systems. We show how SLS shifts the controller synthesis task from the
design of a controller to the design of the entire closed-loop system, and
highlight the benefits of this approach in terms of scalability and
transparency. We emphasize two particular applications of SLS, namely
large-scale distributed optimal control and robust control. In the case of
distributed control, we show how SLS allows for localized controllers to be
computed, extending robust and optimal control methods to large-scale systems
under practical and realistic assumptions. In the case of robust control, we
show how SLS allows for novel design methodologies that, for the first time,
quantify the degradation in performance of a robust controller due to model
uncertainty -- such transparency is key in allowing robust control methods to
interact, in a principled way, with modern techniques from machine learning and
statistical inference. Throughout, we emphasize practical and efficient
computational solutions, and demonstrate our methods on easy-to-understand case
studies.
Comment: To appear in Annual Reviews in Control
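The shift from designing a controller to designing the closed-loop system can be sketched as follows. For state feedback u = Kx on x⁺ = Ax + Bu, the closed-loop responses from disturbance to state and input are Φx and Φu, and SLS characterizes exactly which response pairs are achievable by a linear affine constraint. This toy example (the matrices A, B, K are made up for illustration, and the responses are stored 0-indexed rather than the 1-indexed convention of the SLS literature) constructs the responses for a given K and checks the achievability constraint:

```python
import numpy as np

def closed_loop_responses(A, B, K, T):
    """Closed-loop response maps of x+ = Ax + Bu with u = Kx.

    In the SLS viewpoint, the controller K is equivalently described by
    the disturbance-to-state and disturbance-to-input impulse responses:
        Phi_x[t] = (A + B K)^t,   Phi_u[t] = K Phi_x[t].
    """
    n = A.shape[0]
    Acl = A + B @ K
    Phi_x = [np.eye(n)]
    for _ in range(T - 1):
        Phi_x.append(Acl @ Phi_x[-1])
    Phi_u = [K @ P for P in Phi_x]
    return Phi_x, Phi_u

def sls_achievable(A, B, Phi_x, Phi_u, tol=1e-9):
    """Check the SLS achievability constraint:
       Phi_x[0] = I  and  Phi_x[t+1] = A Phi_x[t] + B Phi_u[t]."""
    ok = np.allclose(Phi_x[0], np.eye(A.shape[0]), atol=tol)
    for t in range(len(Phi_x) - 1):
        ok = ok and np.allclose(Phi_x[t + 1],
                                A @ Phi_x[t] + B @ Phi_u[t], atol=tol)
    return ok
```

The key point is the converse: any response pair satisfying the affine constraint is achievable by some controller, so optimal control problems can be posed directly over (Φx, Φu), where structural constraints such as locality become convex.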