Separable dictionary learning for convolutional sparse coding via split updates
The increasing ubiquity of Convolutional Sparse Representation techniques for several image processing
tasks (such as object recognition and classification, as well as image denoising) has recently
sparked interest in the use of separable 2D dictionary filter banks (as alternatives to standard nonseparable
dictionaries) for efficient Convolutional Sparse Coding (CSC) implementations. However,
existing methods approximate a set of K non-separable filters via a linear combination of R (R << K)
separable filters, which places an upper bound on the quality of the separable representation. Furthermore,
this implies that the whole set of non-separable filters must be learned first, and only then can the
separable set be computed, which is suboptimal from a computational perspective.
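The appeal of separable filters comes from a basic identity: a rank-1 2D filter is the outer product of two 1D filters, so a 2D convolution can be replaced by two cheap 1D passes. The following numpy/scipy sketch (illustrative only; the filter sizes and data are arbitrary and not taken from the paper) verifies this equivalence:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
v = rng.standard_normal(7)            # 1D vertical filter (illustrative)
h = rng.standard_normal(7)            # 1D horizontal filter (illustrative)
F = np.outer(v, h)                    # rank-1 (separable) 7x7 filter
x = rng.standard_normal((64, 64))     # test image

# Direct 2D convolution: O(n^2) multiplies per output pixel for an n x n filter
y_full = convolve2d(x, F, mode='same')

# Separable implementation: two 1D passes, O(2n) multiplies per output pixel
y_sep = convolve2d(convolve2d(x, v[:, None], mode='same'),
                   h[None, :], mode='same')

print(np.allclose(y_full, y_sep))     # agrees up to floating-point error
```

For a 7x7 filter this cuts the per-pixel cost from 49 multiplies to 14, which is the efficiency motivation behind learning separable dictionary filters directly.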
In this context, the purpose of the present work is to propose a method to directly learn a set of K
separable dictionary filters from a given image training set by drawing ideas from standard Convolutional
Dictionary Learning (CDL) methods. We show that the separable filters obtained by the proposed
method match the performance of an equivalent number of non-separable filters. Furthermore, the
proposed learning method is shown to be substantially faster than a state-of-the-art
non-separable CDL method when either the image training set or the filter set is large. The method and
results presented here have been published [1] at the 2018 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP 2018). Furthermore, a preliminary approach (mentioned at the
end of Chapter 2) was also published at ICASSP 2017 [2].
The document is organized as follows. Chapter 1 introduces the problem of interest
and outlines the scope of this work. Chapter 2 provides the reader with a brief summary of the relevant
literature in optimization, CDL and previous use of separable filters. Chapter 3 presents the details of
the proposed method and some implementation highlights. Chapter 4 reports the computational
results obtained through several simulations. Chapter 5 summarizes the results and draws some final
conclusions.
Joint Sensing Matrix and Sparsifying Dictionary Optimization for Tensor Compressive Sensing.
Tensor compressive sensing (TCS) is a multidimensional framework of compressive sensing (CS), and it is
advantageous in terms of reducing the amount of storage, easing
hardware implementations, and preserving multidimensional
structures of signals in comparison to a conventional CS system.
In a TCS system, instead of using a random sensing matrix and
a predefined dictionary, the average-case performance can be
further improved by employing an optimized multidimensional
sensing matrix and a learned multilinear sparsifying dictionary.
In this paper, we propose an approach that jointly optimizes
the sensing matrix and dictionary for a TCS system. For the
sensing matrix design in TCS, an extended separable approach
with a closed form solution and a novel iterative nonseparable
method are proposed when the multilinear dictionary is fixed.
In addition, a multidimensional dictionary learning method that
takes advantage of the multidimensional structure is derived,
and the influence of sensing matrices is taken into account in the
learning process. A joint optimization is achieved via alternately
iterating the optimization of the sensing matrix and dictionary.
Numerical experiments using both synthetic data and real images
demonstrate the superiority of the proposed approach.
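The storage and hardware advantages claimed for TCS rest on a standard identity: sensing a 2D signal with small per-mode matrices is equivalent to one large Kronecker-structured sensing matrix acting on the vectorized signal. A minimal numpy sketch (dimensions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m1, m2 = 8, 8, 4, 4           # signal and measurement sizes (illustrative)
A1 = rng.standard_normal((m1, n1))    # mode-1 sensing matrix
A2 = rng.standard_normal((m2, n2))    # mode-2 sensing matrix
X = rng.standard_normal((n1, n2))     # 2D signal

# Separable (tensor) sensing: only the small per-mode matrices are stored
Y = A1 @ X @ A2.T

# Equivalent conventional CS with one large Kronecker-structured matrix,
# using vec(A1 X A2^T) = (A2 kron A1) vec(X) with column-major vectorization
y_vec = np.kron(A2, A1) @ X.flatten(order='F')

print(np.allclose(Y.flatten(order='F'), y_vec))
```

Here the separable form stores m1*n1 + m2*n2 = 64 entries instead of m1*m2*n1*n2 = 1024 for the full matrix, which illustrates the storage reduction the abstract refers to.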
Learning Co-Sparse Analysis Operators with Separable Structures
In the co-sparse analysis model, a set of filters is applied to a signal from
the signal class of interest, yielding sparse filter responses. As such, it
may serve as a prior in inverse problems, or for structural analysis of signals
that are known to belong to the signal class. The more the model is adapted to
the class, the more reliable it is for these purposes. The task of learning
such operators for a given class is therefore a crucial problem. In many
applications, it is also required that the filter responses are obtained in a
timely manner, which can be achieved by filters with a separable structure. Not
only can operators of this sort be efficiently used for computing the filter
responses, but they also have the advantage that fewer training samples are
required to obtain a reliable estimate of the operator. The first contribution
of this work is to give theoretical evidence for this claim by providing an
upper bound for the sample complexity of the learning process. The second is a
stochastic gradient descent (SGD) method designed to learn an analysis operator
with separable structures, which includes a novel and efficient step size
selection rule. Numerical experiments are provided that link the sample
complexity to the convergence speed of the SGD algorithm.
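The analysis model described above can be made concrete with a classical hand-designed operator (this example is illustrative and is not the learned operator of the paper): a first-order finite-difference operator acts as a co-sparse analysis operator for piecewise-constant signals, since most of its filter responses are exactly zero.

```python
import numpy as np

# A piecewise-constant signal with two jumps (illustrative)
x = np.concatenate([np.full(40, 1.0), np.full(35, -2.0), np.full(25, 0.5)])
n = x.size

# First-order difference operator Omega: (n-1) x n, rows compute x[i+1] - x[i]
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
r = Omega @ x                 # analysis (filter) responses

print(np.count_nonzero(r))    # only the two jump locations respond
```

Learning replaces this hand-designed Omega with an operator adapted to the signal class; the separable structure discussed in the abstract additionally lets such an operator be applied to images via two small matrix products instead of one large one.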