Learning incoherent dictionaries for sparse approximation using iterative projections and rotations
This work was supported by the Queen Mary University of London School Studentship, the EU FET-Open project FP7-ICT-225913-SMALL (Sparse Models, Algorithms and Learning for Large-scale data), and a Leadership Fellowship from the UK Engineering and Physical Sciences Research Council (EPSRC).
Learning Incoherent Subspaces: Classification via Incoherent Dictionary Learning
In this article we present the supervised iterative projections and rotations (S-IPR) algorithm, a method for learning discriminative incoherent subspaces from data. We derive S-IPR as a supervised extension of our previously proposed iterative projections and rotations (IPR) algorithm for incoherent dictionary learning, and we employ it to learn incoherent subspaces that model signals belonging to different classes. We test our method as a feature transform for supervised classification, first by visualising transformed features from a synthetic dataset and from the ‘iris’ dataset, then by using the resulting features in a classification experiment.
Learning incoherent subspaces for classification via supervised iterative projections and rotations
In this paper we present the supervised iterative projections and rotations (S-IPR) algorithm, a method to optimise a set of discriminative subspaces for supervised classification. We show how the proposed technique builds on our previous unsupervised iterative projections and rotations (IPR) algorithm for incoherent dictionary learning, and how projecting the features onto the learned subspaces can be employed as a feature transform in the context of classification. Numerical experiments on the FISHERIRIS and USPS datasets, and a comparison with the PCA and LDA feature transform methods, demonstrate the value of the proposed technique and its potential as a tool for machine learning. © 2013 IEEE.
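As an illustration of the pipeline these two abstracts describe, the sketch below uses per-class orthonormal bases as stand-ins for the subspaces S-IPR would learn (the abstracts do not give its update rules, so plain per-class SVD bases are an assumption here), and shows both the feature transform by projection and classification by smallest reconstruction residual.

import numpy as np

def class_subspaces(X, y, dim):
    """Fit one orthonormal basis per class (a stand-in for S-IPR's output)."""
    bases = {}
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        # Left singular vectors of the centred class data span its subspace.
        U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
        bases[c] = U[:, :dim]              # (n_features, dim), orthonormal columns
    return bases

def transform(X, bases):
    """Feature transform: concatenate projections onto every class subspace."""
    return np.hstack([X @ B @ B.T for B in bases.values()])

def classify(X, bases):
    """Assign each sample to the class whose subspace reconstructs it best."""
    labels = list(bases)
    res = np.stack([np.linalg.norm(X - X @ B @ B.T, axis=1)
                    for B in bases.values()], axis=1)
    return np.array([labels[i] for i in res.argmin(axis=1)])

# Toy usage on two classes living near different 2-D subspaces of R^5.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 5))
X1 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 5))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
bases = class_subspaces(X, y, dim=2)
features = transform(X, bases)           # input to any downstream classifier
print((classify(X, bases) == y).mean())  # residual-based training accuracy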
Sparse Approximation and Dictionary Learning with Applications to Audio Signals
PhD thesis.
Over-complete transforms have recently become the focus of a wealth of research in signal processing, machine learning, statistics and related fields. Their great modelling flexibility makes it possible to find sparse representations and approximations of data that in turn prove to be very efficient in a wide range of applications. Sparse models express signals as linear combinations of a few basis functions, called atoms, taken from a so-called dictionary. Finding the optimal dictionary from a set of training signals of a given class is the objective of dictionary learning and the main focus of this thesis. The experimental evidence presented here focuses on the processing of audio signals, and the role of sparse algorithms in audio applications is accordingly highlighted.
The first main contribution of this thesis is the development of a pitch-synchronous transform where the frame-by-frame analysis of audio data is adapted so that each frame analysing a periodic signal contains an integer number of periods. This algorithm provides a technique for adapting transform parameters to the audio signal being analysed; it is shown to improve the sparsity of the representation compared to a non-pitch-synchronous approach, and it is further evaluated in the context of source separation by binary masking.
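The abstract does not spell out the transform itself, but the framing idea can be illustrated with a minimal sketch: choose the frame length as an integer multiple of the estimated pitch period, so that a DFT of each frame concentrates energy in few bins. The pitch value and the non-overlapping hop policy below are assumptions, not the thesis's actual algorithm.

import numpy as np

def pitch_synchronous_frames(x, sr, f0, periods_per_frame=4):
    """Split signal x into frames spanning exactly `periods_per_frame` pitch periods."""
    period = int(round(sr / f0))             # samples per pitch period
    frame_len = periods_per_frame * period   # an integer number of periods
    n_frames = len(x) // frame_len
    return x[:n_frames * frame_len].reshape(n_frames, frame_len)

# A harmonic test tone: every frame holds whole periods, so the DFT of each
# frame concentrates energy on few bins, i.e. a sparser representation.
sr, f0 = 16000, 200.0
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
frames = pitch_synchronous_frames(x, sr, f0)
spectrum = np.abs(np.fft.rfft(frames, axis=1))   # energy sits on harmonic bins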
A second main contribution is the development of a novel model, and an associated algorithm, for dictionary learning of convolved signals, where the observed variables are sparsely approximated by the atoms contained in a convolved dictionary. An algorithm is devised to learn the impulse response applied to the dictionary, and experimental results on synthetic data show the superior approximation performance of the proposed method compared to a state-of-the-art dictionary learning algorithm.
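The following sketch illustrates only the forward model implied by this description: every atom of a dictionary is filtered by a common impulse response, and observations are sparse combinations of the convolved atoms. The learning algorithm that recovers the impulse response is not reproduced here, and the specific filter used is a placeholder.

import numpy as np

rng = np.random.default_rng(0)
n, n_atoms, sparsity = 64, 128, 3

D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
h = np.array([1.0, 0.6, 0.3, 0.1])           # unknown impulse response (placeholder)

# Convolve every atom with h to obtain the convolved dictionary.
Dh = np.apply_along_axis(lambda a: np.convolve(a, h)[:n], 0, D)

# A sparse code selects a few convolved atoms to form the observation.
x = np.zeros(n_atoms)
x[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = Dh @ x                                    # observed signal under the model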
Finally, a third main contribution is the development of methods for learning dictionaries that are both well adapted to a training set of data and mutually incoherent. Two novel algorithms, namely the incoherent K-SVD and the iterative projections and rotations (IPR) algorithms, are introduced and compared to different techniques published in the literature in a sparse approximation context. The IPR algorithm in particular is shown to outperform the benchmark techniques in learning very incoherent dictionaries while maintaining a good signal-to-noise ratio of the representation.
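A minimal sketch of one plausible IPR-style iteration follows, assuming the two alternating steps suggested by the algorithm's name: a projection that bounds the off-diagonal entries of the dictionary's Gram matrix by a target coherence mu, and a rotation (an orthogonal Procrustes fit) that re-aligns the dictionary with the training data. These details are assumptions and will differ from the thesis's exact algorithm.

import numpy as np

def ipr_step(D, Y, X, mu):
    """One assumed IPR iteration: D is n x k, Y holds signals, X sparse codes."""
    n, k = D.shape
    # Projection: bound the mutual coherence via the Gram matrix.
    G = D.T @ D
    off = ~np.eye(k, dtype=bool)
    G[off] = np.clip(G[off], -mu, mu)
    # Factorise the projected Gram back into an n x k dictionary
    # (keep the n largest eigen-directions, then renormalise the atoms).
    w, V = np.linalg.eigh(G)
    D = (V[:, -n:] * np.sqrt(np.clip(w[-n:], 0.0, None))).T
    D /= np.linalg.norm(D, axis=0) + 1e-12
    # Rotation: orthogonal Procrustes fit of D X to the training data Y,
    # i.e. the orthogonal W minimising ||Y - W D X||_F.
    U, _, Vt = np.linalg.svd(Y @ X.T @ D.T)
    return (U @ Vt) @ D

# One iteration on random data (Y: signals, X: sparse codes, both assumed given).
rng = np.random.default_rng(0)
n, k, m = 16, 32, 100
D = rng.standard_normal((n, k)); D /= np.linalg.norm(D, axis=0)
Y, X = rng.standard_normal((n, m)), rng.standard_normal((k, m))
D = ipr_step(D, Y, X, mu=0.5)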
Fast Orthonormal Sparsifying Transforms Based on Householder Reflectors
Dictionary learning is the task of determining a data-dependent transform that yields a sparse representation of some observed data. The dictionary learning problem is non-convex, and is usually solved via computationally complex iterative algorithms. Furthermore, the resulting transforms generally lack structure that permits their fast application to data. To address this issue, this paper develops a framework for learning orthonormal dictionaries which are built from products of a few Householder reflectors. Two algorithms are proposed to learn the reflector coefficients: one that considers a sequential update of the reflectors, and one with a simultaneous update of all reflectors that imposes an additional internal orthogonality constraint. The proposed methods have low computational complexity and are shown to converge to local minimum points which can be described in terms of the spectral properties of the matrices involved. The resulting dictionaries balance computational complexity against the quality of the sparse representations by controlling the number of Householder reflectors in their product. Simulations of the proposed algorithms are shown in the image processing setting, where well-known fast transforms are available for comparison. The proposed algorithms achieve favorable reconstruction error and have the advantage of a fast implementation relative to classical, unstructured dictionaries.
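To make the structural claim concrete, the sketch below applies a product of Householder reflectors H_i = I - 2 v_i v_i^T (with unit v_i) to a vector in O(mn) operations, versus O(n^2) for a dense orthonormal matrix. The reflector vectors here are random placeholders; the paper's coefficient-learning algorithms are not reproduced.

import numpy as np

def apply_reflectors(V, x):
    """Apply H_m ... H_1 to x, where row i of V is the unit vector v_i."""
    for v in V:
        x = x - 2.0 * v * (v @ x)   # H x = x - 2 v (v^T x): O(n) per reflector
    return x

rng = np.random.default_rng(1)
n, m = 256, 8
V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # unit reflector vectors

x = rng.standard_normal(n)
y = apply_reflectors(V, x)
# The product of reflectors is orthonormal, so norms are preserved.
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))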
- …