Parallelisation of greedy algorithms for compressive sensing reconstruction
Compressive Sensing (CS) is a technique which allows a signal to be compressed at the same
time as it is captured. The process of capturing and simultaneously compressing the signal is
represented as linear sampling, which can encompass a variety of physical processes or signal
processing. Instead of explicitly identifying redundancies in the source signal, CS relies on the
property of sparsity in order to reconstruct the compressed signal. While linear sampling is
much less burdensome than conventional compression, this saving is more than offset by the high
computational cost of reconstructing a signal that has been captured using CS. Even when
using some of the fastest reconstruction techniques, known as greedy pursuits, reconstruction
of large problems can pose a significant burden, consuming a great deal of memory as well as
compute time.
Parallel computing is the foundation of the field of High Performance Computing (HPC).
Modern supercomputers are generally composed of large clusters of standard servers, with a
dedicated low-latency high-bandwidth interconnect network. On such a cluster, an appropriately
written program can harness vast quantities of memory and computational power. However, in
order to exploit a parallel compute resource, an algorithm usually has to be redesigned from
the ground up. In this thesis I describe the development of parallel variants of two algorithms
commonly used in CS reconstruction, Matching Pursuit (MP) and Orthogonal Matching Pursuit
(OMP), resulting in the new distributed compute algorithms DistMP and DistOMP. I present
the results from experiments showing how DistMP and DistOMP can utilise a compute cluster
to solve CS problems much more quickly than a single computer could alone. Speed-up of as
much as a factor of 76 is observed with DistMP when utilising 210 workers across 14 servers,
compared to a single worker. Finally, I demonstrate how DistOMP can solve a problem with a
429 GB equivalent sampling matrix in as little as 62 minutes using a 16-node compute cluster.

Funded by an ICASE award from the Engineering and Physical Sciences Research Council, with sponsorship provided by Thales Research and Technology.
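For readers unfamiliar with greedy pursuits, the following minimal NumPy sketch of serial Matching Pursuit shows the iteration that a distributed variant has to split across workers; the sensing matrix A, measurement vector y and stopping rule are illustrative assumptions, not the thesis's DistMP/DistOMP implementation.

```python
import numpy as np

def matching_pursuit(A, y, n_iters=100, tol=1e-6):
    """Minimal serial Matching Pursuit: greedily build a sparse x with y ~= A @ x."""
    residual = y.copy()
    x = np.zeros(A.shape[1])
    col_norms = np.linalg.norm(A, axis=0)          # atom norms, assumed non-zero
    for _ in range(n_iters):
        correlations = (A.T @ residual) / col_norms
        k = int(np.argmax(np.abs(correlations)))   # best-matching atom
        coeff = (A[:, k] @ residual) / col_norms[k] ** 2
        x[k] += coeff                              # update the sparse estimate
        residual -= coeff * A[:, k]                # subtract that atom's contribution
        if np.linalg.norm(residual) < tol:
            break
    return x
```

For large problems the correlation step A.T @ residual dominates the cost, which makes it the natural target for distribution across a cluster.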
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
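As a rough illustration of what such a sketch looks like in practice, the snippet below averages random Fourier features over the data in a single pass; the Gaussian frequency draw and the dimensions are assumptions made for illustration, not the paper's tuned sketching procedure.

```python
import numpy as np

def compute_sketch(X, Omega):
    """Empirical sketch: averaged random Fourier features (generalised moments).

    X     : (N, d) data matrix (in practice streamed or split across machines)
    Omega : (d, m) random frequency matrix defining the sketching operator
    Returns an m-dimensional complex sketch of the empirical distribution.
    """
    return np.exp(1j * (X @ Omega)).mean(axis=0)

# Illustrative use: frequencies drawn i.i.d. Gaussian (the paper discusses better heuristics).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
Omega = rng.normal(size=(5, 200))
z = compute_sketch(X, Omega)
```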
Turbo Bayesian Compressed Sensing
Compressed sensing (CS) theory specifies a new signal acquisition approach, potentially allowing the acquisition of signals at a much lower data rate than the Nyquist sampling rate. In CS, the signal is not directly acquired but reconstructed from a few measurements. One of the key problems in CS is how to recover the original signal from measurements in the presence of noise. This dissertation addresses signal reconstruction problems in CS. First, a feedback structure and signal recovery algorithm, orthogonal pruning pursuit (OPP), is proposed to exploit prior knowledge to reconstruct the signal in the noise-free situation. To handle the noise, a noise-aware signal reconstruction algorithm based on Bayesian Compressed Sensing (BCS) is developed. Moreover, a novel Turbo Bayesian Compressed Sensing (TBCS) algorithm is developed for joint signal reconstruction by exploiting both spatial and temporal redundancy. Then, the TBCS algorithm is applied to a UWB positioning system for achieving mm-accuracy with low-sampling-rate ADCs. Finally, hardware implementation of BCS signal reconstruction on FPGAs and GPUs is investigated. Implementation on GPUs and FPGAs of parallel Cholesky decomposition, which is a key component of BCS, is explored. Simulation results on software and hardware have demonstrated that OPP and TBCS outperform previous approaches, with UWB positioning accuracy improved by 12.8x. The accelerated computation helps enable real-time application of this work.
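As context for the hardware discussion, the Cholesky factorisation mentioned above typically appears in the posterior update of Bayesian Compressed Sensing; a minimal CPU sketch is given below, with the variable names, the Gaussian noise model and the fixed hyperparameters all being assumptions rather than the dissertation's TBCS implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def bcs_posterior_mean(Phi, y, alpha, sigma2):
    """Posterior mean of a Bayesian CS model, solved via Cholesky factorisation.

    Phi    : (m, n) sensing matrix
    y      : (m,) noisy measurements
    alpha  : (n,) prior precisions on the signal coefficients
    sigma2 : noise variance
    """
    # Posterior precision: Sigma^{-1} = diag(alpha) + Phi^T Phi / sigma2
    precision = np.diag(alpha) + (Phi.T @ Phi) / sigma2
    chol = cho_factor(precision)                  # the Cholesky step targeted for GPU/FPGA acceleration
    return cho_solve(chol, Phi.T @ y / sigma2)    # mean of the Gaussian posterior
```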
Accelerated deconvolution of radio interferometric images using orthogonal matching pursuit and graphics hardware
Deconvolution of native radio interferometric images constitutes a major computational component of the radio astronomy imaging process. An efficient and robust deconvolution operation is essential for reconstruction of the true sky signal from measured correlator data. Traditionally, radio astronomers have mostly used the CLEAN algorithm, and variants thereof. However, the techniques of compressed sensing provide a mathematically rigorous framework within which deconvolution of radio interferometric images can be implemented. We present an accelerated implementation of the orthogonal matching pursuit (OMP) algorithm (a compressed sensing method) that makes use of graphics processing unit (GPU) hardware, and show significant accuracy improvements over the standard CLEAN. In particular, we show that OMP correctly identifies more sources than CLEAN, identifying up to 82% of the sources in 100 test images, while CLEAN only identifies up to 61% of the sources. In addition, the residual after source extraction is 2.7 times lower for OMP than for CLEAN. Furthermore, the GPU implementation of OMP runs around 23 times faster than a 4-core CPU implementation.
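For reference, a plain CPU version of OMP is sketched below; it is not the paper's GPU code, and the dictionary A, measurement vector y and fixed sparsity level are illustrative assumptions. The correlation and least-squares steps, which dominate the cost, are the natural candidates for GPU offloading.

```python
import numpy as np

def omp(A, y, sparsity):
    """Minimal CPU reference for Orthogonal Matching Pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.T @ residual)))     # atom most correlated with the residual
        if k not in support:
            support.append(k)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x
```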
Computational approaches in compressed sensing
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014.

This thesis aims to provide a summary of computational approaches to solving the
Compressed Sensing problem. The theoretical problem of solving systems of linear
equations has long been investigated in academic literature. A relatively new field,
Compressed Sensing is an application of this problem, one that changes the way in which we obtain and process signals. Under the assumption of sparse signals, Compressed Sensing is able to recover signals sampled at a rate much lower than the Shannon/Nyquist sampling rate. The primary goal of this thesis is to describe the major algorithms currently used in the Compressed Sensing problem. This is done to provide the reader with sufficient up-to-date knowledge of current approaches, as well as their means of implementation on central processing units (CPUs) and graphics processing units (GPUs), when considering computational concerns such as computational time, storage requirements and parallelisability.
Deep Networks for Compressed Image Sensing
The compressed sensing (CS) theory has been successfully applied to image
compression in the past few years as most image signals are sparse in a certain
domain. Several CS reconstruction models have been recently proposed and
obtained superior performance. However, there still exist two important
challenges within the CS theory. The first one is how to design a sampling
mechanism to achieve an optimal sampling efficiency, and the second one is how
to perform the reconstruction to achieve the highest-quality signal recovery.
In this paper, we try to deal with these two problems with a
deep network. First, we train the sampling matrix through network training
instead of using a traditional, manually designed one, which is better suited
to our deep-network-based reconstruction process. Then, we propose a deep network
to recover the image, which imitates traditional compressed sensing
reconstruction processes. Experimental results demonstrate that our deep
network-based CS reconstruction method offers a very significant quality
improvement over state-of-the-art methods.

Comment: This paper has been accepted by the IEEE International Conference on Multimedia and Expo (ICME) 201
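A toy PyTorch sketch of the idea of jointly learning the sampling matrix and a reconstruction network is shown below; the block size, measurement ratio and layer structure are invented for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class LearnedCS(nn.Module):
    """Learned linear sampling followed by a small reconstruction network (toy sketch)."""

    def __init__(self, block_dim=1089, n_measure=109):  # e.g. 33x33 blocks, ~10% sampling (assumed)
        super().__init__()
        self.sample = nn.Linear(block_dim, n_measure, bias=False)  # learned sampling matrix
        self.recon = nn.Sequential(
            nn.Linear(n_measure, block_dim),   # initial estimate from the measurements
            nn.ReLU(),
            nn.Linear(block_dim, block_dim),   # refinement
        )

    def forward(self, x_blocks):               # x_blocks: (batch, block_dim) vectorised image blocks
        y = self.sample(x_blocks)              # compressive measurements
        return self.recon(y)                   # reconstructed blocks

# Training jointly optimises sampling and reconstruction, e.g.:
#   loss = torch.nn.functional.mse_loss(model(x_blocks), x_blocks)
```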
Low-Cost Compressive Sensing for Color Video and Depth
A simple and inexpensive (low-power and low-bandwidth) modification is made
to a conventional off-the-shelf color video camera, from which we recover
multiple color frames for each of the original measured frames, and each of
the recovered frames can be focused at a different depth. The recovery of
multiple frames for each measured frame is made possible via high-speed coding,
manifested via translation of a single coded aperture; the inexpensive
translation is constituted by mounting the binary code on a piezoelectric
device. To simultaneously recover depth information, a liquid lens is
modulated at high speed, via a variable voltage. Consequently, during the
aforementioned coding process, the liquid lens allows the camera to sweep the
focus through multiple depths. In addition to designing and implementing the
camera, fast recovery is achieved by an anytime algorithm exploiting the
group-sparsity of wavelet/DCT coefficients.

Comment: 8 pages, CVPR 201
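Group-sparsity of the kind mentioned above is commonly enforced with a group soft-thresholding (block l2,1 proximal) step inside an iterative solver; the snippet below is a generic sketch of that operator, with the coefficient grouping purely illustrative and not the paper's specific anytime algorithm.

```python
import numpy as np

def group_soft_threshold(coeffs, groups, lam):
    """Proximal operator of the l2,1 (group-sparsity) penalty.

    coeffs : 1-D array of wavelet/DCT coefficients
    groups : list of index arrays, one per group (grouping here is illustrative)
    lam    : threshold controlling how aggressively whole groups are switched off
    """
    out = coeffs.copy()
    for idx in groups:
        norm = np.linalg.norm(coeffs[idx])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[idx] = scale * coeffs[idx]          # each group shrinks towards zero as a block
    return out
```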