Partitioned Compressive Sensing with Neighbor-Weighted Decoding
Compressive sensing has gained momentum in recent years as an exciting new theory in signal processing with several useful applications. It states that signals known to have a sparse representation may be encoded and later reconstructed using a small number of measurements, approximately proportional to the signal's sparsity rather than its size. This paper addresses a critical problem that arises when scaling compressive sensing to signals of large length: the time required for decoding becomes prohibitively long, and decoding is not easily parallelized. We describe a method for partitioned compressive sensing, by which we divide a large signal into smaller blocks that may be decoded in parallel. However, since this process requires a significant increase in the number of measurements needed for exact signal reconstruction, we focus on mitigating artifacts that arise due to partitioning in approximately reconstructed signals. Given an error-prone partitioned decoding, we use large-magnitude components that are detected with highest accuracy to influence the decoding of neighboring blocks, and call this approach neighbor-weighted decoding. We show that, for applications with a predefined error threshold, our method can be used in conjunction with partitioned compressive sensing to improve decoding speed, requiring fewer additional measurements than unweighted or locally-weighted decoding.
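The block-parallel decoding idea can be sketched with a toy example. The OMP decoder, the Gaussian sensing matrices, and all sizes below are illustrative assumptions, not the paper's exact algorithm (which adds neighbor weighting on top of this baseline):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily fit a k-sparse x with y ~= A @ x."""
    support, x = [], np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n_block, m_block, k, n_blocks = 64, 24, 2, 4

# A signal of length 256 that is k-sparse within each of 4 blocks.
x = np.zeros(n_block * n_blocks)
for b in range(n_blocks):
    idx = rng.choice(n_block, size=k, replace=False) + b * n_block
    x[idx] = rng.standard_normal(k)

# Each block gets its own small sensing matrix; decoding the blocks is
# embarrassingly parallel, which is the speed-up partitioning buys.
As = [rng.standard_normal((m_block, n_block)) / np.sqrt(m_block) for _ in range(n_blocks)]
x_hat = np.concatenate([
    omp(As[b], As[b] @ x[b * n_block:(b + 1) * n_block], k) for b in range(n_blocks)
])
```

Each block is decoded from 24 measurements instead of solving one coupled 256-dimensional problem; the cost, as the abstract notes, is that per-block recovery needs proportionally more measurements than joint recovery would.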
Evaluating Effect of Block Size in Compressed Sensing for Grayscale Images
Compressed sensing is an evolving methodology that enables sampling at sub-Nyquist rates while still providing decent signal reconstruction. During the last decade, reported works have suggested improving time efficiency by adopting Block-based Compressed Sensing (BCS) and improving reconstruction performance through new algorithms. A trade-off is required between time efficiency and reconstruction quality. In this paper we evaluate the significance of block size in BCS for improving reconstruction performance on grayscale images. Sampling based on a parameter variant of BCS [15] is followed by reconstruction through the Smoothed Projected Landweber (SPL) technique [16], which involves a Wiener smoothing filter and iterative hard thresholding. The BCS variant is used to evaluate the effect of block size on image reconstruction quality by carrying out extensive testing on 9200 images acquired from online resources provided by Caltech101 [6], the University of Granada [7] and Florida State University [8]. The experimentation showed some consistent results which can improve reconstruction performance in all BCS frameworks, including BCS-SPL [17] and its variants [19], [27]. Firstly, varying the block size (4x4, 8x8, 16x16, 32x32 and 64x64) changes the Peak Signal to Noise Ratio (PSNR) of reconstructed images by at least 1 dB and by as much as 16 dB. This challenges the common notion that bigger block sizes always result in better reconstruction performance. Secondly, the variation in reconstruction quality with changing block size depends mostly on the image's visual contents. Thirdly, images having similar visual contents, irrespective of size, e.g., those from the same category of Caltech101 [6], gave a majority vote for the same Optimum Block Size (OBS). These focused notes may help improve BCS-based image capturing in many existing applications.
For example, experimental results suggest using a block size of 8x8 or 16x16 to capture facial identity using BCS. Fourthly, the average processing time for BCS and reconstruction through SPL, with the Lapped transform of the Discrete Cosine Transform as the sparsifying basis, ranged from 300 milliseconds for block size 4x4 to 5 seconds for block size 64x64. Since the processing-time variation remains less than 5 seconds, selecting the OBS may not affect the time constraint in many applications. Analysis reveals that no particular block size is able to provide optimum reconstruction for all images with varying visual contents. Therefore, the selection of block size should be made specific to the particular type of application images, depending upon their visual contents.
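The role of block size can be made concrete with a minimal sketch of BCS-style sampling, where one shared Gaussian matrix senses every block. The 25% subrate, the sizes, and the synthetic image are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def bcs_sample(img, B, subrate=0.25, seed=0):
    """Block-based CS: sense every BxB block with one shared Gaussian matrix."""
    rng = np.random.default_rng(seed)
    n = B * B
    m = max(1, round(subrate * n))          # measurements per block
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    H, W = img.shape                        # assumes H and W are divisible by B
    blocks = (img.reshape(H // B, B, W // B, B)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, n))           # one row per vectorised BxB block
    return blocks @ Phi.T, Phi

img = np.random.default_rng(1).random((64, 64))   # stand-in grayscale image
y8, _ = bcs_sample(img, 8)     # 64 blocks x 16 measurements each
y16, _ = bcs_sample(img, 16)   # 16 blocks x 64 measurements each
```

At a fixed subrate the total number of measurements is identical for every block size; only how they are grouped changes, which is why block size influences reconstruction quality and runtime rather than the sampling rate itself.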
Coded Aperture Hyperspectral Image Reconstruction
This article belongs to the Special Issue Computational Spectral Imaging. In this work, we study and analyze the reconstruction of hyperspectral images that are sampled with a CASSI device. The sensing procedure was modeled with the help of CS theory, which enabled efficient mechanisms for reconstructing the hyperspectral images from their compressive measurements. In particular, we considered and compared four different types of estimation algorithms: OMP, GPSR, LASSO, and IST. Furthermore, the large dimensions of hyperspectral images required the implementation of a practical block CASSI model to reconstruct the images with an acceptable delay and affordable computational cost. In order to account for the particularities of the block model and the dispersive effects in the CASSI-like sensing procedure, the problem was reformulated, as was the construction of the variables involved. For this practical CASSI setup, we evaluated the performance of the overall system by considering the aforementioned algorithms and the different factors that impacted the reconstruction procedure. Finally, the obtained results were analyzed and discussed from a practical perspective. This work was funded by the Xunta de Galicia (by Grant ED431C 2020/15 and Grant ED431G 2019/01 to support the Centro de Investigación de Galicia “CITIC”), the Agencia Estatal de Investigación of Spain (by Grants RED2018-102668-T and PID2019-104958RB-C42), and the ERDF funds of the EU (FEDER Galicia 2014-2020 and AEI/FEDER Programs, UE).
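Of the four estimators named, IST is the simplest to sketch: a soft-thresholded gradient loop for a generic y = Ax model. The sizes, the value of lam, and the random test problem below are illustrative assumptions, not the article's block-CASSI formulation:

```python
import numpy as np

def ista(A, y, lam=0.02, steps=500):
    """Iterative Soft Thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # gradient Lipschitz constant ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L       # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
m, n = 32, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)
true_support = [5, 20, 47]
x0 = np.zeros(n)
x0[true_support] = [1.0, -1.5, 0.8]
x_hat = ista(A, A @ x0)
```

The same loop structure underlies the block-wise reconstruction the article describes; in practice A would be the (block) CASSI sensing operator composed with a sparsifying transform rather than a dense Gaussian matrix.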
Sparse signal representation, sampling, and recovery in compressive sensing frameworks
Compressive sensing allows the reconstruction of original signals from a much smaller number of samples than the Nyquist sampling rate requires. The effectiveness of compressive sensing has motivated researchers to deploy it in a variety of application areas. The use of an efficient sampling matrix with high-performance recovery algorithms improves the performance of the compressive sensing framework significantly. This paper presents the underlying concepts of compressive sensing as well as previous work done in targeted domains across various application areas. To develop prospects within the available functional blocks of compressive sensing frameworks, a diverse range of application areas is investigated. The three fundamental elements of a compressive sensing framework (signal sparsity, subsampling, and reconstruction) are thoroughly reviewed in this work, with attention to the key research gaps previously identified by the research community. Similarly, the basic mathematical formulation is used to outline some primary performance evaluation metrics for 1D and 2D compressive sensing.
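One of the primary evaluation metrics alluded to, PSNR, is simple enough to state inline. A minimal sketch, assuming signals normalised to a known peak value:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a reconstruction."""
    mse = np.mean((np.asarray(ref) - np.asarray(rec)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

x = np.linspace(0.0, 1.0, 100)
p = psnr(x, x + 0.01)   # a uniform 0.01 error gives MSE 1e-4, i.e. 40 dB
```

The same formula applies unchanged to 2D signals, since `np.mean` averages over all pixels of an image.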
Sparse Approximation and Dictionary Learning with Applications to Audio Signals
PhD thesis. Over-complete transforms have recently become the focus of a wealth of research in
signal processing, machine learning, statistics and related fields. Their great modelling
flexibility makes it possible to find sparse representations and approximations of data that in turn
prove to be very efficient in a wide range of applications. Sparse models express signals as
linear combinations of a few basis functions called atoms taken from a so-called dictionary.
Finding the optimal dictionary from a set of training signals of a given class is the objective
of dictionary learning and the main focus of this thesis. The experimental evidence
presented here focuses on the processing of audio signals, and the role of sparse algorithms
in audio applications is accordingly highlighted.
The first main contribution of this thesis is the development of a pitch-synchronous
transform where the frame-by-frame analysis of audio data is adapted so that each frame
analysing periodic signals contains an integer number of periods. This algorithm presents
a technique for adapting transform parameters to the audio signal to be analysed; it
is shown to improve the sparsity of the representation compared to a non-pitch-synchronous
approach, and is further evaluated in the context of source separation by binary
masking.
A second main contribution is the development of a novel model and associated algorithm
for dictionary learning of convolved signals, where the observed variables are sparsely approximated
by the atoms contained in a convolved dictionary. An algorithm is devised to
learn the impulse response applied to the dictionary and experimental results on synthetic
data show the superior approximation performance of the proposed method compared to
a state-of-the-art dictionary learning algorithm.
Finally, a third main contribution is the development of methods for learning dictionaries
that are both well adapted to a training set of data and mutually incoherent. Two
novel algorithms, namely the incoherent K-SVD and the iterative projections and rotations
(IPR) algorithm, are introduced and compared to different techniques published in the
literature in a sparse approximation context. The IPR algorithm in particular is shown
to outperform the benchmark techniques in learning very incoherent dictionaries while
maintaining a good signal-to-noise ratio of the representation.
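The incoherence objective pursued by the incoherent K-SVD and IPR algorithms is usually measured by the mutual coherence of the dictionary. A minimal sketch; the identity-plus-Hadamard dictionary is a classical textbook example, not one taken from the thesis:

```python
import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between two distinct unit-normalised atoms of D."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalise each column (atom)
    G = np.abs(Dn.T @ Dn)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore each atom's self-correlation
    return float(G.max())

# Classical two-ortho example: 4 identity atoms plus 4 Hadamard atoms.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H4 = np.kron(H2, H2)                     # unnormalised 4x4 Hadamard matrix
D = np.hstack([np.eye(4), H4])
mu = mutual_coherence(D)                 # 1/sqrt(4) = 0.5, the two-ortho bound
```

Lower coherence means the atoms are closer to orthogonal, which is what makes sparse approximation over the dictionary better behaved; dictionary-learning methods like IPR trade some approximation fit for a smaller value of this quantity.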
A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity
The richness of natural images makes the quest for optimal representations in
image processing and computer vision challenging. The latter observation has
not prevented the design of image representations, which trade off between
efficiency and complexity, while achieving accurate rendering of smooth regions
as well as reproducing faithful contours and textures. The most recent ones,
proposed in the past decade, share a hybrid heritage highlighting the
multiscale and oriented nature of edges and patterns in images. This paper
presents a panorama of the aforementioned literature on decompositions in
multiscale, multi-orientation bases or dictionaries. They typically exhibit
redundancy to improve sparsity in the transformed domain and sometimes its
invariance with respect to simple geometric deformations (translation,
rotation). Oriented multiscale dictionaries extend traditional wavelet
processing and may offer rotation invariance. Highly redundant dictionaries
require specific algorithms to simplify the search for an efficient (sparse)
representation. We also discuss the extension of multiscale geometric
decompositions to non-Euclidean domains such as the sphere or arbitrary meshed
surfaces. The etymology of panorama suggests an overview, based on a choice of
partially overlapping "pictures". We hope that this paper will contribute to
the appreciation and apprehension of a stream of current research directions in
image understanding.
Comment: 65 pages, 33 figures, 303 references.
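The wavelet processing that these oriented multiscale dictionaries extend can be recalled with a one-level orthonormal Haar analysis/synthesis pair. This is a generic sketch of the classical transform, not tied to any particular construction in the survey:

```python
import numpy as np

def haar1(x):
    """One level of the orthonormal Haar transform of an even-length 1-D signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: smooth approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: edge/detail coefficients
    return a, d

def ihaar1(a, d):
    """Exact inverse of haar1 (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.concatenate([np.ones(7), np.zeros(9)])   # piecewise-constant signal
a, d = haar1(x)   # detail is nonzero only at the single edge
```

The detail band is zero wherever the signal is locally constant, which is the sparsity-on-smooth-regions behaviour that the multiscale constructions surveyed here refine with directional selectivity.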
Introduction to frames
This survey gives an introduction to redundant signal representations called frames. These representations have recently emerged as yet another powerful tool in the signal processing toolbox and have become popular through use in numerous applications. Our aim is to familiarize a general audience with the area, while at the same time giving a snapshot of the current state of the art.
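A concrete instance of such a redundant representation is the three-vector "Mercedes-Benz" tight frame for R^2, where the redundant coefficients still reconstruct the signal exactly. This is a standard textbook example, sketched here rather than taken from the survey:

```python
import numpy as np

# Three unit vectors at 120-degree spacing form a tight frame for R^2
# with frame bound A = 3/2, so the frame operator is S = F F^T = (3/2) I.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)])   # 2x3, columns are frame vectors

x = np.array([0.7, -1.3])
coeffs = F.T @ x                     # analysis: 3 redundant coefficients for a 2-D signal
x_rec = (2.0 / 3.0) * (F @ coeffs)   # synthesis: apply S^{-1} = (2/3) I
```

Three coefficients for a two-dimensional signal is redundant, yet reconstruction is exact; the redundancy is what buys robustness to coefficient loss and noise, which motivates the use of frames in applications.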