22 research outputs found
Spatially Directional Predictive Coding for Block-based Compressive Sensing of Natural Images
A novel coding strategy for block-based compressive sensing, named spatially
directional predictive coding (SDPC), is proposed, which efficiently exploits
the intrinsic spatial correlation of natural images. At the encoder, for each
block of compressive sensing (CS) measurements, the optimal prediction is
selected from a set of prediction candidates generated by four designed
directional predictive modes. The resulting residual is then processed by
scalar quantization (SQ). At the decoder, the same prediction is added to the
de-quantized residuals to produce the quantized CS measurements, which are
exploited for CS reconstruction. Experimental results substantiate significant
improvements achieved by SDPC-plus-SQ in rate-distortion performance as
compared with SQ alone and DPCM-plus-SQ.
Comment: 5 pages, 3 tables, 3 figures, published at IEEE International
Conference on Image Processing (ICIP) 2013. Code available:
http://idm.pku.edu.cn/staff/zhangjian/SDPC
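The SDPC encode/decode loop can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy smooth "image", the shared Gaussian sensing matrix, the quantizer step, and the three neighbour-based prediction candidates (standing in for the four directional modes) are all choices made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
B, M, step = 8, 24, 2.0      # block size, measurements per block, SQ step

# Toy smooth 32x32 "image" so neighbouring blocks correlate (assumption).
img = np.add.outer(np.arange(32.0), np.arange(32.0))
Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)   # shared CS matrix

ncols = 32 // B
blocks = [img[r:r + B, c:c + B] for r in range(0, 32, B)
          for c in range(0, 32, B)]
y = [Phi @ b.ravel() for b in blocks]                # per-block measurements

recon = [None] * len(y)      # de-quantized measurements seen by the decoder
for i, yi in enumerate(y):
    r, c = divmod(i, ncols)
    # Prediction candidates from already-coded neighbours (a simplified
    # stand-in for SDPC's four directional predictive modes).
    cands = [np.zeros(M)]
    if c > 0:
        cands.append(recon[i - 1])                   # left neighbour
    if r > 0:
        cands.append(recon[i - ncols])               # top neighbour
    if r > 0 and c > 0:
        cands.append(0.5 * (recon[i - 1] + recon[i - ncols]))
    pred = min(cands, key=lambda p: np.sum((yi - p) ** 2))  # best mode
    q = np.round((yi - pred) / step)      # scalar-quantize the residual
    recon[i] = pred + q * step            # decoder: prediction + dequantized

# Every reconstructed measurement lies within step/2 of the original.
err = max(np.max(np.abs(recon[i] - y[i])) for i in range(len(y)))
print(err <= step / 2 + 1e-9)
```

Because the encoder and decoder form predictions from the same de-quantized measurements, there is no drift: the residual quantizer alone bounds the measurement error.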
Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing
This paper focuses on the estimation of low-complexity signals when they are
observed through uniformly quantized compressive observations. Among such
signals, we consider 1-D sparse vectors, low-rank matrices, or compressible
signals that are well approximated by one of these two models. In this context,
we prove the estimation efficiency of a variant of Basis Pursuit Denoise,
called Consistent Basis Pursuit (CoBP), enforcing consistency between the
observations and the re-observed estimate, while promoting its low-complexity
nature. We show that the reconstruction error of CoBP decays like
when all parameters but are fixed. Our proof is connected to recent bounds
on the proximity of vectors or matrices when (i) those belong to a set of small
intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they
share the same quantized (dithered) random projections. By solving CoBP with a
proximal algorithm, we provide extensive numerical observations that
confirm the theoretical bound as is increased, displaying even faster error
decay than predicted. The same phenomenon is observed in the special, yet
important case of 1-bit CS.
Comment: Keywords: quantized compressed sensing, quantization, consistency,
error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this version:
title change, typo corrections, clarification of the context, addition of a
comparison with BPD
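The consistency idea can be illustrated with a proximal-gradient relaxation: penalize any measurement of the estimate that leaves its quantization cell, and soft-threshold to promote sparsity. This is a sketch of the CoBP principle, not the paper's exact program or solver; the dimensions, quantizer half-step `delta`, regularization weight `lam`, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s, delta = 128, 80, 5, 0.05   # dims, sparsity, quantizer half-step

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
dither = rng.uniform(-delta, delta, m)               # dithered quantization
y = 2 * delta * np.round((A @ x_true + dither) / (2 * delta))

def excess(r):
    """Amount by which r leaves the consistency box [-delta, delta]."""
    return np.sign(r) * np.maximum(np.abs(r) - delta, 0.0)

lam = 1e-3                                 # sparsity weight (assumption)
t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from spectral norm
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ excess(A @ x + dither - y)   # grad of box-violation penalty
    z = x - t * g
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err < 0.5)
```

The true signal is exactly consistent (zero penalty), so driving the box-violation term to zero while shrinking the l1 norm recovers a consistent, low-complexity estimate in the spirit of CoBP.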
Quantized Compressed Sensing for Partial Random Circulant Matrices
We provide the first analysis of a non-trivial quantization scheme for
compressed sensing measurements arising from structured measurements.
Specifically, our analysis studies compressed sensing matrices consisting of
rows selected at random, without replacement, from a circulant matrix generated
by a random subgaussian vector. We quantize the measurements using stable,
possibly one-bit, Sigma-Delta schemes, and use a reconstruction method based on
convex optimization. We show that the part of the reconstruction error due to
quantization decays polynomially in the number of measurements. This is in line
with analogous results on Sigma-Delta quantization associated with random
Gaussian or subgaussian matrices, and significantly better than results
associated with the widely assumed memoryless scalar quantization. Moreover, we
prove that our approach is stable and robust; i.e., the reconstruction error
degrades gracefully in the presence of non-quantization noise and when the
underlying signal is not strictly sparse. The analysis relies on results
concerning subgaussian chaos processes as well as a variation of McDiarmid's
inequality.
Comment: 15 pages
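Two ingredients of the setup are easy to demonstrate concretely: a circulant measurement applied via the FFT, and the stability of a first-order one-bit Sigma-Delta scheme (its internal state stays bounded whenever the inputs do). The recursion below is the textbook first-order ΣΔ scheme; the sizes, the random-sign generator, and the normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 32

# Circulant matrix generated by a random sign (subgaussian) vector g:
# C[k, j] = g[(k - j) % n], so C @ x is a circular convolution and can be
# computed in O(n log n) with the FFT.
g = rng.choice([-1.0, 1.0], n)
x = rng.standard_normal(n)
Cx = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))

# Check against the explicit circulant product.
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
C = g[idx]
assert np.allclose(C @ x, Cx)

rows = rng.choice(n, m, replace=False)   # m rows, without replacement
y = Cx[rows]
y = y / np.max(np.abs(y))                # normalize so |y_i| <= 1

# First-order one-bit Sigma-Delta: q_i = sign(u_{i-1} + y_i),
# u_i = u_{i-1} + y_i - q_i.  By induction, |y_i| <= 1 gives |u_i| <= 1.
u, q, states = 0.0, np.empty(m), []
for i in range(m):
    v = u + y[i]
    q[i] = 1.0 if v >= 0 else -1.0
    u = v - q[i]
    states.append(u)

print(max(abs(s) for s in states) <= 1.0 + 1e-12)
```

The bounded state is exactly the stability property the abstract invokes: the quantization error is a first-order difference of a bounded sequence, which is what makes the error decay polynomially in the number of measurements.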
One-bit compressed sensing by linear programming
We give the first computationally tractable and almost optimal solution to
the problem of one-bit compressed sensing, showing how to accurately recover an
s-sparse vector x in R^n from the signs of O(s log^2(n/s)) random linear
measurements of x. The recovery is achieved by a simple linear program. This
result extends to approximately sparse vectors x. Our result is universal in
the sense that with high probability, one measurement scheme will successfully
recover all sparse vectors simultaneously. The argument is based on solving an
equivalent geometric problem on random hyperplane tessellations.
Comment: 15 pages, 1 figure, to appear in CPAM. Small changes based on referee
comment
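The recovery program described here can be written as a small linear program: minimize the l1 norm subject to sign consistency with the one-bit measurements and a scale-fixing equality. The sketch below follows that formulation with toy dimensions; the standard split x = u - v makes the objective linear, and since one-bit measurements destroy scale, only the recovered direction is meaningful.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, s, m = 40, 3, 300                       # toy sizes (assumptions)

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)                    # one-bit measurements

# LP: min ||x||_1  s.t.  y_i <a_i, x> >= 0  and  sum_i y_i <a_i, x> = m.
# Split x = u - v with u, v >= 0 so the objective is linear.
YA = y[:, None] * A
c = np.ones(2 * n)
A_ub = np.hstack([-YA, YA])                # -y_i <a_i, u - v> <= 0
b_ub = np.zeros(m)
srow = YA.sum(axis=0)
A_eq = np.hstack([srow, -srow])[None, :]   # fixes the scale of the estimate
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[float(m)],
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

# Compare directions only: one-bit measurements lose the signal's scale.
cos = abs(x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
print(cos > 0.7)
```

Without the equality constraint the LP would return zero; the normalization pins down a nontrivial feasible set, and l1 minimization over it selects a direction aligned with the sparse signal.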
Distributed Functional Scalar Quantization Simplified
Distributed functional scalar quantization (DFSQ) theory provides optimality
conditions and predicts performance of data acquisition systems in which a
computation on acquired data is desired. We address two limitations of previous
works: prohibitively expensive decoder design and a restriction to sources with
bounded distributions. We rigorously show that a much simpler decoder achieves
asymptotic performance equivalent to that of the previously explored
conditional-expectation estimator, thus reducing decoder design complexity. The simpler
decoder has the feature of decoupled communication and computation blocks.
Moreover, we extend the DFSQ framework with the simpler decoder to acquire
sources with infinite-support distributions such as Gaussian or exponential
distributions. Finally, through simulation results we demonstrate that
performance at moderate coding rates is well predicted by the asymptotic
analysis, and we give new insight into the rate of convergence.
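The "simpler decoder" can be demonstrated in miniature: dequantize each source independently, then just apply the target function, so the communication and computation blocks are fully decoupled. The function g, the rates, and the clipped uniform quantizer below are illustrative choices, not the paper's; the Gaussian sources mirror the infinite-support case the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, clip = 200_000, 4.0                    # Monte Carlo samples, support bound

def quantize(x, rate):
    """Uniform scalar quantizer on [-clip, clip] with 2**rate cells."""
    step = 2 * clip / (2 ** rate)
    xc = np.clip(x, -clip + step / 2, clip - step / 2)
    return step * np.round(xc / step)

g = np.maximum                            # function computed at the decoder
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)

def functional_mse(rate):
    # Simple decoder: dequantize (communication), then apply g
    # (computation) -- no conditional-expectation estimator needed.
    qx1, qx2 = quantize(x1, rate), quantize(x2, rate)
    return np.mean((g(x1, x2) - g(qx1, qx2)) ** 2)

mse_lo, mse_hi = functional_mse(4), functional_mse(6)
print(mse_lo > 4 * mse_hi)   # two extra bits should cut the MSE sharply
```

For the infinite-support Gaussian sources, clipping at a few standard deviations keeps the overload distortion negligible next to the granular distortion, which is what lets the high-rate analysis extend beyond bounded sources.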
Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements
This paper addresses the problem of distributed coding of images whose
correlation is driven by the motion of objects or positioning of the vision
sensors. It concentrates on the problem where images are encoded with
compressed linear measurements. We propose a geometry-based correlation model
in order to describe the common information in pairs of images. We assume that
the constitutive components of natural images can be captured by visual
features that undergo local transformations (e.g., translation) in different
images. We first identify prominent visual features by computing a sparse
approximation of a reference image with a dictionary of geometric basis
functions. We then pose a regularized optimization problem to estimate the
corresponding features in correlated images given by quantized linear
measurements. The estimated features must comply with the compressed
information and represent consistent transformations between images. The
correlation model is given by the relative geometric transformations between
corresponding features. We then propose an efficient joint decoding algorithm
that estimates the compressed images such that they stay consistent with both
the quantized measurements and the correlation model. Experimental results show
that the proposed algorithm effectively estimates the correlation between
images in multi-view datasets. In addition, the proposed algorithm provides
effective decoding performance that compares advantageously to independent
coding solutions as well as state-of-the-art distributed coding schemes based
on disparity learning.
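The core of the joint decoding step, reduced to one dimension, is a search for the geometric transformation that makes the re-measured reference consistent with the second view's quantized measurements. The sketch below uses a circular shift as the local transformation and a smooth bump as the feature; signal shape, sizes, and the quantizer step are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, step, t_true = 64, 20, 0.05, 7

# Reference "image": a smooth bump; the second view is a shifted copy
# (a 1-D stand-in for local geometric transformations such as translation).
x_ref = np.exp(-0.5 * ((np.arange(n) - 30) / 2.0) ** 2)
x2 = np.roll(x_ref, t_true)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y2 = step * np.round((Phi @ x2) / step)   # quantized linear measurements

# Joint decoding core: pick the transformation whose re-measured reference
# best matches the quantized measurements of the second view.
costs = [np.linalg.norm(Phi @ np.roll(x_ref, t) - y2) for t in range(n)]
t_hat = int(np.argmin(costs))
print(t_hat == t_true)
```

At the true shift the residual is pure quantization noise, while any other shift incurs the full energy of the signal mismatch, so the correlation (here, the translation) is identified directly from the compressed domain without reconstructing the second image first.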