Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing
This paper focuses on the estimation of low-complexity signals when they are
observed through uniformly quantized compressive observations. Among such
signals, we consider 1-D sparse vectors, low-rank matrices, or compressible
signals that are well approximated by one of these two models. In this context,
we prove the estimation efficiency of a variant of Basis Pursuit Denoise,
called Consistent Basis Pursuit (CoBP), enforcing consistency between the
observations and the re-observed estimate, while promoting its low-complexity
nature. We show that the reconstruction error of CoBP decays like O(1/√M)
when all parameters but the number of measurements M are fixed. Our proof is connected to recent bounds
on the proximity of vectors or matrices when (i) those belong to a set of small
intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they
share the same quantized (dithered) random projections. By solving CoBP with a
proximal algorithm, we provide some extensive numerical observations that
confirm the theoretical bound as M is increased, displaying even faster error
decay than predicted. The same phenomenon is observed in the special, yet
important case of 1-bit CS.
Comment: Keywords: Quantized compressed sensing, quantization, consistency,
error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this
version: title change, typo corrections, clarification of the context, adding
a comparison with BPD
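The quantized sensing model behind CoBP, and the consistency constraint it enforces, can be illustrated with a small NumPy sketch. Everything below (function names, the midrise quantizer, the dimensions) is an illustrative assumption, not the authors' code:

```python
import numpy as np

def uniform_quantize(z, delta):
    """Midrise uniform scalar quantizer of resolution delta."""
    return delta * np.floor(z / delta) + delta / 2.0

def sense(A, x, dither, delta):
    """Quantized, dithered compressive observations y = Q(Ax + u)."""
    return uniform_quantize(A @ x + dither, delta)

def is_consistent(A, x_hat, y, dither, delta):
    """CoBP's consistency constraint: re-observing the estimate must
    reproduce exactly the same quantized measurements."""
    return np.array_equal(sense(A, x_hat, dither, delta), y)

rng = np.random.default_rng(0)
n, m, k, delta = 64, 128, 4, 0.5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                 # sensing matrix
u = rng.uniform(0, delta, m)                                 # uniform dither
y = sense(A, x, u, delta)

# The true signal is trivially consistent with its own observations,
# and the quantizer moves each measurement by at most delta/2.
assert is_consistent(A, x, y, u, delta)
assert np.max(np.abs(y - (A @ x + u))) <= delta / 2 + 1e-12
```

A CoBP solver would then minimize the l1 norm (or the nuclear norm for matrices) over the set where `is_consistent` holds, e.g. with a proximal algorithm as mentioned in the abstract; the sketch only makes the constraint set explicit.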
Small Width, Low Distortions: Quantized Random Embeddings of Low-complexity Sets
Under which conditions and with which distortions can we preserve the
pairwise-distances of low-complexity vectors, e.g., for structured sets such as
the set of sparse vectors or the one of low-rank matrices, when these are
mapped in a finite set of vectors? This work addresses this general question
through the specific use of a quantized and dithered random linear mapping
which combines, in the following order, a sub-Gaussian random projection in
ℝ^M of vectors in ℝ^N, a random translation, or "dither",
of the projected vectors, and a uniform scalar quantizer of resolution δ > 0
applied componentwise. Thanks to this quantized mapping we are first
able to show that, with high probability, an embedding of a bounded set
K ⊂ ℝ^N in δℤ^M can be achieved when
distances in the quantized and in the original domains are measured with the
ℓ1- and ℓ2-norm, respectively, and provided the number of quantized
observations M is large compared to the square of the "Gaussian mean width" of
K. In this case, we show that the embedding is actually
"quasi-isometric" and only suffers from both multiplicative and additive
distortions whose magnitudes decrease as M^(-1/5) for general sets, and as
√(log M)/√M for structured sets, when M increases. Second, when one is only
interested in characterizing the maximal distance separating two elements of
K mapped to the same quantized vector, i.e., the "consistency width"
of the mapping, we show that for a similar number of measurements and with high
probability this width decays as M^(-1/4) for general sets and as (log M)/M for
structured ones when M increases. Finally, as an important aspect of our
work, we also establish how the non-Gaussianity of the mapping impacts the
class of vectors that can be embedded or whose consistency width provably
decays when M increases.
Comment: Keywords: quantization, restricted isometry property, compressed
sensing, dimensionality reduction. 31 pages, 1 figure
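The mapping studied above, a sub-Gaussian projection followed by a dither and a uniform quantizer, is easy to simulate. The sketch below is a hedged toy with arbitrary dimensions: it checks that the per-coordinate l1 distance between quantized images tracks the l2 distance between the inputs, where the factor sqrt(2/pi) is the expectation of |<a, x-y>| for a standard Gaussian row a and unit-norm x-y:

```python
import numpy as np

def quantized_map(A, x, u, delta):
    """Psi(x) = Q(Ax + u): sub-Gaussian projection, dither, uniform quantizer."""
    return delta * np.floor((A @ x + u) / delta)

rng = np.random.default_rng(1)
N, M, delta = 32, 2000, 0.1
A = rng.standard_normal((M, N))    # Gaussian rows (a sub-Gaussian ensemble)
u = rng.uniform(0, delta, M)       # the dither stabilizes the quantizer

x = rng.standard_normal(N)
d = rng.standard_normal(N)
d /= np.linalg.norm(d)
y = x + d                          # so that ||x - y||_2 = 1

# l1 distance in the quantized domain vs l2 distance in the original one:
# thanks to the dither, each coordinate is an unbiased estimate of |<a, x-y>|.
qdist = np.mean(np.abs(quantized_map(A, x, u, delta) - quantized_map(A, y, u, delta)))
assert abs(qdist - np.sqrt(2 / np.pi)) < 0.15
```

The same experiment with the dither removed exhibits the pathologies the paper guards against: without randomization, nearby points can quantize identically and their quantized distance collapses to zero.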
Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
(ΣΔ) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system.
Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing
and Its Applications", 201
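As a toy illustration of the Sigma-Delta idea surveyed above, the first-order loop below feeds the running quantization error back into the quantizer, so errors telescope instead of accumulating. This is a generic textbook construction, not code from the chapter:

```python
import numpy as np

def sigma_delta_1st(y, delta):
    """First-order Sigma-Delta loop with a uniform mid-tread quantizer
    q = delta * round(. / delta): the error state v always stays in
    [-delta/2, delta/2], so quantization noise is shaped, not accumulated."""
    v = 0.0
    out = np.empty_like(y)
    for i, yi in enumerate(y):
        q = delta * np.round((yi + v) / delta)  # quantize sample + error state
        v += yi - q                             # update the error state
        out[i] = q
    return out

rng = np.random.default_rng(2)
delta, m = 0.25, 500
y = 0.3 * np.sin(np.linspace(0, 4 * np.pi, m)) + 0.1
q = sigma_delta_1st(y, delta)

# The accumulated error telescopes to the final state v, so the sample mean
# of the quantized stream matches the mean of y up to delta/m, versus an
# error of up to delta/2 per sample for plain scalar quantization.
assert abs(q.mean() - y.mean()) <= delta / m + 1e-12
```

The noise-shaping property is what makes ΣΔ attractive when measurements are redundant, as in oversampled or compressive acquisition.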
Time for dithering: fast and quantized random embeddings via the restricted isometry property
Recently, many works have focused on the characterization of non-linear
dimensionality reduction methods obtained by quantizing linear embeddings,
e.g., to reach fast processing time, efficient data compression procedures,
novel geometry-preserving embeddings or to estimate the information/bits stored
in this reduced data representation. In this work, we prove that many linear
maps known to respect the restricted isometry property (RIP) can induce a
quantized random embedding with controllable multiplicative and additive
distortions with respect to the pairwise distances of the data points being
considered. In other words, linear matrices having fast matrix-vector
multiplication algorithms (e.g., based on partial Fourier ensembles or on the
adjacency matrix of unbalanced expanders) can be readily used in the definition
of fast quantized embeddings with small distortions. This implication is made
possible by applying right after the linear map an additive and random "dither"
that stabilizes the impact of the uniform scalar quantization operator applied
afterwards. For different categories of RIP matrices, i.e., for different
linear embeddings of a metric space (K ⊂ ℝ^n, ℓ_q)
in (ℝ^m, ℓ_p) with p, q ≥ 1, we derive upper bounds on the
additive distortion induced by quantization, showing that it decays either when
the embedding dimension m increases or when the distance of a pair of
embedded vectors in K decreases. Finally, we develop a novel
"bi-dithered" quantization scheme, which allows for a reduced distortion that
decreases when the embedding dimension m grows and independently of the
considered pair of vectors.
Comment: Keywords: random projections, non-linear embeddings, quantization,
dither, restricted isometry property, dimensionality reduction, compressive
sensing, low-complexity signal models, fast and structured sensing matrices,
quantized rank-one projections (31 pages)
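In the spirit of the fast constructions above, the sketch below builds a quantized embedding from a randomly subsampled FFT with random column signs, so the matrix-vector product costs O(n log n). The specific construction, names, and sizes are our illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def fast_quantized_embedding(x, signs, rows, dither, delta, n):
    """Q(Phi x + u), where Phi is a randomly subsampled DFT with random
    column signs: a structured map with a fast matrix-vector product,
    followed by a dither and a uniform scalar quantizer."""
    z = np.fft.fft(signs * x) / np.sqrt(n)          # fast structured projection
    proj = np.concatenate([z.real, z.imag])[rows]   # random row subset
    return delta * np.floor((proj + dither) / delta)

rng = np.random.default_rng(3)
n, m, delta = 128, 64, 0.2
signs = rng.choice([-1.0, 1.0], n)          # random column signs
rows = rng.choice(2 * n, m, replace=False)  # random subsampling
u = rng.uniform(0, delta, m)                # additive random dither

x = rng.standard_normal(n)
y = fast_quantized_embedding(x, signs, rows, u, delta, n)

assert y.shape == (m,)
assert np.allclose(y / delta, np.round(y / delta))  # outputs lie on delta * Z
```

The dither applied right after the linear map is the key ingredient: it is what lets a fixed RIP-type matrix survive the scalar quantization with controllable distortion.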
Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering
Quantized compressive sensing (QCS) deals with the problem of coding
compressive measurements of low-complexity signals with quantized, finite
precision representations, i.e., a mandatory process involved in any practical
sensing model. While the resolution of this quantization clearly impacts the
quality of signal reconstruction, there actually exist incompatible
combinations of quantization functions and sensing matrices that proscribe
arbitrarily low reconstruction error when the number of measurements increases.
This work shows that a large class of random matrix constructions known to
respect the restricted isometry property (RIP) is "compatible" with a simple
scalar and uniform quantization if a uniform random vector, or a random dither,
is added to the compressive signal measurements before quantization. In the
context of estimating low-complexity signals (e.g., sparse or compressible
signals, low-rank matrices) from their quantized observations, this
compatibility is demonstrated by the existence of (at least) one signal
reconstruction method, the projected back projection (PBP), whose
reconstruction error decays when the number of measurements increases.
Interestingly, given one RIP matrix and a single realization of the dither, a
small reconstruction error can be proved to hold uniformly for all signals in
the considered low-complexity set. We confirm these observations numerically in
several scenarios involving sparse signals, low-rank matrices, and compressible
signals, with various RIP matrix constructions such as sub-Gaussian random
matrices and random partial discrete cosine transform (DCT) matrices.
Comment: 42 pages, 9 figures. Diff. btw V3 & V2: better paper structure, new
concepts (e.g., RIP matrix distribution, connections with Bussgang's
theorem), as well as many clarifications and corrections
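For k-sparse signals, projected back projection reduces to back-projecting the quantized measurements and hard-thresholding the result. The sketch below assumes standard Gaussian rows (so that (1/m) A^T A ≈ Id) and a dithered floor quantizer; the dimensions and amplitudes are illustrative choices, not the paper's:

```python
import numpy as np

def hard_threshold(z, k):
    """Projection onto the set of k-sparse vectors."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def pbp(A, q, k):
    """Projected back projection: project (1/m) A^T q onto the model set."""
    m = A.shape[0]
    return hard_threshold(A.T @ q / m, k)

rng = np.random.default_rng(4)
n, m, k, delta = 64, 1024, 3, 0.5
x = np.zeros(n)
x[[5, 20, 40]] = [1.5, -2.0, 1.0]          # the k-sparse ground truth
A = rng.standard_normal((m, n))            # rows ~ N(0, Id): E[(1/m) A^T A] = Id
u = rng.uniform(0, delta, m)               # the dither that makes QCS "compatible"
q = delta * np.floor((A @ x + u) / delta)  # quantized dithered measurements

x_hat = pbp(A, q, k)
assert set(np.flatnonzero(x_hat)) == {5, 20, 40}
assert np.linalg.norm(x_hat - x) < 0.6
```

With the uniform dither, each quantized measurement is an unbiased estimate of the corresponding entry of Ax, which is what lets the back projection, and hence the PBP error, improve as m grows.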
Adapted Compressed Sensing: A Game Worth Playing
Despite the universal nature of the compressed sensing mechanism, additional information on the class of sparse signals to acquire allows adjustments that yield substantial improvements. In fact, proper exploitation of these priors makes it possible to significantly increase compression for a given reconstruction quality. Since one of the most promising scopes of application of compressed sensing is that of IoT devices subject to extremely low resource constraints, adaptation is especially interesting when it can cope with hardware-related constraints allowing low-complexity implementations. We here review and compare many algorithmic adaptation policies that focus either on the encoding part or on the recovery part of compressed sensing. We also review other, more hardware-oriented adaptation techniques that are actually able to make the difference when it comes to real-world implementations. In all cases, adaptation proves to be a tool that should be mastered in practical applications to unleash the full potential of compressed sensing.
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Model-free reconstruction of neuronal network connectivity from calcium imaging signals
A systematic assessment of global neural network connectivity through direct
electrophysiological assays has remained technically unfeasible even in
dissociated neuronal cultures. We introduce an improved algorithmic approach
based on Transfer Entropy to reconstruct approximations to network structural
connectivities from network activity monitored through calcium fluorescence
imaging. Based on information theory, our method requires no prior assumptions
on the statistics of neuronal firing and neuronal connections. The performance
of our algorithm is benchmarked on surrogate time-series of calcium
fluorescence generated by the simulated dynamics of a network with known
ground-truth topology. We find that the effective network topology revealed by
Transfer Entropy depends qualitatively on the time-dependent dynamic state of
the network (e.g., bursting or non-bursting). We thus demonstrate how
conditioning with respect to the global mean activity improves the performance
of our method. [...] Compared to other reconstruction strategies such as
cross-correlation or Granger Causality methods, our method based on improved
Transfer Entropy is remarkably more accurate. In particular, it provides a good
reconstruction of the network clustering coefficient, allowing one to discriminate
between weakly and strongly clustered topologies, whereas an
approach based on cross-correlations would invariably detect artificially high
levels of clustering. Finally, we present the applicability of our method to
real recordings of in vitro cortical cultures. We demonstrate that these
networks are characterized by an elevated level of clustering compared to a
random graph (although not extreme) and by a markedly non-local connectivity.
Comment: 54 pages, 8 figures (+9 supplementary figures), 1 table; submitted
for publication
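For intuition, a plug-in Transfer Entropy estimator for discrete time series with history length 1 can be written directly from empirical histograms. This generic estimator, not the authors' improved and conditioned variant, already detects the direction of coupling in a toy example where X drives Y:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE(X -> Y) in bits, history length 1:
    sum over (y1, y0, x0) of p(y1, y0, x0) * log2[ p(y1|y0, x0) / p(y1|y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (future, past_y, past_x)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_y = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_y)
    return te

rng = np.random.default_rng(5)
x = rng.integers(0, 2, 5000)                # driver: i.i.d. binary spiking proxy
y = np.empty(5000, dtype=int)
y[0] = 0
y[1:] = x[:-1]                              # y copies x with one step of delay
y[rng.random(5000) < 0.1] ^= 1              # plus occasional random flips

# Information flows from X to Y, so TE is markedly asymmetric.
assert transfer_entropy(x, y) > transfer_entropy(y, x)
```

Real calcium-fluorescence pipelines add the steps the abstract highlights: discretizing the fluorescence signals and conditioning the estimate on the global mean activity to handle bursting dynamics.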