Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
Deterministic constructions of measurement matrices in compressed sensing
(CS) are considered in this paper. The constructions are inspired by the recent
discovery of Dimakis, Smarandache and Vontobel which says that parity-check
matrices of good low-density parity-check (LDPC) codes can be used as
provably good measurement matrices for compressed sensing under
ℓ1-minimization. The performance of the proposed binary measurement
matrices is analyzed mainly theoretically, with the help of analysis
methods and results from (finite geometry) LDPC codes. In particular, several
lower bounds of the spark (i.e., the smallest number of columns that are
linearly dependent, which totally characterizes the recovery performance of
ℓ0-minimization) of general binary matrices and finite geometry matrices
are obtained, and they improve the previously known results in most cases.
Simulation results show that the proposed matrices perform comparably to,
sometimes even better than, the corresponding Gaussian random matrices.
Moreover, the proposed matrices are sparse and binary, and most of them have
a cyclic or quasi-cyclic structure, which makes hardware realization
convenient and easy. Comment: 12 pages, 11 figures.
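The spark mentioned in the abstract can be computed by brute force for small matrices. The sketch below (an illustrative weight-2 binary matrix, not one of the paper's finite-geometry constructions) checks all column subsets and applies the standard ℓ0-uniqueness guarantee: every k-sparse signal is the unique sparsest solution exactly when spark > 2k.

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A
    (returns n + 1 if all columns are independent)."""
    m, n = A.shape
    for size in range(1, n + 1):
        for cols in itertools.combinations(range(n), size):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < size:
                return size
    return n + 1

# Illustrative 4x6 binary matrix: columns are the edge-incidence
# vectors of the complete graph K4 (weight-2 binary columns).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
A = np.zeros((4, 6))
for j, (u, v) in enumerate(edges):
    A[u, j] = A[v, j] = 1

s = spark(A)          # here the shortest dependency is a 4-cycle of edges
k_max = (s - 1) // 2  # unique recovery of every k-sparse vector needs spark > 2k
print(s, k_max)
```

The lower bounds in the paper matter precisely because this brute-force search is exponential in the number of columns.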
A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs
between fill factor and spatio-temporal resolution, causing many contemporary
designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note a typical
digitization result of a 120×120 photodiode sensor on a 30 µm × 30 µm pitch with
a 40 ps time resolution and an estimated fill factor of approximately 70%, using
only 161 TDCs. The design guarantees registration and unique recovery of up to
4 simultaneous photon arrivals using a fast decoding algorithm. In a series of
realistic simulations of scintillation events in clinical positron emission
tomography the design was able to recover the spatio-temporal location of 98.6%
of all photons that caused pixel firings. Comment: 24 pages, 3 figures, 5 tables.
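The multiplexing idea can be sketched with a generic superimposed-code wiring (a Kautz-Singleton-style design over GF(5), chosen here for illustration; the paper's optimized matrices and parameters are not reproduced). Each pixel is wired to a few TDCs so that the set of triggered TDCs uniquely identifies any small set of simultaneously firing pixels.

```python
# Hypothetical wiring: pixel (a, b) represents the line y = a*x + b over
# GF(5) and is connected to TDC (x, y) exactly when the line passes
# through (x, y).  Two distinct lines share at most one point, so any
# pixel outside a set of up to 4 firing pixels keeps at least one quiet
# TDC, making the firing set uniquely decodable.
q = 5
n_pixels = q * q            # pixel index p = 5*a + b
n_tdcs = q * q              # TDC index t = 5*x + y

def tdcs_of(p):
    a, b = divmod(p, q)
    return [q * x + (a * x + b) % q for x in range(q)]

def decode(triggered):
    """COMP decoding: a pixel is reported as fired iff every one of its
    TDCs triggered (exact here for up to 4 simultaneous firings)."""
    return {p for p in range(n_pixels)
            if all(t in triggered for t in tdcs_of(p))}

fired = {7, 19}                                    # pixels (1,2) and (3,4)
triggered = {t for p in fired for t in tdcs_of(p)}
print(sorted(decode(triggered)))                   # -> [7, 19]
```

Real designs such as the one in the abstract compress far more aggressively (14,400 pixels onto 161 TDCs); this toy design only shows why unions of small pixel sets remain distinguishable.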
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning d-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we turn this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower
bound of Ω(d² log n / log d) that is known for exact reconstruction of
d-sparse vectors of length n via non-adaptive measurements, by a
multiplicative factor of Ω̃(d).
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with m = O(d log n) measurements, that allow efficient
reconstruction of d-sparse vectors up to O(d) false positives even in the
presence of δm false positives and O(d) false negatives within the
measurement outcomes, for any constant δ < 1. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using
more measurements. We also obtain explicit constructions that allow fast
reconstruction in time poly(m), which would be sublinear in n for sufficiently
sparse vectors. The main tool used in our constructions is
the list-decoding view of randomness condensers and extractors. Comment: Full
version. A preliminary summary of this work appears (under the same title) in
the proceedings of the 17th International Symposium on Fundamentals of
Computation Theory (FCT 2009).
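The noisy regime described above can be illustrated with a toy scheme (Chinese-remainder tests with a tolerant COMP-style decoder, chosen for illustration; the paper's condenser-based construction is not reproduced): disjunctive measurements, a few corrupted outcomes, and a decoder that tolerates the noise at the cost of a few false positives.

```python
# Toy non-adaptive group testing: tests built from residues modulo small
# primes; an item belongs to one test per prime.  Two distinct defectives
# can collide with a non-defective in at most two of the three primes,
# so noiseless COMP decoding is exact for d = 2.
primes = (3, 5, 7)
n = 15                                    # items 0..14, d = 2 defectives
tests = [(p, r) for p in primes for r in range(p)]

def outcomes(defectives, flipped=frozenset()):
    """Disjunctive (OR) outcomes; tests listed in `flipped` are corrupted."""
    out = {}
    for p, r in tests:
        val = any(i % p == r for i in defectives)
        out[(p, r)] = (not val) if (p, r) in flipped else val
    return out

def decode(out, e=0):
    """Declare an item present if at most e of its tests read negative."""
    return {i for i in range(n)
            if sum(not out[(p, i % p)] for p in primes) <= e}

defectives = {3, 7}
print(sorted(decode(outcomes(defectives))))        # exact: [3, 7]

# One outcome flipped to negative: raising the tolerance e recovers both
# defectives but admits a couple of false positives -- exactly the
# approximate-reconstruction trade-off discussed in the abstract.
noisy = outcomes(defectives, flipped={(3, 0)})
print(sorted(decode(noisy, e=1)))
```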
Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices
In this paper we establish the connection between Orthogonal Optical Codes
(OOC) and binary compressed sensing matrices. We also introduce
deterministic bipolar RIP-fulfilling matrices of order m × n. The columns of
these matrices are binary BCH code vectors in which the zeros are replaced
by −1. Since the RIP is established by means of coherence,
the simple greedy algorithms such as Matching Pursuit are able to recover the
sparse solution from the noiseless samples. Due to the cyclic property of the
BCH codes, we show that the FFT algorithm can be employed in the reconstruction
methods to considerably reduce the computational complexity. In addition, we
combine the binary and bipolar matrices to form ternary sensing matrices
(0, ±1 elements) that satisfy the RIP condition. Comment: The paper is
accepted for publication in IEEE Transactions on Information Theory.
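The coherence-based recovery claim can be sketched with a small bipolar matrix and orthogonal matching pursuit. The columns below are generic distinct binary words with 0 mapped to −1 (stand-ins for the BCH codewords; parameters are illustrative only); the standard guarantee is that greedy pursuit recovers every k-sparse signal when k < (1 + 1/μ)/2 for coherence μ.

```python
import numpy as np

# Bipolar matrix: 6-bit binary expansions of 1..20, each 0 replaced by -1.
m, n = 6, 20
A = np.array([[1.0 if (i >> b) & 1 else -1.0 for i in range(1, n + 1)]
              for b in range(m)])

# Coherence: largest normalized inner product between distinct columns.
G = np.abs(A.T @ A) / m
np.fill_diagonal(G, 0.0)
mu = G.max()              # = 2/3 here, so k-sparse recovery is
                          # guaranteed for k < (1 + 1/mu)/2 = 1.25

def omp(A, y, k):
    """Orthogonal Matching Pursuit: pick the column most correlated
    with the residual, then re-fit on the chosen support."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_true = np.zeros(n); x_true[7] = 2.5
x_hat = omp(A, A @ x_true, 1)
print(mu, np.allclose(x_hat, x_true))
```

With actual cyclic BCH columns, the correlations A.T @ residual above are circular correlations and can be computed with the FFT, which is the complexity saving the abstract refers to.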
Composition of Binary Compressed Sensing Matrices
In the recent past, various methods have been proposed to construct deterministic compressed sensing (CS) matrices. Of particular interest has been the construction of binary sensing matrices, as they are useful for multiplierless and faster dimensionality reduction. In most of these binary constructions, the matrix size depends on primes or their powers. In this study, we propose a composition rule which exploits the sparsity and block structure of existing binary CS matrices to construct matrices of general size. We also show that these matrices satisfy optimal theoretical guarantees and have density similar to that of matrices obtained using the Kronecker product. Simulation work shows that the synthesized matrices provide results comparable to those of Gaussian random matrices.
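The abstract does not spell out its composition rule, but the Kronecker-product baseline it compares against is easy to sketch: composing two binary matrices this way preserves binarity, and the density (fraction of ones) of the product is the product of the factor densities, so composed matrices stay sparse.

```python
import numpy as np

# Two small binary sensing matrices (illustrative, not from the paper).
A = np.array([[1, 0, 1],
              [0, 1, 1]])
B = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0]])

C = np.kron(A, B)          # shape (2*3, 3*4) = (6, 12), still binary

# Density multiplies: mean(C) = mean(A) * mean(B).
print(C.shape, set(C.ravel()) <= {0, 1})
print(np.isclose(C.mean(), A.mean() * B.mean()))
```

Note how the Kronecker product only reaches sizes that factor as products of the component sizes, which is the size restriction a general composition rule aims to remove.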
Convolutional compressed sensing using deterministic sequences
This is the author's accepted manuscript (with working title "Semi-universal convolutional compressed sensing using (nearly) perfect sequences"). The final published article is available from the link below. Copyright © 2012 IEEE.
In this paper, a new class of orthogonal circulant matrices built from deterministic sequences is proposed for convolution-based compressed sensing (CS). In contrast to random convolution, the coefficients of the underlying filter are given by the discrete Fourier transform of a deterministic sequence with good autocorrelation. Both uniform recovery and non-uniform recovery of sparse signals are investigated, based on the coherence parameter of the proposed sensing matrices. Many examples of such sequences are investigated, particularly the Frank-Zadoff-Chu (FZC) sequence, the m-sequence and the Golay sequence. A salient feature of the proposed sensing matrices is that they can handle not only sparse signals in the time domain, but also those in the frequency and/or discrete cosine transform (DCT) domain.
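The orthogonality the abstract relies on can be checked directly for the FZC case (a minimal sketch with assumed parameters, not the paper's full sensing pipeline): a Zadoff-Chu sequence of odd length has ideal periodic autocorrelation, so the circulant matrix built from it is orthogonal up to a scale factor.

```python
import numpy as np

# Zadoff-Chu sequence of odd prime length N with root u coprime to N.
N, u = 11, 1
nvec = np.arange(N)
x = np.exp(-1j * np.pi * u * nvec * (nvec + 1) / N)

# Circulant matrix whose rows are the cyclic shifts of x.  Ideal
# periodic autocorrelation means the rows are mutually orthogonal,
# so C @ C^H = N * I.
C = np.array([np.roll(x, s) for s in range(N)])
gram = C @ C.conj().T
print(np.allclose(gram, N * np.eye(N)))
```

A sensing matrix is then obtained by keeping a subset of rows of such a circulant, which is what makes convolution followed by subsampling a valid CS measurement.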
Potential of quantum finite automata with exact acceptance
The potential of exact quantum information processing is an interesting,
important and intriguing issue. For example, it has been believed that quantum
tools can provide significant (that is, more than polynomial) advantages in
the case of exact quantum computation only, or mainly, for problems with very
special structures. We will show that this is not the case.
In this paper the potential of quantum finite automata producing outcomes not
only with a (high) probability but with certainty (so-called exactly) is
explored in the context of their use for solving promise problems and with
respect to the size of the automata. It is shown that for solving particular
classes of promise problems, even those without any very special structure,
the succinctness of the exact quantum finite automata under consideration,
measured by the number of (basis) states, can be very small (indeed constant),
whereas it grows in proportion to a parameter of the promise problem when
deterministic finite automata (DFAs) of the same power are used. This is
demonstrated here also for the case that the component languages of the
promise problems solvable by DFAs are non-regular. The method used can be
applied to find more exact quantum finite automata or quantum algorithms for
other promise problems. Comment: We have improved the presentation of the
paper. Accepted to the International Journal of Foundations of Computer
Science.
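A sketch in the spirit of the results above (with assumed parameters, not taken from the paper): a 2-state "rotation" quantum finite automaton over a unary alphabet. Each input symbol rotates the state by π/(2N); inputs are promised to have length ≡ 0 or ≡ N (mod 2N), and the automaton accepts the first case with probability 1 and the second with probability 0, while a DFA separating the two promise cases needs a number of states that grows with N.

```python
import numpy as np

N = 8
theta = np.pi / (2 * N)     # rotation applied per input symbol

def accept_probability(length):
    """Simulate the 2-state rotation QFA on a unary input of the
    given length; measure at the end in the computational basis."""
    state = np.array([1.0, 0.0])                 # start in |0>, the accept state
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    for _ in range(length):
        state = rot @ state
    return state[0] ** 2                         # probability of observing |0>

print(accept_probability(2 * N))   # length = 0 (mod 2N): accepted with certainty
print(accept_probability(N))       # length = N (mod 2N): rejected with certainty
```

On inputs violating the promise the acceptance probability is strictly between 0 and 1, which is why the exactness claim only holds for promise problems.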