Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons
In this letter, we propose a sparsity promoting feedback acquisition and
reconstruction scheme for sensing, encoding and subsequent reconstruction of
spectrally sparse signals. In the proposed scheme, the spectral components are
estimated utilizing a sparsity-promoting, sliding-window algorithm in a
feedback loop. Utilizing the estimated spectral components, a level signal is
predicted and sign measurements of the prediction error are acquired. The
sparsity promoting algorithm can then estimate the spectral components
iteratively from the sign measurements. Unlike many batch-based Compressive
Sensing (CS) algorithms, our proposed algorithm gradually estimates and follows
slow changes in the sparse components utilizing a sliding-window technique. We
also consider the scenario in which possible flipping errors in the sign bits
propagate along iterations (due to the feedback loop) during reconstruction. We
propose an iterative error correction algorithm to cope with this error
propagation phenomenon considering a binary-sparse occurrence model on the
error sequence. Simulation results show the effective performance of the proposed scheme in comparison with existing methods in the literature.
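As a rough illustration, the sign-measurement feedback loop described above can be sketched as follows, with a toy DFT-sparse signal and a deliberately crude sign-driven spectral update standing in for the paper's sparsity-promoting sliding-window estimator (all sizes and the step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrally sparse signal: a few active DFT components (hypothetical setup,
# not the authors' exact model).
n, k = 256, 3
spectrum = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
spectrum[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = np.fft.ifft(spectrum).real

# Feedback acquisition sketch: at each step, predict the next sample level from
# the current spectral estimate and record only the sign of the prediction error.
est_spectrum = np.zeros(n, dtype=complex)
signs = np.zeros(n)
mu = 0.5  # step size for the crude iterative update (assumed value)
for t in range(n):
    prediction = np.fft.ifft(est_spectrum).real[t]   # predicted level
    signs[t] = np.sign(x[t] - prediction)            # 1-bit measurement
    # Crude sign-driven refinement standing in for the paper's
    # sparsity-promoting estimator: nudge the spectrum toward the residual sign.
    residual = np.zeros(n)
    residual[t] = signs[t]
    est_spectrum += mu * np.fft.fft(residual)
```

The key point mirrored here is that the encoder only transmits `signs`, and the decoder's running spectral estimate closes the feedback loop.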
Variational Bayesian algorithm for quantized compressed sensing
Compressed sensing (CS) concerns the recovery of high-dimensional signals from low-dimensional linear measurements under a sparsity prior; digital quantization of the measurement data is inevitable in practical implementations of CS algorithms. In the existing literature, the quantization error is typically modeled as additive noise, and the multi-bit and 1-bit quantized CS problems are dealt with separately using different treatments and procedures. In this
paper, a novel variational Bayesian inference based CS algorithm is presented,
which unifies the multi- and 1-bit CS processing and is applicable to various
cases of noiseless/noisy environment and unsaturated/saturated quantizer. By
decoupling the quantization error from the measurement noise, the quantization
error is modeled as a random variable and estimated jointly with the signal
being recovered. Such a novel characterization of the quantization error
results in superior performance of the algorithm which is demonstrated by
extensive simulations in comparison with state-of-the-art methods for both
multi-bit and 1-bit CS problems.
Comment: Accepted by IEEE Trans. Signal Processing. 10 pages, 6 figures.
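A minimal sketch of the measurement model, with the quantization error e = y − z kept separate from the measurement noise as the paper advocates (dimensions, noise level, and quantizer step are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the quantized CS measurement model (illustrative dimensions,
# not taken from the paper).
n, m, k = 100, 40, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

z = A @ x + 0.01 * rng.standard_normal(m)    # pre-quantization measurement

# Uniform multi-bit midpoint quantizer with step delta; the 1-bit case
# corresponds to keeping only np.sign(z).
delta = 0.1
y = delta * np.floor(z / delta) + delta / 2  # quantized output

# The paper's key modeling step: treat e = y - z as a bounded random variable
# to be estimated jointly with x, rather than as noise folded into n.
e = y - z
```

For this midpoint quantizer, |e| is bounded by delta/2, which is what makes treating `e` as a separate random variable (rather than unbounded Gaussian noise) informative.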
Dictionary Learning for Blind One Bit Compressed Sensing
This letter proposes a dictionary learning algorithm for blind one bit
compressed sensing. In the blind one bit compressed sensing framework, the
original signal to be reconstructed from one bit linear random measurements is
sparse in an unknown domain. In this context, the product of the measurement matrix A and the sparse domain matrix Φ, i.e., D = AΦ, should be learned. Hence, we use dictionary learning to train this matrix. Towards that
end, an appropriate continuous convex cost function is suggested for one bit
compressed sensing and a simple steepest-descent method is exploited to learn
the rows of the matrix D. Experimental results show the effectiveness of the proposed algorithm compared with the case of no dictionary learning, especially as the number of training signals and the number of sign measurements increase.
Comment: 5 pages, 3 figures.
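As a sketch of the row-wise learning step, the following uses a hinge-type convex one-bit consistency cost and plain (sub)gradient descent; this specific cost function is an assumption standing in for the one proposed in the letter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes; the goal is only to illustrate learning one row of
# D = A @ Phi by steepest descent on a convex one-bit consistency cost.
n, T = 50, 200
S = rng.standard_normal((n, T))    # training signals (toy: dense, not sparse)
d_true = rng.standard_normal(n)
y = np.sign(d_true @ S)            # one-bit measurement of each training signal

def cost_grad(d, S, y):
    """Hinge cost sum_t max(0, -y_t * <d, s_t>) and a subgradient."""
    margins = y * (d @ S)
    active = margins < 0
    c = np.sum(-margins[active])
    g = -(S[:, active] * y[active]).sum(axis=1)
    return c, g

d = rng.standard_normal(n)
step = 0.01
for _ in range(500):
    c, g = cost_grad(d, S, y)
    if c == 0:
        break
    d -= step * g
    d /= np.linalg.norm(d)  # one-bit measurements fix only direction, so
                            # keep the row on the unit sphere

# After training, the learned row should reproduce most measurement signs.
agreement = np.mean(np.sign(d @ S) == y)
```

The unit-norm projection reflects a standard property of one-bit measurements: they carry no amplitude information about a row of D, only its direction.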
One-bit Compressed Sensing in the Presence of Noise
Many modern real-world systems generate large amounts of high-dimensional data, stressing the available computing and signal processing systems. In resource-constrained settings, it is desirable to process, store, and transmit as little data as possible. It has been shown that one can obtain acceptable performance for tasks such as inference and reconstruction using fewer bits of data by exploiting low-dimensional structures in the data, such as sparsity. This dissertation investigates the signal acquisition paradigm known as one-bit compressed sensing (one-bit CS) for signal reconstruction and parameter estimation.
We first consider the problem of joint sparse support estimation with one-bit measurements in a distributed setting. Each node observes sparse signals with the same but unknown support. The goal is to minimize the probability of error of support estimation. First, we study the performance of maximum likelihood (ML) estimation of the support set from one-bit compressed measurements when all these measurements are available at the fusion center. We provide a lower bound on the number of one-bit measurements required per node for vanishing probability of error. Though the ML estimator is optimal, its computational complexity increases exponentially with the signal dimension. So, we propose computationally tractable algorithms in a centralized setting. Further, we extend these algorithms to a decentralized setting where each node can communicate only with its one-hop neighbors. The proposed method shows excellent estimation performance even in the presence of noise.
In the second part of the dissertation, we investigate the problem of sparse signal reconstruction from noisy one-bit compressed measurements using, as an aid, a signal that is statistically dependent on the compressed signal. We refer to this signal as side-information (SI). We consider a generalized measurement model of one-bit CS where noise is assumed to be added at two stages of the measurement process: (a) before quantization and (b) after quantization. We model the noise before quantization as additive white Gaussian noise and the noise after quantization as sign-flip noise generated from a Bernoulli distribution. We assume that the SI at the receiver is noisy. The noise in the SI can be in the support, in the amplitude, or in both; this suggests that the noise has a sparse structure. We use additive independent and identically distributed Laplacian noise to model this sparse nature of the noise. In this setup, we develop tractable algorithms that approximate the minimum mean square error (MMSE) estimator of the signal. We consider the following three SI-based scenarios:
1. The side-information is assumed to be a noisy version of the signal. The noise is independent of the signal and follows the Laplacian distribution. We do not assume any temporal dependence in the signal.
2. The signal exhibits temporal dependencies between the current time instant and the previous time instant, modeled using the birth-death-drift (BDD) model. The side-information is a noisy version of the previous time instant's signal, which is statistically dependent on the current signal as defined by the BDD model.
3. The SI available at the receiver is heterogeneous. The signal and side-information are from different modalities and may not share a joint sparse representation. We assume that the SI and the sparse signal are dependent and use a copula function to model the dependence.
In each of these scenarios, we develop generalized approximate message passing-based algorithms to approximate the minimum mean square error estimate. Numerical results show the effectiveness of the proposed algorithms.
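The two-stage noisy one-bit model and the Laplacian SI noise described above can be sketched as follows (all dimensions and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the two-stage noisy one-bit measurement model (dimensions and
# noise levels are illustrative, not from the dissertation).
n, m, k = 128, 64, 6
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

sigma = 0.05       # pre-quantization AWGN level (assumed)
flip_prob = 0.05   # post-quantization Bernoulli sign-flip probability (assumed)

# Stage (a): additive white Gaussian noise before the 1-bit quantizer.
y_clean = np.sign(A @ x + sigma * rng.standard_normal(m))
# Stage (b): Bernoulli sign-flip noise after quantization.
flips = rng.random(m) < flip_prob
y = np.where(flips, -y_clean, y_clean)

# Noisy side-information: signal plus i.i.d. Laplacian perturbation, whose
# heavy tails mimic sparse support/amplitude errors in the SI.
si = x + rng.laplace(scale=0.1, size=n)
```

The Laplacian choice for the SI noise is exactly the modeling step described above: its heavy-tailed density concentrates the perturbation on a few coordinates, matching sparse support or amplitude mismatch.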
In the final part of the dissertation, we propose two one-bit compressed sensing reconstruction algorithms that use a deep neural network as a prior on the signal. In the first algorithm, we use a trained generative model, such as a generative adversarial network (GAN) or a variational autoencoder (VAE), as a prior. This trained network is used to reconstruct the compressed signal from one-bit measurements by searching over its range. We provide theoretical guarantees on the reconstruction accuracy and sample complexity of the presented algorithm. In the second algorithm, we investigate an untrained neural network architecture that acts as a good prior on natural signals such as images and audio. We formulate an optimization problem to reconstruct the signal from one-bit measurements using this untrained network. We demonstrate the superior performance of the proposed algorithms through numerical results. Further, in contrast to competing model-based algorithms, we demonstrate that the proposed algorithms estimate both the direction and the magnitude of the compressed signal from one-bit measurements.
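The range-search idea in the first algorithm can be illustrated with a toy linear stand-in for the trained generator; a real GAN/VAE decoder is nonlinear, and the hinge-type sign-consistency loss below is an assumed surrogate, not the dissertation's objective:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for a trained generative prior: a fixed linear map G whose
# range plays the role of the decoder's range.
n, latent_dim, m = 100, 5, 60
G = rng.standard_normal((n, latent_dim))
z_true = rng.standard_normal(latent_dim)
x_true = G @ z_true

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = np.sign(A @ x_true)              # one-bit measurements

# Search over the range: descend a hinge-type sign-consistency loss in the
# latent variable z (assumed surrogate objective).
B = A @ G                            # effective m x latent_dim sensing map
z = np.zeros(latent_dim)
step = 0.01
for _ in range(2000):
    margins = y * (B @ z)
    active = margins < 1.0           # hinge margin of 1 (assumed)
    if not np.any(active):
        break
    grad = -(B[active] * y[active, None]).sum(axis=0)
    z -= step * grad

x_hat = G @ z
sign_agreement = np.mean(np.sign(A @ x_hat) == y)
```

Restricting the search to the generator's range is what compensates for the severe information loss of 1-bit quantization: only `latent_dim` degrees of freedom must be pinned down by the `m` sign constraints.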
Expectation propagation on the diluted Bayesian classifier
Efficient feature selection from high-dimensional datasets is a very
important challenge in many data-driven fields of science and engineering. We
introduce a statistical mechanics inspired strategy that addresses the problem
of sparse feature selection in the context of binary classification by
leveraging a computational scheme known as expectation propagation (EP). The
algorithm is used in order to train a continuous-weights perceptron learning a
classification rule from a set of (possibly partly mislabeled) examples
provided by a teacher perceptron with diluted continuous weights. We test the
method in the Bayes optimal setting under a variety of conditions and compare
it to other state-of-the-art algorithms based on message passing and on
expectation maximization approximate inference schemes. Overall, our
simulations show that EP is a robust and competitive algorithm in terms of
variable selection properties, estimation accuracy and computational
complexity, especially when the student perceptron is trained from correlated
patterns that prevent other iterative methods from converging. Furthermore, our
numerical tests demonstrate that the algorithm is capable of learning online
the unknown values of prior parameters, such as the dilution level of the
weights of the teacher perceptron and the fraction of mislabeled examples,
quite accurately. This is achieved by means of a simple maximum likelihood
strategy that consists in minimizing the free energy associated with the EP
algorithm.
Comment: 24 pages, 6 figures.
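The teacher-student setup described above can be sketched as a data-generation snippet (dimension, dilution level, and mislabeling rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Teacher-student setup from the description above (sizes, dilution level,
# and label-noise rate are illustrative assumptions).
n, p = 200, 400          # input dimension, number of examples
dilution = 0.8           # fraction of zeroed teacher weights
mislabel = 0.1           # fraction of flipped labels

w_teacher = rng.standard_normal(n)
w_teacher[rng.random(n) < dilution] = 0.0    # diluted continuous weights

X = rng.standard_normal((p, n))  # i.i.d. patterns (the paper also stresses
                                 # correlated patterns, where EP still converges)
y = np.sign(X @ w_teacher)
flip = rng.random(p) < mislabel
y[flip] *= -1                    # partly mislabeled examples
```

A student perceptron trained on `(X, y)` by EP would then have to recover both the sparse support of `w_teacher` and, online, the unknown hyperparameters `dilution` and `mislabel`.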