Dynamic Iterative Pursuit
For compressive sensing of dynamic sparse signals, we develop an iterative
pursuit algorithm. A dynamic sparse signal process is characterized by varying
sparsity patterns over time/space. For such signals, the developed algorithm is
able to incorporate sequential predictions, thereby providing better
compressive sensing recovery performance without incurring high complexity.
Through experimental evaluations, we observe that the new algorithm degrades
gracefully as signal conditions deteriorate, while yielding substantial
performance gains as conditions improve.
Comment: 6 pages, 7 figures. Accepted for publication in IEEE Transactions on Signal Processing
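The abstract does not spell out the algorithm, but the idea of folding sequential predictions into a greedy pursuit can be illustrated with a minimal prediction-weighted variant of orthogonal matching pursuit. This is a sketch, not the paper's exact method: the `prior` weights, problem dimensions, and 10x bias toward the predicted support are all invented for illustration.

```python
import numpy as np

def prediction_weighted_omp(A, y, prior, k):
    """Greedy pursuit whose atom selection is biased by prior weights
    derived from a sequential prediction. Hypothetical sketch, not the
    paper's exact algorithm."""
    n = A.shape[1]
    support = []
    residual = y.copy()
    x = np.zeros(n)
    for _ in range(k):
        scores = prior * np.abs(A.T @ residual)  # prediction-weighted correlations
        scores[support] = 0.0                    # never reselect an atom
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - As @ coef
    return x

# Toy usage: a 3-sparse signal whose true support the "prediction" up-weights.
rng = np.random.default_rng(0)
n, m, k = 64, 24, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true
prior = np.full(n, 0.1)
prior[[5, 20, 41]] = 1.0                         # predicted support, weighted 10x
x_hat = prediction_weighted_omp(A, y, prior, k)
```

When the prediction is accurate, the weighting steers selection toward the true support, which is one way "graceful degradation" can arise: a poor prediction merely flattens the weights back toward plain matching pursuit.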
Unveiling The Tree: A Convex Framework for Sparse Problems
This paper presents a general framework for generating greedy algorithms for
solving convex constraint satisfaction problems for sparse solutions by mapping
the satisfaction problem into one of graph traversal on a rooted tree of
unknown topology. For every pre-walk of the tree, an initial set of generally
dense feasible solutions is processed in such a way that the sparsity of each
solution increases with each generation unveiled. The specific computation
performed at any particular child node is shown to correspond to an embedding
of a polytope into the polytope received from that node's parent. Several issues
related to pre-walk order selection, computational complexity and tractability,
and the use of heuristic and/or side information are discussed. An example of a
single-path, depth-first algorithm on a tree with randomized vertex reduction
and a run-time path selection algorithm is presented in the context of sparse
lowpass filter design.
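The per-node polytope embedding is beyond an abstract-level sketch, but the pre-walk (pre-order) visit order the framework relies on is standard tree traversal. A minimal iterative version, with the tree represented as an adjacency dict (the polytope computation at each child is omitted):

```python
def pre_walk(tree, root):
    """Pre-order (pre-walk) traversal of a rooted tree given as an
    adjacency dict {node: [children]}. Returns the order in which
    generations of the tree would be unveiled."""
    order = []
    stack = [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree.get(node, [])))  # visit children left-to-right
    return order

tree = {"r": ["a", "b"], "a": ["c", "d"]}
print(pre_walk(tree, "r"))  # ['r', 'a', 'c', 'd', 'b']
```

The single-path, depth-first variant mentioned in the abstract corresponds to following only one child per node, with a run-time rule choosing which.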
Deep Networks for Compressed Image Sensing
The compressed sensing (CS) theory has been successfully applied to image
compression in the past few years as most image signals are sparse in a certain
domain. Several CS reconstruction models have been recently proposed and
obtained superior performance. However, there still exist two important
challenges within the CS theory. The first is how to design a sampling
mechanism that achieves optimal sampling efficiency, and the second is how to
perform reconstruction that achieves the highest-quality signal recovery. In
this paper, we address these two problems with a deep network. First, we train
the sampling matrix as part of the network instead of using a traditional,
manually designed one, which makes it better suited to our deep-network-based
reconstruction process. Then, we propose a deep network to recover the image,
which imitates traditional compressed sensing reconstruction processes.
Experimental results demonstrate that our deep-network-based CS reconstruction
method offers a significant quality improvement over state-of-the-art methods.
Comment: This paper has been accepted by the IEEE International Conference on
Multimedia and Expo (ICME) 201
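As a loose illustration of training the sampling matrix jointly with the reconstruction, here is a sketch that replaces the paper's deep network with a linear "network" (a linear autoencoder) trained by plain gradient descent. The dimensions, data model, and learning rate are invented; the point is only that the measurement operator `Phi` is itself a trainable parameter rather than a fixed, hand-designed matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 4                              # signal dimension, measurement count
X = rng.standard_normal((n, 200))         # training signals as columns
X[4:, :] *= 0.05                          # energy concentrated in 4 coordinates

Phi = 0.1 * rng.standard_normal((m, n))   # learnable sampling matrix
D = 0.1 * rng.standard_normal((n, m))     # learnable linear reconstruction
lr, n_samples = 0.01, X.shape[1]

loss_init = np.linalg.norm(D @ (Phi @ X) - X)
for _ in range(3000):
    R = D @ (Phi @ X) - X                 # reconstruction residual
    grad_D = (R @ (Phi @ X).T) / n_samples
    grad_Phi = (D.T @ R @ X.T) / n_samples
    D -= lr * grad_D                      # joint gradient step on both
    Phi -= lr * grad_Phi                  # the sampler and the reconstructor
loss_final = np.linalg.norm(D @ (Phi @ X) - X)
```

Because the signals concentrate their energy in a few coordinates, the learned `Phi` adapts to that structure, which is the intuition behind learning the sampling rather than fixing it in advance.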
Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing
The random demodulator is a recent compressive sensing architecture providing
efficient sub-Nyquist sampling of sparse band-limited signals. The compressive
sensing paradigm requires an accurate model of the analog front-end to enable
correct signal reconstruction in the digital domain. In practice, hardware
devices such as filters deviate from their desired design behavior due to
component variations. Existing reconstruction algorithms are sensitive to such
deviations, which fall into the more general category of measurement matrix
perturbations. This paper proposes a model-based technique that aims to
calibrate filter model mismatches to facilitate improved signal reconstruction
quality. The mismatch is considered to be an additive error in the discretized
impulse response. We identify the error by sampling a known calibrating signal,
enabling least-squares estimation of the impulse response error. The error
estimate and the known system model are used to calibrate the measurement
matrix. Numerical analysis demonstrates the effectiveness of the calibration
method even for highly deviating low-pass filter responses. The performance of
the proposed method is also compared to a state-of-the-art method based on
discrete Fourier transform trigonometric interpolation.
Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing
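The calibration step described in the abstract reduces to ordinary least squares once convolution with the known calibrating signal is written as a matrix. A minimal noiseless sketch (filter length, calibrating signal, and mismatch level are all invented):

```python
import numpy as np

def convolution_matrix(s, L):
    """Tall Toeplitz matrix S such that S @ h == np.convolve(s, h)
    for any impulse response h of length L (full convolution)."""
    N = len(s)
    S = np.zeros((N + L - 1, L))
    for j in range(L):
        S[j:j + N, j] = s
    return S

rng = np.random.default_rng(2)
L = 8
s = rng.standard_normal(64)               # known calibrating signal
h_nom = np.hanning(L)                     # nominal (designed) filter response
e_true = 0.05 * rng.standard_normal(L)    # unknown additive mismatch

y = np.convolve(s, h_nom + e_true)        # output of the real (deviating) filter

# Least-squares estimate of the impulse-response error: the residual
# y - S @ h_nom is linear in the error e, so solve S e = residual.
S = convolution_matrix(s, L)
e_hat, *_ = np.linalg.lstsq(S, y - S @ h_nom, rcond=None)
h_cal = h_nom + e_hat                     # calibrated filter model
```

The calibrated response `h_cal` would then be used to rebuild the measurement matrix before reconstruction; with measurement noise, `e_hat` becomes the usual noisy least-squares estimate rather than an exact recovery.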
Role of homeostasis in learning sparse representations
Neurons in the input layer of primary visual cortex in primates develop
edge-like receptive fields. One approach to understanding the emergence of this
response is to state that neural activity has to efficiently represent sensory
data with respect to the statistics of natural scenes. Furthermore, it is
believed that such an efficient coding is achieved using a competition across
neurons so as to generate a sparse representation, that is, one in which a
relatively small number of neurons is simultaneously active. Indeed, different models of
sparse coding, coupled with Hebbian learning and homeostasis, have been
proposed that successfully match the observed emergent response. However, the
specific role of homeostasis in learning such sparse representations is still
largely unknown. By quantitatively assessing the efficiency of the neural
representation during learning, we derive a cooperative homeostasis mechanism
that optimally tunes the competition between neurons within the sparse coding
algorithm. We apply this homeostasis while learning small patches taken from
natural images and compare its efficiency with state-of-the-art algorithms.
Results show that while different sparse coding algorithms give similar coding
results, the homeostasis provides an optimal balance for the representation of
natural images within the population of neurons. Competition in sparse coding
is optimized when it is fair. By contributing to optimizing statistical
competition across neurons, homeostasis is crucial in providing a more
efficient solution to the emergence of independent components.
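As a toy illustration of homeostasis making competition "fair", here is a gain-modulated winner-take-all selection whose gains push atom usage toward uniform. The gain rule and every parameter below are invented for the sketch, not the paper's mechanism; the inputs are anisotropic Gaussians standing in for image patches.

```python
import numpy as np

rng = np.random.default_rng(3)
n_atoms, dim, beta = 10, 16, 50.0
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms

scales = np.linspace(2.0, 0.2, dim)        # anisotropic input statistics:
usage = np.zeros(n_atoms)                  # without homeostasis, a few atoms dominate

for t in range(1, 2001):
    x = scales * rng.standard_normal(dim)  # stand-in for a natural-image patch
    # Homeostatic gain: boost rarely selected atoms, damp overused ones,
    # steering selection frequencies toward uniform ("fair" competition).
    gain = np.exp(-beta * (usage / t - 1.0 / n_atoms))
    k = int(np.argmax(gain * np.abs(D.T @ x)))  # gain-modulated winner-take-all
    usage[k] += 1
```

After enough patches, every atom participates and no atom monopolizes the code, which is the sense in which homeostasis equalizes the statistical competition across neurons.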