Spatiotemporal Sparse Bayesian Learning with Applications to Compressed Sensing of Multichannel Physiological Signals
Energy consumption is an important issue in continuous wireless
telemonitoring of physiological signals. Compressed sensing (CS) is a promising
framework to address it, due to its energy-efficient data compression
procedure. However, most CS algorithms have difficulty in data recovery due to
the non-sparse nature of many physiological signals. Block sparse
Bayesian learning (BSBL) is an effective approach to recover such signals with
satisfactory recovery quality. However, it is time-consuming in recovering
multichannel signals, since its computational load almost linearly increases
with the number of channels.
This work proposes a spatiotemporal sparse Bayesian learning algorithm to
recover multichannel signals simultaneously. It not only exploits temporal
correlation within each channel signal, but also exploits inter-channel
correlation among different channel signals. Furthermore, its computational
load is not significantly affected by the number of channels. The proposed
algorithm was applied to brain computer interface (BCI) and EEG-based driver's
drowsiness estimation. Results showed that the algorithm had both better
recovery performance and much higher speed than BSBL. Particularly, the
proposed algorithm ensured that the BCI classification and the drowsiness
estimation had little degradation even when data were compressed by 80%, making
it very suitable for continuous wireless telemonitoring of multichannel
signals.
Comment: Codes are available at:
https://sites.google.com/site/researchbyzhang/stsb
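The energy-efficient encoder side described in this abstract can be illustrated with a short sketch: one sparse binary sensing matrix compresses all channels of an epoch in a single matrix product, so the sensor only performs a few additions per sample. The epoch size, the two-nonzeros-per-column choice, and the random stand-in data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, c = 256, 64, 8   # samples per epoch, measurements per epoch, channels

# Sparse binary sensing matrix (assumption: 2 nonzero entries per column),
# a common choice for low-power telemonitoring encoders.
Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=2, replace=False)
    Phi[rows, j] = 1.0

# One epoch of c-channel data (random stand-in for multichannel EEG).
X = rng.standard_normal((n, c))

# Compress every channel with the same matrix in one product:
# 64/256 = 25% of the samples are transmitted, i.e. 75% compression.
Y = Phi @ X
print(Y.shape)   # (64, 8)
```

Recovery of `Y` back to `X` is the job of the proposed spatiotemporal algorithm (code at the link above); this sketch only shows why the encoder is cheap.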
Compressive Sampling for Remote Control Systems
In remote control, efficient compression or representation of control signals
is essential to send them through rate-limited channels. For this purpose, we
propose an approach of sparse control signal representation using the
compressive sampling technique. The problem of obtaining sparse representation
is formulated by cardinality-constrained L2 optimization of the control
performance, which is reducible to L1-L2 optimization. The low-rate random
sampling employed in the proposed method, together with the fact that the
L1-L2 optimization can be solved efficiently by a fast iterative method,
enables us to generate the sparse control signal with reduced computational
complexity. This is preferable in remote control systems, where computation
delays seriously degrade performance. We give a
theoretical result for control performance analysis based on the notion of
restricted isometry property (RIP). A numerical example illustrates the
effectiveness of the proposed approach.
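The L1-L2 optimization mentioned above can be sketched with a standard iterative soft-thresholding (ISTA) solver; the paper's exact fast iteration method may differ, and the problem sizes, signal, and regularization weight below are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding for min_u 0.5*||y - A u||^2 + lam*||u||_1.
    A textbook solver sketch, not the paper's algorithm."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = u - A.T @ (A @ u - y) / L  # gradient step on the L2 term
        u = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return u

rng = np.random.default_rng(1)
n, k, m = 128, 5, 48                   # signal length, sparsity, low-rate samples
u_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
u_true[support] = rng.choice([-1.0, 1.0], k) * (1.0 + rng.random(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)   # low-rate random sampling matrix
y = A @ u_true                                 # rate-limited measurements
u_hat = ista(A, y, lam=0.01)
```

With measurements well below the signal length, the sparse signal is recovered to small relative error, which is the property the paper's RIP-based analysis quantifies for the control setting.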
The application of compressive sampling to radio astronomy I: Deconvolution
Compressive sampling is a new paradigm for sampling, based on sparseness of
signals or signal representations. It is much less restrictive than
Nyquist-Shannon sampling theory and thus explains and systematises the
widespread experience that methods such as the H\"ogbom CLEAN can violate the
Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution
method for extended sources is introduced. This method can reconstruct both
point sources and extended sources (using the isotropic undecimated wavelet
transform as a basis function for the reconstruction step). We compare this
CS-based deconvolution method with two CLEAN-based deconvolution methods: the
H\"ogbom CLEAN and the multiscale CLEAN. This new method shows the best
performance in deconvolving extended sources for both uniform and natural
weighting of the sampled visibilities. Both visual and numerical results of the
comparison are provided.
Comment: Published by A&A; Matlab code can be found at:
http://code.google.com/p/csra/download
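For reference, the Högbom CLEAN baseline mentioned above fits in a few lines: repeatedly locate the brightest residual pixel and subtract a scaled, shifted copy of the point spread function, accumulating point components in a model. This is a toy 1-D version with a Gaussian beam and two on-grid point sources, not the paper's implementation.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
    """Minimal 1-D Hogbom CLEAN sketch. `psf` is centred in its array."""
    res = dirty.astype(float).copy()
    model = np.zeros_like(res)
    c = len(psf) // 2                      # centre index of the PSF
    for _ in range(n_iter):
        p = int(np.argmax(np.abs(res)))    # brightest residual pixel
        if abs(res[p]) < threshold:
            break
        comp = gain * res[p]               # take only a fraction of the peak
        model[p] += comp
        for i in range(len(res)):          # subtract the shifted PSF
            j = c + (i - p)
            if 0 <= j < len(psf):
                res[i] -= comp * psf[j]
    return model, res

# Toy dirty image: two point sources (fluxes 1.0 and 0.6) convolved
# with a Gaussian beam of width sigma = 3 pixels.
x = np.arange(101)
beam = lambda s: np.exp(-0.5 * ((x - s) / 3.0) ** 2)
psf = beam(50)                             # beam centred in its own window
dirty = 1.0 * beam(30) + 0.6 * beam(70)
model, res = hogbom_clean(dirty, psf)
```

On this noiseless on-grid example CLEAN recovers the two fluxes almost exactly; the paper's point is that such point-source models break down for extended emission, which motivates the wavelet-based CS reconstruction.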
Xampling: Signal Acquisition and Processing in Union of Subspaces
We introduce Xampling, a unified framework for signal acquisition and
processing of signals in a union of subspaces. The framework has two main
functions: analog compression, which narrows down the input bandwidth prior to
sampling with commercial devices, and a nonlinear algorithm that then detects
the input subspace prior to conventional signal processing. A representative
union model of spectrally-sparse signals serves as a test-case to study these
Xampling functions. We adopt three metrics for the choice of analog
compression: robustness to model mismatch, required hardware accuracy, and
software complexity. We conduct a comprehensive comparison between two
sub-Nyquist acquisition strategies for spectrally-sparse signals, the random
demodulator and the modulated wideband converter (MWC), in terms of these
metrics and draw operative conclusions regarding the choice of analog
compression. We then address low-rate signal processing and develop an
algorithm that enables convenient processing at sub-Nyquist rates
from samples obtained by the MWC. We conclude by showing that a variety of
other sampling approaches for different union classes fit nicely into our
framework.
Comment: 16 pages, 9 figures, submitted to IEEE for possible publication
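The forward model of one of the two sub-Nyquist strategies compared above, the random demodulator, is easy to write down in discrete form: mix the input against a pseudorandom ±1 chip sequence, then integrate-and-dump at a rate N/R below Nyquist. This is a textbook discretized sketch, not the paper's hardware model, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256              # Nyquist-rate samples per processing window
R = 8                # decimation factor: one output per R Nyquist samples
M = N // R           # number of sub-Nyquist measurements
pc = rng.choice([-1.0, 1.0], size=N)   # pseudorandom +/-1 chip sequence

def random_demodulate(x):
    """Mix with the chip sequence, then sum (integrate-and-dump) each
    block of R samples to produce M low-rate measurements."""
    return (pc * x).reshape(M, R).sum(axis=1)

# Spectrally sparse test input: a few active tones out of N bins.
tones = [17, 91, 140]
t = np.arange(N)
x = sum(np.cos(2 * np.pi * k * t / N) for k in tones)
y = random_demodulate(x)

# Equivalent measurement matrix y = H x, with H = S * diag(pc), where S
# sums each block of R consecutive samples.
H = np.kron(np.eye(M), np.ones(R)) * pc
```

Stacking the mixing and decimation into the matrix `H` is what lets CS recovery algorithms (and the robustness/accuracy metrics compared in the paper) be applied to the measurements `y`.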