Compression via Compressive Sensing: A Low-Power Framework for the Telemonitoring of Multi-Channel Physiological Signals
Telehealth and wearable equipment can deliver personal healthcare and
necessary treatment remotely. One major challenge is transmitting large amounts
of biosignals through wireless networks. The limited battery life calls for
low-power data compressors. Compressive Sensing (CS) has proved to be a
low-power compression approach. In this study, we apply CS to the compression of
multichannel biosignals. We first develop an efficient CS algorithm from the
Block Sparse Bayesian Learning (BSBL) framework, based on a combination
of the block sparse model and the multiple measurement vector model. Experiments on
real-life fetal ECGs showed that the proposed algorithm achieves high fidelity and
efficiency. Implemented in hardware, the proposed algorithm was compared to a
Discrete Wavelet Transform (DWT) based algorithm, verifying that the proposed one
consumes less power and occupies fewer computational resources.
Comment: 2013 International Workshop on Biomedical and Health Informatics
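To illustrate the encoder side of such a telemonitoring scheme, the sketch below compresses a multichannel epoch with a sparse binary sensing matrix, a common low-power choice in CS telemonitoring since the encoding reduces to additions. The matrix dimensions, sparsity level, and signal shapes are illustrative assumptions rather than parameters from the paper, and the BSBL reconstruction itself would run on the receiver.

```python
import numpy as np

def sparse_binary_sensing_matrix(m, n, ones_per_column=2, seed=0):
    """Sensing matrix with a few 1s per column: matrix-vector multiplication
    reduces to additions only, which is why such CS encoders are low-power."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=ones_per_column, replace=False)
        phi[rows, j] = 1.0
    return phi

# Illustrative sizes: 8 channels, 512 samples per epoch, compressed to 256 measurements.
n_channels, n_samples, n_measurements = 8, 512, 256
x = np.random.randn(n_samples, n_channels)      # stand-in for one epoch of biosignals
phi = sparse_binary_sensing_matrix(n_measurements, n_samples)

# Encoder: y = Phi @ x, applied to all channels at once (multiple measurement vectors).
y = phi @ x
print(y.shape)  # (256, 8) -- transmitted instead of the raw (512, 8) epoch
```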
Information Theoretic Limits for Standard and One-Bit Compressed Sensing with Graph-Structured Sparsity
In this paper, we analyze the information-theoretic lower bound on the
number of samples needed to recover a sparse signal under
different compressed sensing settings. We focus on the weighted graph model, a
model-based framework proposed by Hegde et al. (2015), for standard compressed
sensing as well as for one-bit compressed sensing. We study both the noisy and
noiseless regimes. Our analysis is general in the sense that it applies to any
algorithm used to recover the signal. We carefully construct restricted
ensembles for different settings and then apply Fano's inequality to establish
the lower bound on the necessary number of samples. Furthermore, we show that
our bound is tight for one-bit compressed sensing, while for standard
compressed sensing, our bound is tight up to a logarithmic factor of the number
of non-zero entries in the signal.
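To make the proof strategy concrete, a Fano-type argument of the kind sketched above typically proceeds as follows; the notation is a generic illustration under standard assumptions, not the paper's exact ensembles or constants.

```latex
% Let X be drawn uniformly from a restricted ensemble \{x_1,\dots,x_M\} of
% graph-structured sparse signals, and let Y collect the n measurements.
\begin{align}
  \Pr[\hat{X} \neq X] \;&\ge\; 1 - \frac{I(X;Y) + \log 2}{\log M}
      && \text{(Fano's inequality)} \\
  I(X;Y) \;&\le\; \max_{i \neq j} D\!\left(P_{x_i} \,\|\, P_{x_j}\right)
      && \text{(mutual-information bound over the ensemble)}
\end{align}
% Requiring \Pr[\hat{X} \neq X] \le \delta for any recovery algorithm forces
%   \max_{i \neq j} D(P_{x_i} \| P_{x_j}) \;\ge\; (1-\delta)\log M - \log 2,
% and since the KL divergence scales with the number of measurements n, a
% well-chosen ensemble (large \log M, small pairwise divergence) yields the
% lower bound on n.
```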
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
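For intuition on item (iii), with the ℓ1 (sparsity) prior the forward-backward splitting scheme reduces to the classical iterative soft-thresholding iteration. The sketch below is a minimal numpy version with illustrative problem sizes; it is not code from the chapter, and the step size and regularization parameter are assumed values.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    Forward step: gradient descent on the smooth data-fidelity term.
    Backward step: proximal map of the nonsmooth l1 regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # forward (explicit gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

# Illustrative compressed-sensing-style problem: sparse x0 observed through a random A.
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 512))
x0 = np.zeros(512)
x0[rng.choice(512, 10, replace=False)] = rng.standard_normal(10)
y = A @ x0 + 0.01 * rng.standard_normal(128)
x_hat = forward_backward_l1(A, y, lam=0.1)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))  # small relative error expected
```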