
    Sparse sampling of signal innovations

    Sparse sampling of continuous-time sparse signals is addressed. In particular, it is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals. The noisy case is analyzed and solved, with methods proposed that reach the optimal performance given by the Cramér-Rao bounds. Finally, a number of applications are discussed where sparsity can be taken advantage of. The comprehensive coverage given in this article should lead to further research in sparse sampling, as well as to new applications. One main application of the theory presented in this article is ultra-wideband (UWB) communications.
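    As a concrete illustration of sampling at the rate of innovation, the sketch below recovers a stream of K Diracs from 2K+1 Fourier coefficients using an annihilating (Prony-type) filter. This is a standard finite-rate-of-innovation construction, not necessarily the exact algorithm of the article, and all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3                                    # number of Diracs (rate of innovation ~ 2K per period)
tk = np.sort(rng.uniform(0.0, 1.0, K))   # unknown locations in [0, 1)
ak = rng.uniform(1.0, 2.0, K)            # unknown amplitudes

# Noiseless Fourier coefficients x[m] = sum_k a_k exp(-j 2 pi m t_k), m = -K..K
m = np.arange(-K, K + 1)
x = (ak * np.exp(-2j * np.pi * np.outer(m, tk))).sum(axis=1)

# Annihilating filter h (length K+1): sum_l h[l] x[m-l] = 0 for m = 0..K,
# obtained as the null-space vector of a small Toeplitz system.
A = np.array([[x[r - l + K] for l in range(K + 1)] for r in range(K + 1)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()

# The filter's roots encode the locations: u_k = exp(-j 2 pi t_k)
uk = np.roots(h)
tk_hat = np.sort(np.mod(-np.angle(uk) / (2 * np.pi), 1.0))

# Amplitudes follow from a linear (Vandermonde) least-squares fit
V = np.exp(-2j * np.pi * np.outer(m, tk_hat))
ak_hat, *_ = np.linalg.lstsq(V, x, rcond=None)

print("locations :", tk.round(4), "->", tk_hat.round(4))
print("amplitudes:", ak.round(4), "->", ak_hat.real.round(4))
```

    In the noisy setting discussed in the abstract, the null-space step is typically replaced by a total least-squares or Cadzow-denoised variant to approach the Cramér-Rao bounds.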

    Bayesian Estimation for Continuous-Time Sparse Stochastic Processes

    We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in between (interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori (MAP) and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the ℓ1 norm and Log (ℓ1-ℓ0 relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator.
    Comment: To appear in IEEE TS
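    The spirit of this comparison can be reproduced on a much simpler model. The sketch below (not the paper's spline-based estimator) contrasts the pointwise MMSE estimator under a Bernoulli-Gaussian sparse prior with an ℓ1 (soft-thresholding) denoiser on i.i.d. noisy samples; the prior, noise level, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samp, p_active = 10_000, 0.1          # sparsity level (assumption)
sigma_x, sigma_n = 1.0, 0.3             # signal / noise std (assumption)

active = rng.random(n_samp) < p_active
x = np.where(active, rng.normal(0, sigma_x, n_samp), 0.0)
y = x + rng.normal(0, sigma_n, n_samp)

def gauss(v, var):
    return np.exp(-v**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Pointwise MMSE estimator E[x | y] for the Bernoulli-Gaussian prior
num = p_active * gauss(y, sigma_x**2 + sigma_n**2)
den = num + (1 - p_active) * gauss(y, sigma_n**2)
x_mmse = (num / den) * (sigma_x**2 / (sigma_x**2 + sigma_n**2)) * y

# l1-regularized (MAP with a Laplace prior) estimate = soft thresholding
lam = 2 * sigma_n**2 / sigma_x          # illustrative threshold choice
x_l1 = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

for name, xh in [("MMSE", x_mmse), ("l1/soft-threshold", x_l1)]:
    print(f"{name:>18s}  MSE = {np.mean((xh - x)**2):.4f}")
```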

    Joint recovery algorithms using difference of innovations for distributed compressed sensing

    Distributed compressed sensing is concerned with representing an ensemble of jointly sparse signals using as few linear measurements as possible. Two novel joint reconstruction algorithms for distributed compressed sensing are presented in this paper. These algorithms are based on the idea of using one of the signals as side information, which makes it possible to exploit joint sparsity more effectively than existing schemes do. They provide gains in reconstruction quality, especially when the nodes acquire few measurements, so that the system is able to operate with fewer measurements than other existing schemes require. We show that the algorithms achieve better performance than the state of the art.
    Comment: Conference Record of the Forty Seventh Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 201
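    A minimal sketch of the side-information idea, not the paper's algorithms: one node's signal is recovered first, and then only the sparser difference ("innovation") of the second signal is reconstructed from its residual measurements, using a plain ISTA solver. All dimensions and the measurement model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 256, 8                           # signal length / sparsity (assumptions)
m1, m2 = 80, 30                         # node 1 has enough measurements, node 2 very few

def ista(A, y, lam=0.02, iters=1000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

# Jointly sparse pair: x2 differs from x1 only in a couple of entries
support = rng.choice(n, k, replace=False)
x1 = np.zeros(n); x1[support] = rng.normal(0, 1, k)
x2 = x1.copy(); x2[rng.choice(support, 2, replace=False)] += rng.normal(0, 1, 2)

A1 = rng.normal(0, 1 / np.sqrt(m1), (m1, n))   # per-node sensing matrices
A2 = rng.normal(0, 1 / np.sqrt(m2), (m2, n))
y1, y2 = A1 @ x1, A2 @ x2

x1_hat = ista(A1, y1)                   # recover the side information at node 1
d_hat = ista(A2, y2 - A2 @ x1_hat)      # node 2: recover only the difference of innovations
x2_joint = x1_hat + d_hat
x2_indep = ista(A2, y2)                 # baseline: independent recovery from few measurements

err = lambda xh: np.linalg.norm(xh - x2) / np.linalg.norm(x2)
print("relative error, joint recovery      :", round(err(x2_joint), 3))
print("relative error, independent recovery:", round(err(x2_indep), 3))
```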

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed to sophisticated software algorithms, leaving only delicate, finely tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, pinpointing the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting new frontier.
    Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
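    As one concrete flavour of sub-Nyquist acquisition, the sketch below illustrates classic bandpass (IF) undersampling: a narrowband signal centred near 100 MHz is sampled at 2.5 MHz, far below its Nyquist rate but above twice its bandwidth, and the tone frequencies are unfolded from the aliased spectrum using knowledge of the occupied band. All frequencies and rates are illustrative assumptions, not values from the paper.

```python
import numpy as np

f_low, f_high = 100e6, 101e6        # occupied band: Nyquist rate would exceed 200 MHz
fs = 2.5e6                          # sub-Nyquist rate, chosen so the band folds cleanly
n = 2500
t = np.arange(n) / fs

f1, f2 = 100.3e6, 100.8e6           # two tones inside the band
x = np.cos(2 * np.pi * f1 * t) + 0.7 * np.cos(2 * np.pi * f2 * t)

# Sampling folds the band down to baseband; locate the folded tone frequencies.
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, 1 / fs)
peaks = freqs[np.argsort(X)[-2:]]

# Unfold: each folded peak corresponds to m*fs +/- peak for some integer m;
# prior knowledge of the occupied band makes the mapping unambiguous here.
m_idx = np.arange(int(np.ceil(f_high / fs)) + 1)[:, None]
cands = np.concatenate([(m_idx * fs + peaks).ravel(), (m_idx * fs - peaks).ravel()])
recovered = np.sort(cands[(cands >= f_low) & (cands <= f_high)])

print("true tones      (MHz):", sorted([f1 / 1e6, f2 / 1e6]))
print("recovered tones (MHz):", recovered / 1e6)
```

    The same folding idea, generalized to unknown band positions, is what union-of-subspaces samplers such as multicoset sampling and the modulated wideband converter turn into practical architectures.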

    Dynamics and sparsity in latent threshold factor models: A study in multivariate EEG signal processing

    We discuss Bayesian analysis of multivariate time series with dynamic factor models that exploit time-adaptive sparsity in model parametrizations via the latent threshold approach. One central focus is on the transfer responses of multiple interrelated series to underlying, dynamic latent factor processes. Structured priors on model hyper-parameters are key to the efficacy of dynamic latent thresholding, and MCMC-based computation enables model fitting and analysis. A detailed case study of electroencephalographic (EEG) data from experimental psychiatry highlights the use of latent threshold extensions of time-varying vector autoregressive and factor models. This study explores a class of dynamic transfer response factor models, extending prior Bayesian modeling of multiple EEG series and highlighting the practical utility of the latent thresholding concept in multivariate, non-stationary time series analysis.
    Comment: 27 pages, 13 figures, link to external web site for supplementary animated figure
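    To make the latent thresholding mechanism concrete, the sketch below is a purely generative illustration (not the paper's dynamic factor model or its MCMC sampler): a time-varying loading follows an AR(1) latent process and is set exactly to zero whenever its magnitude falls below a threshold, which is what produces time-adaptive sparsity. All hyper-parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, sigma, d = 500, 0.99, 0.05, 0.4    # illustrative hyper-parameters

b = np.zeros(T)                            # latent (unthresholded) coefficient path
for t in range(1, T):
    b[t] = phi * b[t - 1] + rng.normal(0, sigma)

beta = b * (np.abs(b) >= d)                # latent-threshold loading: exactly zero at times

# Use beta_t as the loading of a single dynamic factor driving an observed series
factor = rng.normal(0, 1, T)
y = beta * factor + rng.normal(0, 0.1, T)

print("fraction of time points with an exactly-zero loading:",
      np.mean(beta == 0).round(2))
```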

    Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics

    This paper focuses on the problem of recursive nonlinear least squares parameter estimation in multi-agent networks, in which the individual agents observe sequentially over time an independent and identically distributed (i.i.d.) time series consisting of a nonlinear function of the true but unknown parameter corrupted by noise. A distributed recursive estimator of the consensus + innovations type, namely CIWNLS, is proposed, in which the agents update their parameter estimates at each observation sampling epoch in a collaborative way by simultaneously processing the latest locally sensed information (innovations) and the parameter estimates from other agents (consensus) in the local neighborhood, conforming to a pre-specified inter-agent communication topology. Under rather weak conditions on the connectivity of the inter-agent communication and a global observability criterion, it is shown that the proposed algorithm leads to consistent parameter estimates at every network agent. Furthermore, under standard smoothness assumptions on the local observation functions, the distributed estimator is shown to yield order-optimal convergence rates, i.e., as far as the order of pathwise convergence is concerned, the local parameter estimates at each agent are as good as the optimal centralized nonlinear least squares estimator, which would require access to all the observations across all the agents at all times. In order to benchmark the performance of the proposed distributed CIWNLS estimator against that of the centralized nonlinear least squares estimator, the asymptotic normality of the estimate sequence is established and the asymptotic covariance of the distributed estimator is evaluated. Finally, simulation results are presented which illustrate and verify the analytical findings.
    Comment: 28 pages. Initial Submission: Feb. 2016, Revised: July 2016, Accepted: September 2016, To appear in IEEE Transactions on Signal and Information Processing over Networks: Special Issue on Inference and Learning over Networks
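    The sketch below gives a simplified consensus + innovations recursion in the spirit of the estimator described above (a stand-in with hypothetical gains and a toy observation model, not the paper's CIWNLS weight sequences): each agent combines an agreement step with its ring neighbours and a local innovation step driven by its latest nonlinear observation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_agents, dim, T = 10, 2, 10_000
theta_true = np.array([1.0, -0.5])

H = rng.normal(0, 1, (n_agents, dim))        # per-agent sensing vectors (assumption)
# Ring communication topology: each agent talks to its two neighbours
neighbours = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

x = np.zeros((n_agents, dim))                # local estimates
for t in range(T):
    alpha = 5.0 / (t + 50)                   # decaying innovation gain (assumption)
    beta = 0.3                               # consensus gain (assumption)
    y = np.tanh(H @ theta_true) + 0.1 * rng.normal(0, 1, n_agents)  # new observations
    x_new = x.copy()
    for i in range(n_agents):
        j1, j2 = neighbours[i]
        consensus = (x[i] - x[j1]) + (x[i] - x[j2])
        pred = np.tanh(H[i] @ x[i])
        grad = (1 - pred**2) * H[i]          # derivative of tanh(H_i @ x) w.r.t. x
        innovation = (y[i] - pred) * grad
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    x = x_new

print("true parameter:", theta_true)
print("agent estimates (first 3):\n", x[:3].round(3))
```

    With decaying innovation gains and persistent consensus mixing, the local estimates should drift toward a common value near the true parameter, which is the qualitative behaviour the paper establishes rigorously.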