
    Bayesian spectral modeling for multiple time series

    We develop a novel Bayesian modeling approach to spectral density estimation for multiple time series. The log-periodogram distribution for each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal distribution function with frequency-dependent parameters. Building from the construction for a single spectral density, we develop a hierarchical extension for multiple time series. Specifically, we set the mean functions to be common to all spectral densities and make the weights specific to the time series through the parameters of the logit-normal distribution. In addition to accommodating flexible spectral density shapes, a practically important feature of the proposed formulation is that it allows for ready posterior simulation through a Gibbs sampler with closed form full conditional distributions for all model parameters. The modeling approach is illustrated with simulated datasets, and used for spectral analysis of multichannel electroencephalographic recordings (EEGs), which provides a key motivating application for the proposed methodology.
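
    The weight construction described above lends itself to a compact implementation. Below is a minimal sketch (not the authors' code) of frequency-dependent mixture weights formed as successive differences of a logit-normal CDF; the number of components, the knot grid, the linear location function, and sigma are illustrative assumptions.

    ```python
    # A minimal sketch: mixture weights as successive differences of a
    # logit-normal CDF with a frequency-dependent location parameter.
    # K, the knots, mu(omega), and sigma are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm

    def logit_normal_cdf(x, mu, sigma):
        """CDF of the logit-normal distribution on (0, 1)."""
        x = np.clip(x, 1e-12, 1.0 - 1e-12)
        return norm.cdf((np.log(x / (1.0 - x)) - mu) / sigma)

    K = 10                                 # number of mixture components
    omega = np.linspace(0.01, 0.49, 200)   # frequencies in (0, 1/2)
    knots = np.linspace(0.0, 1.0, K + 1)   # partition of (0, 1)
    mu = 2.0 * omega - 0.5                 # illustrative location function
    sigma = 0.5

    # w_k(omega) = F(knot_{k+1}; mu(omega)) - F(knot_k; mu(omega));
    # each column sums to 1 by construction.
    F = logit_normal_cdf(knots[:, None], mu[None, :], sigma)
    weights = np.diff(F, axis=0)           # shape (K, len(omega))
    assert np.allclose(weights.sum(axis=0), 1.0)
    ```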

    Multichannel Sampling of Pulse Streams at the Rate of Innovation

    We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches.
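
    The "standard tools taken from spectral estimation" are methods such as Prony's annihilating filter, which recovers the unknown delays from a small set of Fourier coefficients. The sketch below illustrates that building block on a toy two-pulse stream; the delays, amplitudes, and coefficient count are illustrative assumptions, not the paper's sampling architecture.

    ```python
    # A minimal sketch of annihilating-filter (Prony) delay recovery from
    # 2K Fourier coefficients of a K-pulse stream. All values are toys.
    import numpy as np

    T = 1.0                         # observation period
    t_true = np.array([0.2, 0.55])  # unknown delays
    a_true = np.array([1.0, 0.7])   # unknown amplitudes
    K = len(t_true)

    # Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k/T), m = 0..2K-1
    m = np.arange(2 * K)
    X = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true) / T)).sum(axis=1)

    # Annihilating filter h (length K+1): its convolution with X vanishes.
    # Solve X[m]*h[0] + ... + X[m-K]*h[K] = 0 for m = K..2K-1, with h[0] = 1.
    A = np.array([[X[i + K - j] for j in range(1, K + 1)] for i in range(K)])
    b = -X[K:2 * K]
    h = np.concatenate(([1.0], np.linalg.solve(A, b)))

    # The delays are encoded in the phases of the roots of h.
    roots = np.roots(h)
    t_est = np.sort(np.mod(-np.angle(roots) * T / (2 * np.pi), T))
    print(t_est)   # ~ [0.2, 0.55]
    ```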

    Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and spatial regularization

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on the methodology used to combine multichannel signals. Indeed, the two prevailing methods for multichannel signal combination lead to Rician and noncentral Chi noise distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in brain data.
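
    For context, the sketch below shows the classical Richardson-Lucy multiplicative update that both dRL-SD and RUMBA-SD build on, applied to a generic non-negative linear model g = Hf (f playing the role of fiber volume fractions). The paper's Rician and noncentral-Chi variants replace the data/prediction ratio with one derived from the corresponding likelihood; the toy dictionary and data here are illustrative assumptions.

    ```python
    # A minimal sketch of the classical Richardson-Lucy update on a toy
    # non-negative linear model; not the RUMBA-SD algorithm itself.
    import numpy as np

    rng = np.random.default_rng(0)
    H = np.abs(rng.standard_normal((60, 20)))   # toy response dictionary
    f_true = np.zeros(20)
    f_true[[3, 12]] = [0.7, 0.3]                # two "fiber" compartments
    g = H @ f_true + 0.01 * rng.standard_normal(60)
    g = np.clip(g, 1e-8, None)                  # RL assumes non-negative data

    f = np.full(20, 1.0 / 20)                   # flat initialization
    for _ in range(500):
        ratio = g / np.clip(H @ f, 1e-8, None)  # data / model prediction
        f *= (H.T @ ratio) / H.sum(axis=0)      # multiplicative RL update

    print(np.argsort(f)[-2:])  # largest entries should sit near indices 3, 12
    ```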

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only delicate, finely-tuned tasks for the circuit level. In this paper, we review sampling strategies that target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting new frontier. (48 pages, 18 figures; to appear in IEEE Signal Processing Magazine)

    Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples

    This paper presents a novel power spectral density estimation technique for band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and incorporates the advantages of compressed sensing (CS) when the power spectrum is sparse, but applies to sparse and nonsparse power spectra alike. The estimates are consistent piecewise constant approximations whose resolutions (width of the piecewise constant segments) are controlled by the periodicity of the multi-coset sampling. We show that compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. For suitable sampling patterns, noncompressive estimates are obtained as least squares solutions. Because of the non-negativity of power spectra, compressive estimates can be computed by seeking non-negative least squares solutions (provided appropriate sampling patterns exist) instead of using standard CS recovery algorithms. This flexibility suggests a reduction in computational overhead for systems estimating both sparse and nonsparse power spectra because one algorithm can be used to compute both compressive and noncompressive estimates. (26 pages, single-spaced, 9 figures)
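
    The computational point about non-negativity can be illustrated directly: a piecewise-constant PSD estimate can be computed with an off-the-shelf non-negative least squares solver whether or not the spectrum is sparse. In the sketch below, the measurement operator is a toy stand-in for the multi-coset correlation geometry, and all dimensions are illustrative assumptions.

    ```python
    # A minimal sketch: estimating piecewise-constant PSD levels with NNLS.
    # The random A is a toy stand-in for the multi-coset measurement map.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_segments, n_meas = 32, 20             # resolution vs. measurement budget
    p_true = np.zeros(n_segments)           # piecewise-constant PSD levels
    p_true[[5, 6, 20]] = [2.0, 2.0, 1.0]    # sparse occupancy (CS-friendly case)

    A = rng.standard_normal((n_meas, n_segments))
    r = A @ p_true + 0.01 * rng.standard_normal(n_meas)

    p_hat, residual = nnls(A, r)            # one solver, sparse or nonsparse
    print(np.nonzero(p_hat > 0.1)[0])       # ~ [5, 6, 20]
    ```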

    Multichannel sampling of finite rate of innovation signals

    Recently there has been a surge of interest in sampling theory in the signal processing community. New efficient sampling techniques have been developed that allow sampling and perfectly reconstructing some classes of non-bandlimited signals at sub-Nyquist rates. Depending on the setup used and the reconstruction method involved, these schemes go under different names such as compressed sensing (CS), compressive sampling, or sampling signals with finite rate of innovation (FRI). In this thesis we focus on the theory of sampling non-bandlimited signals with parametric structure, specifically signals with finite rate of innovation. Most of the theory on sampling FRI signals is based on a single acquisition device and one-dimensional (1-D) signals. In this thesis, we extend these results to the case of 2-D signals and multichannel acquisition systems. The essential issue in multichannel systems is that while each channel receives the input signal, it may introduce different unknown delays, gains, or affine transformations, which need to be estimated from the samples together with the signal itself. We pose both the calibration of the channels and the signal reconstruction stage as a parametric estimation problem and demonstrate that simultaneous exact synchronization of the channels and reconstruction of the FRI signal is possible. Furthermore, because perfect noise-free channels do not exist in practice, we consider the case of noisy measurements and show, through Cramér-Rao bounds as well as numerical simulations, that multichannel systems are more resilient to noise than single-channel ones. Finally, we consider the problem of system identification based on multichannel and finite rate of innovation sampling techniques. First, by employing our multichannel sampling setup, we propose a novel algorithm for the system identification problem with a known input signal, that is, for the case when both the input signal and the samples are known. Then we consider the problem of blind system identification and propose an iterative algorithm for simultaneously estimating the input FRI signal and the unknown system.
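
    One way to see why joint channel calibration is feasible: an unknown relative delay between two channels appears as a linear phase across the Fourier coefficients of their outputs, so it can be estimated from the samples themselves before FRI reconstruction. The sketch below illustrates this on a toy periodic Dirac stream; it is a simplified stand-in for the thesis's estimation procedure, and all values are illustrative assumptions.

    ```python
    # A minimal sketch: estimating a relative channel delay from the linear
    # phase in the ratio of two channels' Fourier coefficients.
    import numpy as np

    T = 1.0
    t_k = np.array([0.15, 0.6])   # FRI stream: Dirac locations
    a_k = np.array([1.0, 0.5])    # Dirac amplitudes
    d = 0.07                      # unknown relative channel delay
    m = np.arange(1, 8)

    def fourier_coeffs(delay):
        # X[m] of sum_k a_k * delta(t - t_k - delay) over one period
        return (a_k[None, :] *
                np.exp(-2j * np.pi * np.outer(m, t_k + delay) / T)).sum(axis=1)

    X_ref, X_ch = fourier_coeffs(0.0), fourier_coeffs(d)

    # Phase of X_ch/X_ref is -2*pi*m*d/T; a least-squares line fit gives d.
    phase = np.unwrap(np.angle(X_ch / X_ref))
    d_hat = -np.polyfit(m, phase, 1)[0] * T / (2 * np.pi)
    print(d_hat)   # ~ 0.07
    ```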

    Economical sampling of parametric signals

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006, by Julius Kusuma. Includes bibliographical references (p. 107-115). This thesis proposes architectures and algorithms for digital acquisition of parametric signals, and provides bounds for the performance of these systems in the presence of noise. Simple acquisition circuitry and a low sampling rate enable accurate parameter estimation to be achieved economically. In present practice, sampling and estimation are not integrated: the sampling device does not take advantage of the parametric model, and the estimation assumes that noise in the data is signal-independent additive white Gaussian noise. We focus on estimating the timing information in signals that are linear combinations of scales and shifts of a known pulse. This signal model is well known in a variety of disciplines, such as ultra-wideband signaling and neurobiology. The signal is completely determined by the amplitudes and shifts of the summands. The delays determine a subspace that contains the signals, so estimating the shifts is equivalent to subspace estimation. By contrast, conventional sampling theory yields a least-squares approximation to a signal from a fixed shift-invariant subspace of possible reconstructions. Conventional acquisition takes samples at a rate higher than twice the signal bandwidth. Although this may be feasible, there is a trade-off between power, accuracy, and speed. Under the signal model of interest, when the pulses are very narrow, the number of parameters per unit time (the rate of innovation) is much lower than the Fourier bandwidth. There is thus potential for a much lower sampling rate, so long as nonlinear reconstruction algorithms are used. We present a new sampling scheme that takes simultaneous samples at the outputs of multiple channels. This new scheme can be implemented with simple circuitry and has a successive approximation property that can be used to detect undermodeling. In many regimes our algorithms provide better timing accuracy and resolution than conventional systems. Our new analytical and algorithmic techniques are applied to previously proposed systems, and it is shown that all the systems considered have super-resolution properties. Finally, we consider the same parameter estimation problem when the sampling instances are perturbed by signal-independent timing noise. We give an iterative algorithm that achieves accurate timing estimation by exploiting knowledge of the pulse shape.
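
    The core estimation task, recovering the shift of a known pulse from noisy samples, can be illustrated with the crudest baseline: a matched-filter grid search, which is the maximum-likelihood estimator under additive white Gaussian noise and already resolves the delay more finely than the sample spacing. The pulse shape, rates, and noise level below are illustrative assumptions, not the thesis's sub-Nyquist multichannel scheme.

    ```python
    # A minimal sketch of maximum-likelihood timing estimation for a known
    # pulse via matched-filter grid search; a baseline, not the thesis method.
    import numpy as np

    fs = 100.0                                  # sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.01 ** 2))  # known shape

    rng = np.random.default_rng(2)
    t0_true = 0.3712                            # true delay, between samples
    y = pulse(t0_true) + 0.05 * rng.standard_normal(t.size)

    # Correlate with shifted templates on a grid finer than 1/fs.
    grid = np.arange(0.0, 1.0, 1e-4)
    scores = np.array([pulse(g) @ y for g in grid])
    print(grid[np.argmax(scores)])  # ~ 0.3712, finer than the 0.01 s spacing
    ```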