
    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed to sophisticated software algorithms, leaving only delicate, finely-tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, pinpointing the potential of sub-Nyquist strategies to move from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promoting the sub-Nyquist premise toward practical applications and encouraging further research into this exciting new frontier.
    Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine

    Xampling in Ultrasound Imaging

    Recent developments of new medical treatment techniques put challenging demands on ultrasound imaging systems in terms of both image quality and raw data size. Traditional sampling methods result in very large amounts of data, thus increasing demands on processing hardware and limiting flexibility in the post-processing stages. In this paper, we apply Compressed Sensing (CS) techniques to analog ultrasound signals, following the recently developed Xampling framework. The result is a system with significantly reduced sampling rates which, in turn, means significantly reduced data size while maintaining the quality of the resulting images.
    Comment: 17 pages, 9 figures. Introduced at the SPIE Medical Imaging Conference, Orlando, Florida, 201

    Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging

    Signals comprised of a stream of short pulses appear in many applications including bio-imaging and radar. The recent finite rate of innovation framework has paved the way to low-rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters satisfying this condition. The periodic solution is extended to finite and infinite streams, and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques, and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
    Comment: 14 pages, 13 figures
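    The paper's scheme is built around specific compactly supported sampling kernels; purely as an illustration of the underlying principle, the sketch below implements the classical annihilating-filter recovery of a T-periodic stream of K Diracs from 2K+1 Fourier-series coefficients. This is a standard FRI building block, not the authors' algorithm, and all function names and the synthetic test are our own.

        import numpy as np

        # Classical annihilating-filter recovery of a T-periodic Dirac stream
        # from its Fourier-series coefficients (illustration only).
        def annihilating_filter_recovery(X, K, T):
            """X[m] = (1/T) * sum_k a_k * exp(-2j*pi*m*t_k/T), m = 0..M-1, M >= 2K+1."""
            M = len(X)
            # Toeplitz system: sum_l h[l] * X[m - l] = 0 for m = K..M-1
            A = np.array([[X[m - l] for l in range(K + 1)] for m in range(K, M)])
            _, _, Vh = np.linalg.svd(A)
            h = Vh[-1].conj()                      # annihilating filter = null vector
            u = np.roots(h)                        # u_k = exp(-2j*pi*t_k/T)
            t = np.sort(np.mod(-np.angle(u) * T / (2 * np.pi), T))
            # Amplitudes from a Vandermonde least-squares fit
            V = np.exp(-2j * np.pi * np.outer(np.arange(M), t) / T) / T
            a = np.linalg.lstsq(V, X, rcond=None)[0]
            return t, a

        # Synthetic test: K = 3 Diracs per period T = 1, minimal 2K + 1 coefficients
        T, K = 1.0, 3
        t_true, a_true = np.array([0.12, 0.47, 0.80]), np.array([1.0, -0.5, 2.0])
        m = np.arange(2 * K + 1)
        X = (np.exp(-2j * np.pi * np.outer(m, t_true) / T) @ a_true) / T
        t_hat, a_hat = annihilating_filter_recovery(X, K, T)
        print(np.round(t_hat, 4))                  # ~ [0.12, 0.47, 0.80]

    With noisy coefficients one typically takes more than the minimal 2K+1 samples and adds a denoising step before the null-space computation, which is where the stability issues discussed in the paper come into play.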

    Sampling and Reconstruction of Shapes with Algebraic Boundaries

    We present a sampling theory for a class of binary images with finite rate of innovation (FRI). Every image in our model is the restriction of the indicator function $\mathds{1}_{\{p \leq 0\}}$ to the image plane, where $p$ is some real bivariate polynomial. This means, in particular, that the boundaries in the image form a subset of an algebraic curve with implicit polynomial $p$. We show that the image parameters (i.e., the polynomial coefficients) satisfy a set of linear annihilation equations whose coefficients are the image moments. The inherent sensitivity of the moments to noise makes the reconstruction process numerically unstable and narrows the choice of the sampling kernels to polynomial-reproducing kernels. As a remedy to these problems, we replace conventional moments with more stable generalized moments that are adjusted to the given sampling kernel. The benefits are threefold: (1) the requirements on the sampling kernels are relaxed, (2) the annihilation equations are numerically robust, and (3) the results extend to images with unbounded boundaries. We further reduce the sensitivity of the reconstruction process to noise by taking into account the sign of the polynomial at certain points and by sequentially enforcing measurement consistency. We present various numerical experiments to demonstrate the performance of our algorithm in reconstructing binary images, including low to moderate noise levels and a range of realistic sampling kernels.
    Comment: 12 pages, 14 figures
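    As a sketch of the signal model only (not of the reconstruction algorithm), the code below generates a binary image as the indicator of {p(x, y) <= 0} for a bivariate polynomial p and computes the geometric moments that act as measurements; the choice of polynomial (an ellipse), grid, and moment order are our own assumptions.

        import numpy as np

        # Signal model: a binary image that is the indicator of {p(x, y) <= 0},
        # and the geometric moments used as measurements (illustration only).
        def algebraic_binary_image(coeffs, N=128):
            """coeffs[i, j] is the coefficient of x**i * y**j in p(x, y)."""
            x = np.linspace(-1.0, 1.0, N)
            X, Y = np.meshgrid(x, x, indexing="ij")
            p = np.zeros_like(X)
            for (i, j), c in np.ndenumerate(coeffs):
                p += c * X**i * Y**j
            return (p <= 0).astype(float), X, Y

        def geometric_moments(img, X, Y, order=3):
            """m_{ij} = integral of x**i * y**j over the foreground (pixel sum)."""
            dA = (X[1, 0] - X[0, 0]) * (Y[0, 1] - Y[0, 0])
            return np.array([[np.sum(X**i * Y**j * img) * dA
                              for j in range(order + 1)] for i in range(order + 1)])

        # Example: p(x, y) = x^2/0.6^2 + y^2/0.3^2 - 1, i.e. the interior of an ellipse
        coeffs = np.zeros((3, 3))
        coeffs[0, 0], coeffs[2, 0], coeffs[0, 2] = -1.0, 1 / 0.6**2, 1 / 0.3**2
        img, X, Y = algebraic_binary_image(coeffs)
        moments = geometric_moments(img, X, Y)
        print(moments[0, 0])                        # ~ pi * 0.6 * 0.3 ≈ 0.565

    In the paper the moments are not read off pixels but estimated from samples taken with a suitable kernel, and the polynomial coefficients are then recovered from the annihilation equations; only the model and the measurements are reproduced here.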

    Time Delay Estimation from Low Rate Samples: A Union of Subspaces Approach

    Time delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Various approaches have been proposed in the literature to identify time delays introduced by multipath environments. However, these methods either operate on the analog received signal or require high sampling rates in order to achieve reasonable time resolution. In this paper, our goal is to develop a unified approach to time delay estimation from low-rate samples of the output of a multipath channel. Our methods result in perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses. This rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. In addition, our development allows for a variety of different sampling methods. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction-of-arrival estimation methods, we develop necessary and sufficient conditions on the transmitted pulse and the sampling functions that ensure perfect recovery of the channel parameters at the minimal possible rate. Our results can be viewed in a broader context as a sampling theorem for analog signals defined over an infinite union of subspaces.
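    As a sketch of the final estimation step only, under a simplified data model we assume here, the low-rate samples have already been mapped to a matrix Y[m, n] = sum_k b_k[n] * exp(-2j*pi*m*tau_k/T); the code below then recovers the delays with ESPRIT, the algorithm named in the abstract. The sampling front end that produces such a matrix is the paper's contribution and is not reproduced here; the synthetic setup is our own.

        import numpy as np

        # ESPRIT delay recovery from a matrix with Vandermonde structure along
        # its rows (simplified data model assumed here; illustration only).
        def esprit_delays(Y, K, T):
            U, _, _ = np.linalg.svd(Y, full_matrices=False)
            Us = U[:, :K]                              # signal subspace
            # Rotational invariance between the subspace with first/last row dropped
            Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
            u = np.linalg.eigvals(Psi)                 # u_k = exp(-2j*pi*tau_k/T)
            return np.sort(np.mod(-np.angle(u) * T / (2 * np.pi), T))

        # Synthetic multipath channel: K = 2 delays, M transformed samples,
        # N probing periods with varying path gains.
        rng = np.random.default_rng(0)
        T, K, M, N = 1.0, 2, 16, 8
        tau_true = np.array([0.21, 0.64])
        A = np.exp(-2j * np.pi * np.outer(np.arange(M), tau_true) / T)   # Vandermonde
        B = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
        print(esprit_delays(A @ B, K, T))              # ~ [0.21, 0.64]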

    Multichannel sampling of finite rate of innovation signals

    Recently there has been a surge of interest in sampling theory in the signal processing community. New efficient sampling techniques have been developed that allow sampling and perfectly reconstructing some classes of non-bandlimited signals at sub-Nyquist rates. Depending on the setup used and the reconstruction method involved, these schemes go under different names such as compressed sensing (CS), compressive sampling, or sampling signals with finite rate of innovation (FRI). In this thesis we focus on the theory of sampling non-bandlimited signals with parametric structure, specifically signals with finite rate of innovation. Most of the theory on sampling FRI signals assumes a single acquisition device and one-dimensional (1-D) signals. In this thesis, we extend these results to the case of 2-D signals and multichannel acquisition systems. The essential issue in multichannel systems is that while each channel receives the input signal, it may introduce different unknown delays, gains or affine transformations, which need to be estimated from the samples together with the signal itself. We pose both the calibration of the channels and the signal reconstruction stage as a parametric estimation problem and demonstrate that simultaneous exact synchronization of the channels and reconstruction of the FRI signal is possible. Furthermore, because perfect noise-free channels do not exist in practice, we consider the case of noisy measurements and show, using Cramér-Rao bounds as well as numerical simulations, that multichannel systems are more resilient to noise than single-channel ones. Finally, we consider the problem of system identification based on the multichannel and finite rate of innovation sampling techniques. First, by employing our multichannel sampling setup, we propose a novel algorithm for the system identification problem with a known input signal, that is, for the case when both the input signal and the samples are known. Then we consider the problem of blind system identification and propose an iterative algorithm that simultaneously estimates the input FRI signal and the unknown system.
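    As a minimal sketch of the channel-calibration idea under a simplified model we assume here (each channel observes the same Fourier coefficients up to an unknown gain and delay, with the delay smaller than half a period), the code below estimates the per-channel gain and delay from the ratio to a reference channel. This is an illustration of the idea, not the thesis's algorithm, and all names are our own.

        import numpy as np

        # Channel calibration under the simplified model
        #   Y_j[m] = g_j * exp(-2j*pi*m*d_j/T) * X[m]
        # (same signal, unknown gain g_j and delay d_j per channel; |d_j| < T/2
        # so the phase can be unwrapped unambiguously). Illustration only.
        def calibrate_channel(Y_ref, Y_j, T):
            r = Y_j / Y_ref                            # g_j * exp(-2j*pi*m*d_j/T)
            g = np.mean(np.abs(r))
            phase = np.unwrap(np.angle(r))
            slope = np.polyfit(np.arange(len(r)), phase, 1)[0]   # = -2*pi*d_j/T
            return g, -slope * T / (2 * np.pi)

        # Synthetic example: a reference channel and one uncalibrated channel
        rng = np.random.default_rng(1)
        T, M = 1.0, 32
        X = rng.standard_normal(M) + 1j * rng.standard_normal(M)
        g_true, d_true = 1.7, 0.05
        Y_ref = X
        Y_1 = g_true * np.exp(-2j * np.pi * np.arange(M) * d_true / T) * X
        print(calibrate_channel(Y_ref, Y_1, T))        # ~ (1.7, 0.05)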

    Compressed Sensing of Analog Signals in Shift-Invariant Spaces

    A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that, in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
    Comment: to appear in IEEE Trans. on Signal Processing
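    As an illustration of the finite CS stage only: after the reduction to a finite system described above, one is left with a jointly sparse problem Y = A Z in which only k of the m rows of Z are nonzero (the active generators). The sketch below solves such a problem with simultaneous OMP, one standard finite CS solver; the solver choice and all names are our own assumptions, not the paper's algorithm.

        import numpy as np

        # Simultaneous OMP for the jointly sparse finite system Y = A @ Z, in
        # which only k of the m rows of Z are nonzero (illustration only).
        def simultaneous_omp(A, Y, k):
            m = A.shape[1]
            support, R = [], Y.copy()
            for _ in range(k):
                # Column most correlated with the residual across all snapshots
                scores = np.linalg.norm(A.conj().T @ R, axis=1)
                scores[support] = -np.inf
                support.append(int(np.argmax(scores)))
                Zs = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
                R = Y - A[:, support] @ Zs
            Z = np.zeros((m, Y.shape[1]))
            Z[support] = Zs
            return sorted(support), Z

        # Synthetic example: k = 2 of m = 20 generators active
        rng = np.random.default_rng(2)
        m, k, n_meas, n_snap = 20, 2, 8, 12
        A = rng.standard_normal((n_meas, m)) / np.sqrt(n_meas)
        Z_true = np.zeros((m, n_snap))
        Z_true[[3, 11]] = rng.standard_normal((k, n_snap))
        Y = A @ Z_true
        print(simultaneous_omp(A, Y, k)[0])            # ~ [3, 11]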