
    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only those delicate finely-tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. We hope that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise in practical applications, and encourage further research into this exciting new frontier. Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine.
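
    As a concrete taste of the strategies such surveys cover, here is a minimal numpy sketch of bandpass undersampling, one of the oldest sub-Nyquist techniques: two tones in a narrow band centred at 20 kHz survive sampling at 4.7 kHz because that rate folds the band to an unambiguous baseband position. All parameters are illustrative and not taken from the paper.

        import numpy as np

        # Two tones inside a 1 kHz band centred at 20 kHz.
        f1, f2 = 19.7e3, 20.25e3
        fs = 4.7e3                       # far below the Nyquist rate of ~41 kHz
        n = np.arange(4096)
        x = np.cos(2 * np.pi * f1 * n / fs) + 0.5 * np.cos(2 * np.pi * f2 * n / fs)

        # Undersampling folds the band [19.5, 20.5] kHz down by 4*fs = 18.8 kHz,
        # so the tones should reappear, intact, at 0.9 kHz and 1.45 kHz.
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        print(freqs[np.argmax(spec)])               # ~900 Hz
        hi = freqs > 1.2e3
        print(freqs[hi][np.argmax(spec[hi])])       # ~1450 Hz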

    Signal Reconstruction From Nonuniform Samples Using Prolate Spheroidal Wave Functions: Theory and Application

    Nonuniform sampling occurs in many applications due to imperfect sensors, mismatched clocks or event-triggered phenomena. Indeed, natural images, biomedical responses and sensor network transmissions have a bursty structure, so in order to obtain samples that correspond to the information content of the signal, one needs to collect more samples when the signal changes fast and fewer samples otherwise, which creates nonuniformly distributed samples. On the other hand, with the advancements in integrated circuit technology, small-scale and ultra-low-power devices are available for several applications ranging from invasive biomedical implants to environmental monitoring. However, these advancements in device technology also require data acquisition methods to change from uniform (clock-based, synchronous) to nonuniform (clockless, asynchronous) processing. An important advancement is in the data reconstruction theorems from sub-Nyquist-rate samples, recently introduced as compressive sensing, which redefines the uncertainty principle. In this dissertation, we consider the problem of signal reconstruction from nonuniform samples. Our method is based on the Prolate Spheroidal Wave Functions (PSWF), which can be used in the reconstruction of time-limited and essentially band-limited signals from missing samples, in event-driven sampling and in the case of asynchronous sigma-delta modulation. We provide an implementable, general reconstruction framework for the issues related to reduction in the number of samples and estimation of nonuniform sample times. We also provide a reconstruction method for level-crossing sampling with regularization. Another approach is the projection onto convex sets (POCS) method; here we combine a time-frequency approach with the iterative POCS method and use PSWF for the reconstruction when there are missing samples. Additionally, we realize time decoding modulation for an asynchronous sigma-delta modulator, which has potential applications in low-power biomedical implants.
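
    The discrete analog of the PSWF is the discrete prolate spheroidal (Slepian) sequence, available in SciPy. The sketch below is a minimal least-squares reconstruction of a bandlimited signal from a nonuniform subset of its samples in such a basis; it conveys the flavour of PSWF-based reconstruction but is not the dissertation's actual framework, and the grid size, bandwidth and sampling pattern are illustrative.

        import numpy as np
        from scipy.signal.windows import dpss

        M = 256                   # dense reference grid length
        W = 0.05                  # normalized half-bandwidth (|f| <= W)
        K = int(2 * M * W)        # ~2MW Slepian sequences capture the band

        basis = dpss(M, M * W, Kmax=K).T          # (M, K), orthonormal columns

        rng = np.random.default_rng(0)
        # Ground-truth bandlimited signal: random combination of the basis.
        x = basis @ rng.standard_normal(K)

        # Keep a sparse, nonuniform subset of the samples (80 of 256).
        idx = np.sort(rng.choice(M, size=80, replace=False))
        y = x[idx]

        # Least-squares fit of the Slepian coefficients to the observed samples.
        coef, *_ = np.linalg.lstsq(basis[idx], y, rcond=None)
        x_hat = basis @ coef
        print(np.max(np.abs(x - x_hat)))          # small reconstruction error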

    A unified approach to sparse signal processing

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing and rate of innovation. The redundancy introduced by channel coding i
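
    A minimal sketch of the kind of sparse recovery machinery that underlies several of these fields: orthogonal matching pursuit reconstructing a k-sparse vector from far fewer random measurements than its length. The dimensions and the Gaussian measurement matrix are illustrative choices, not prescriptions from the tutorial.

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: greedily pick k columns of A."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                cols = A[:, support]
                coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
                residual = y - cols @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(1)
        n, m, k = 128, 40, 4                  # 40 measurements, 4-sparse signal
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true

        x_hat = omp(A, y, k)
        print(np.linalg.norm(x_hat - x_true))  # ~0 in the noiseless case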

    Novel Digital Alias-Free Signal Processing Approaches to FIR Filtering Estimation

    This thesis aims at developing a new methodology for filtering continuous-time bandlimited signals and piecewise-continuous signals from their discrete-time samples. Unlike existing state-of-the-art filters, my filters are not adversely affected by aliasing, allowing designers to flexibly select the sampling rates of the processed signal to reach the required accuracy of signal filtering rather than meeting the stiff and often demanding constraints imposed by the classical theory of digital signal processing (DSP). The impact of this thesis is cost reduction of alias-free sampling, filtering and other digital processing blocks, particularly when the processed signals have sparse and unknown spectral support. Novel approaches are proposed which can mitigate the negative effects of aliasing, thanks to the use of nonuniform random/pseudorandom sampling and processing algorithms. As such, the proposed approaches belong to the family of digital alias-free signal processing (DASP). Namely, three main approaches are considered: total random (ToRa), stratified (StSa) and antithetical stratified (AnSt) random sampling techniques. First, I introduce a finite impulse response (FIR) filter estimator for each of the three considered techniques. In addition, a generalised estimator that encompasses the three filter estimators is also proposed. Then, statistical properties of all estimators are investigated to assess their quality. Properties such as expected value, bias, variance, convergence rate, and consistency are all inspected and unveiled. Moreover, a closed-form mathematical expression is devised for the variance of each estimator. Furthermore, quality assessment of the proposed estimators is examined in two main cases related to the smoothness of the filter convolution's integrand function, g(t,τ) := x(τ)h(t−τ), and its first two derivatives. The first main case is continuous and differentiable functions g(t,τ), g′(t,τ), and g′′(t,τ), whereas the second main case covers all possible instances where some or all of these functions are piecewise-continuous with a finite number of bounded discontinuities. The obtained results prove that all considered filter estimators are unbiased and consistent (a numerical sketch of the three estimators follows this abstract). Hence, the variances of the estimators converge to zero after a certain number of sample points; however, the convergence rate depends on the selected estimator and on which case of smoothness is being considered. In the first case (i.e. continuous g(t,τ) and its derivatives), the ToRa, StSa and AnSt filter estimators converge uniformly at rates of N⁻¹, N⁻³, and N⁻⁵ respectively, where 2N is the total number of sample points. More interestingly, in the second main case, the convergence rates of the StSa and AnSt estimators are maintained even if there are some discontinuities in the first-order derivative (FOD) with respect to τ of g(t,τ) (for the StSa estimator) or in the second-order derivative (SOD) with respect to τ of g(t,τ) (for AnSt). These rates drop to N⁻² and N⁻⁴ (for StSa and AnSt, respectively) if the zero-order derivative (ZOD) (for StSa) and the FOD (for AnSt) are piecewise-continuous. Finally, if the ZOD of g(t,τ) is piecewise-continuous, then the uniform convergence rate of the AnSt estimator further drops to N⁻².
For practical reasons, I also introduce the utilisation of the three estimators in a special situation where the input signal is pseudorandomly sampled from an otherwise uniform and dense grid. An FIR filter model with an oversampled finite-duration impulse response, timely aligned with the grid, is proposed and meant to be stored in a lookup table of the implemented filter's memory to save processing time. Then, a synchronised convolution sum operation is conducted to estimate the filter output. Finally, a new unequally spaced Lagrange-interpolation-based rule is proposed. The so-called composite 3-nonuniform-sample (C3NS) rule is employed to estimate the area under the curve (AUC) of an integrand function rather than the simple rectangular rule. I then carry out comparisons of the convergence rates of different estimators based on the two interpolation rules. The proposed C3NS estimator outperforms the other rectangular-rule estimators at the expense of higher computational complexity. Of course, this extra cost could only be justifiable for some specific applications where more accurate estimation is required.
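
    Below is a minimal numpy sketch of the three randomized estimators of the convolution integral at one fixed output time t, using an illustrative smooth integrand g(t,τ) rather than a signal/filter pair from the thesis. For smooth g, the empirical variances should decay at roughly the quoted rates in N (with 2N total points): N⁻¹ for ToRa, N⁻³ for StSa and N⁻⁵ for AnSt.

        import numpy as np

        rng = np.random.default_rng(2)
        T = 1.0
        # Illustrative smooth integrand g(t, tau) = x(tau) h(t - tau) at fixed t.
        g = lambda tau: np.exp(-tau) * np.sin(6 * np.pi * tau)

        def tora(N):                  # total random: 2N iid uniform points
            return T * np.mean(g(rng.uniform(0, T, 2 * N)))

        def stsa(N):                  # stratified: one point in each of 2N strata
            w = T / (2 * N)
            edges = np.arange(2 * N) * w
            return w * np.sum(g(edges + rng.uniform(0, w, 2 * N)))

        def anst(N):                  # antithetical stratified: mirrored pairs
            h = T / N
            edges = np.arange(N) * h
            u = rng.uniform(0, h, N)
            return (h / 2) * np.sum(g(edges + u) + g(edges + h - u))

        for N in (8, 32, 128):
            for name, est in (("ToRa", tora), ("StSa", stsa), ("AnSt", anst)):
                var = np.var([est(N) for _ in range(500)])
                print(f"{name}  2N={2 * N:4d}  var={var:.3e}")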

    Sampling from a system-theoretic viewpoint: Part II - Noncausal solutions

    This paper puts to use concepts and tools introduced in Part I to address a wide spectrum of noncausal sampling and reconstruction problems. Particularly, we follow the system-theoretic paradigm by using systems as signal generators to account for available information and system norms (L2 and L∞) as performance measures. The proposed optimization-based approach recovers many known solutions, derived hitherto by different methods, as special cases under different assumptions about acquisition or reconstructing devices (e.g., polynomial and exponential cardinal splines for fixed samplers, and the Sampling Theorem and its modifications in the case when both sampler and interpolator are design parameters). We also derive new results, such as versions of the Sampling Theorem for downsampling and reconstruction from noisy measurements, the continuous-time invariance of a wide class of optimal sampling-and-reconstruction circuits, et cetera.
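
    As a small illustration of one classical solution the paper recovers, the sketch below reconstructs a smooth signal from its uniform samples with a noncausal cubic spline, which uses samples on both sides of each reconstruction instant. This is a plain interpolation demo with an illustrative test signal, not the paper's optimization machinery.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Uniform samples of a smooth signal; cubic-spline interpolation is one
        # of the classical noncausal reconstructors recovered by the L2 framework.
        t_s = np.arange(0, 4, 0.25)               # sampling instants (4 Hz)
        x = lambda t: np.sin(2 * np.pi * 0.4 * t) + 0.3 * np.cos(2 * np.pi * 0.9 * t)

        rec = CubicSpline(t_s, x(t_s))            # noncausal: uses all samples
        t_dense = np.linspace(0.5, 3.5, 1000)     # evaluate away from the edges
        print(np.max(np.abs(rec(t_dense) - x(t_dense))))   # small error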

    Applications of nonuniform sampling in wideband multichannel communication systems

    This research is an investigation into utilising randomised sampling in communication systems to ease the sampling-rate requirements of digitally processing narrowband signals residing within a wide range of overseen frequencies. By harnessing the aliasing-suppression capabilities of such sampling schemes, it is shown that certain processing tasks, namely spectrum sensing, can be performed at significantly lower sampling rates than those demanded by uniform-sampling-based digital signal processing. The latter imposes sampling frequencies of at least twice the monitored bandwidth regardless of the spectral activity within; aliasing can otherwise result in irresolvable processing problems, as the spectral support of the present signal is a priori unknown. Lower sampling rates use the processing modules' resources (such as power) more efficiently and avoid the possible need for premium specialised high-cost DSP, especially if the handled bandwidth is considerably wide. A number of randomised sampling schemes are examined and appropriate spectral analysis tools are used to furnish their salient features. The adopted periodogram-type estimators are tailored to each of the schemes and their statistical characteristics are assessed for stationary and cyclostationary signals. Their ability to alleviate the bandwidth limitation of uniform sampling is demonstrated and the smeared-aliasing defect that accompanies randomised sampling is also quantified. In employing the aforementioned analysis tools, a novel wideband spectrum sensing approach is introduced. It permits the simultaneous sensing of a number of nonoverlapping spectral subbands constituting a wide range of monitored frequencies. Unlike uniform-sampling-based techniques, the operational sampling rates of the sensing procedure are not limited or dictated by the overseen bandwidth. Prescriptive guidelines are developed to ensure that the proposed technique satisfies certain detection probabilities predefined by the user. These recommendations address the trade-off between the required sampling rate and the length of the signal observation window (sensing time) in a given scenario. Various aspects of the introduced multiband spectrum sensing approach are investigated and its applicability is highlighted.
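
    A minimal sketch of the idea with illustrative numbers (not the thesis's estimators): a periodogram over nonuniform random sample times, here computed with SciPy's Lomb-Scargle routine, locates a narrowband tone anywhere in a 5 MHz band from an average sampling rate of only 0.4 MHz.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(3)
        # Narrowband tone at 3.2 MHz inside a 5 MHz monitored band, observed
        # over 2 ms at an average rate of 0.4 MHz via totally random sampling.
        f0, B, T = 3.2e6, 5e6, 2e-3
        t = np.sort(rng.uniform(0, T, int(0.4e6 * T)))     # ~800 random instants
        x = np.cos(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

        # Periodogram over the whole band: random sampling suppresses coherent
        # aliases, so the tone is found although 0.4 MHz << 2B = 10 MHz.
        freqs = np.linspace(0.05e6, B, 20001)
        pgram = lombscargle(t, x, 2 * np.pi * freqs)
        print(freqs[np.argmax(pgram)] / 1e6)               # ~3.2 (MHz)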

    An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]

    This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. Our intent in this article is to give an overview of the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can, perhaps surprisingly, lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications.
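
    A minimal sketch of the recovery principle: basis pursuit (ℓ1 minimization subject to the measurement constraints) cast as a linear program. The problem sizes and the Gaussian sensing matrix are illustrative choices, not taken from the article.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(4)
        n, m, k = 64, 24, 3                   # 24 measurements, 3-sparse signal
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true

        # Basis pursuit  min ||x||_1  s.t.  Ax = y,  as an LP via x = u - v,
        # with u, v >= 0 and objective sum(u) + sum(v).
        c = np.ones(2 * n)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
        x_hat = res.x[:n] - res.x[n:]
        print(np.linalg.norm(x_hat - x_true))  # ~0: recovery from 24 << 64 samples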

    Weighted frames of exponentials and stable recovery of multidimensional functions from nonuniform Fourier samples

    In this paper, we consider the problem of recovering a compactly supported multivariate function from a collection of pointwise samples of its Fourier transform taken nonuniformly. We do this by using the concept of weighted Fourier frames. A seminal result of Beurling shows that sample points give rise to a classical Fourier frame provided they are relatively separated and of sufficient density. However, this result does not allow for arbitrary clustering of sample points, as is often the case in practice. Whilst keeping the density condition sharp and dimension independent, our first result removes the separation condition and shows that density alone suffices. However, this result does not lead to estimates for the frame bounds. A known result of Groechenig provides explicit estimates, but only subject to a density condition that deteriorates linearly with dimension. In our second result we improve these bounds by reducing the dimension dependence. In particular, we provide explicit frame bounds which are dimensionless for functions having compact support contained in a sphere. Next, we demonstrate how our two main results give new insight into a reconstruction algorithm, based on the existing generalized sampling framework, that allows for stable and quasi-optimal reconstruction in any particular basis from a finite collection of samples. Finally, we construct sufficiently dense sampling schemes that are often used in practice (jittered, radial and spiral sampling schemes) and provide several examples illustrating the effectiveness of our approach when tested on these schemes.
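
    A 1-D toy version of the weighted least-squares idea, with an illustrative exponential basis, jittered sample points and Voronoi-gap weights; the paper's setting is multidimensional and its algorithm is the generalized sampling framework, so this sketch only conveys the flavour.

        import numpy as np

        rng = np.random.default_rng(5)
        # Unknown f supported on [-1, 1]: f(x) = sum_{|j|<=J} c_j e^{i pi j x}.
        J = 15
        js = np.arange(-J, J + 1)
        c_true = rng.standard_normal(js.size) + 1j * rng.standard_normal(js.size)

        # Nonuniform (jittered) Fourier sample points of sufficient density.
        K = 160
        w = np.sort(np.pi * (np.linspace(-J - 2, J + 2, K)
                             + 0.3 * rng.uniform(-1, 1, K)))

        # F(w) = int_{-1}^{1} f(x) e^{-iwx} dx; for e^{i pi j x} this equals
        # 2 sinc((pi j - w)/pi) with numpy's normalized sinc.
        A = (2 * np.sinc((np.pi * js[None, :] - w[:, None]) / np.pi)).astype(complex)
        b = A @ c_true                             # the nonuniform samples

        # Voronoi (gap) weights compensate for the varying sampling density.
        mids = np.concatenate(([w[0]], 0.5 * (w[1:] + w[:-1]), [w[-1]]))
        mu = np.sqrt(np.diff(mids))
        c_hat, *_ = np.linalg.lstsq(mu[:, None] * A, mu * b, rcond=None)
        print(np.linalg.norm(c_hat - c_true))      # small: stable recovery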

    Distortion-Rate Function of Sub-Nyquist Sampled Gaussian Sources

    The amount of information lost in sub-Nyquist sampling of a continuous-time Gaussian stationary process is quantified. We consider a combined source coding and sub-Nyquist reconstruction problem in which the input to the encoder is a noisy sub-Nyquist sampled version of the analog source. We first derive an expression for the mean squared error in the reconstruction of the process from a noisy and information-rate-limited version of its samples. This expression is a function of the sampling frequency and the average number of bits describing each sample, and is given as the sum of two terms: the minimum mean squared error in estimating the source from its noisy but otherwise fully observed sub-Nyquist samples, and a second term obtained by reverse waterfilling over an average of spectral densities associated with the polyphase components of the source. We extend this result to multi-branch uniform sampling, where the samples are available through a set of parallel channels with a uniform sampler and a pre-sampling filter in each branch. Further optimization to reduce distortion is then performed over the pre-sampling filters, and an optimal set of pre-sampling filters associated with the statistics of the input signal and the sampling frequency is found. This results in an expression for the minimal distortion achievable under any analog-to-digital conversion scheme involving uniform sampling and linear filtering. These results thus unify the Shannon-Whittaker-Kotelnikov sampling theorem and Shannon rate-distortion theory for Gaussian sources. Comment: Accepted for publication in the IEEE Transactions on Information Theory.
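
    The classical reverse-waterfilling computation that the second term generalizes can be sketched in a few lines; the spectrum below is illustrative, and the paper's actual expression averages polyphase spectral densities and adds the MMSE term.

        import numpy as np

        # Classical reverse waterfilling for a stationary Gaussian source with
        # power spectral density S(f), f in cycles/sample.
        f = np.linspace(-0.5, 0.5, 4001)
        df = f[1] - f[0]
        S = 1.0 / (1.0 + (6 * f) ** 2)          # illustrative low-pass spectrum

        def rate_distortion(theta):
            """Parametric (R, D) pair for water level theta."""
            D = np.sum(np.minimum(S, theta)) * df
            R = np.sum(0.5 * np.log2(np.maximum(S / theta, 1.0))) * df
            return R, D

        for theta in (0.5, 0.1, 0.02):
            R, D = rate_distortion(theta)
            print(f"theta={theta:4.2f}  R={R:.3f} bits/sample  D={D:.4f}")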