264 research outputs found

    Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples

    This paper presents a novel power spectral density estimation technique for band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and incorporates the advantages of compressed sensing (CS) when the power spectrum is sparse, but applies to sparse and nonsparse power spectra alike. The estimates are consistent piecewise constant approximations whose resolution (the width of the piecewise constant segments) is controlled by the periodicity of the multi-coset sampling. We show that compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. For suitable sampling patterns, noncompressive estimates are obtained as least squares solutions. Because of the non-negativity of power spectra, compressive estimates can be computed by seeking non-negative least squares solutions (provided appropriate sampling patterns exist) instead of using standard CS recovery algorithms. This flexibility suggests a reduction in computational overhead for systems estimating both sparse and nonsparse power spectra, because one algorithm can be used to compute both compressive and noncompressive estimates. Comment: 26 pages, single spaced, 9 figures
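
    As a rough illustration of the recovery route described above, the sketch below solves a non-negative least squares problem with scipy.optimize.nnls. The measurement matrix A is a random stand-in for the correlation system induced by a multi-coset sampling pattern, not the matrix derived in the paper, and all dimensions are arbitrary.

```python
# Hedged sketch: recovering a non-negative, piecewise-constant power spectrum from
# compressive linear measurements via non-negative least squares, as the abstract
# suggests, instead of a standard CS solver. The matrix A is an illustrative
# stand-in, NOT the correlation matrix derived in the paper.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_bins = 64    # resolution: number of piecewise-constant spectral segments
n_meas = 24    # number of measurements (< n_bins, i.e. compressive)

A = rng.standard_normal((n_meas, n_bins))     # hypothetical measurement model b = A @ p

p_true = np.zeros(n_bins)                     # sparse, non-negative "true" power spectrum
p_true[[5, 20, 41]] = [2.0, 1.0, 3.0]

b = A @ p_true + 0.01 * rng.standard_normal(n_meas)   # noisy measurements

# Non-negative least squares exploits the non-negativity of power spectra;
# with enough measurements relative to the sparsity, p_hat should be close to p_true.
p_hat, residual = nnls(A, b)

print("max reconstruction error:", np.max(np.abs(p_hat - p_true)))
```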

    Multidimensional random sampling for Fourier transform estimation

    This research considers Fourier transform calculations for multidimensional signals. The calculations are based on random sampling, where the sampling points are nonuniformly distributed according to strategically selected probability functions, to provide opportunities that are unavailable in the uniform sampling environment. The latter imposes a sampling density of at least the Nyquist density; otherwise, alias frequencies occur in the processed bandwidth, which can lead to irresolvable processing problems. Random sampling can mitigate the Nyquist limit that classical uniform-sampling-based approaches endure, for the purpose of performing direct (with no prefiltering or downconverting) Fourier analysis of (high-frequency) signals with unknown spectral support using a low sampling density. Lowering the sampling density while achieving the same signal processing objective can be an efficient, if not essential, way of exploiting system resources in terms of power, hardware complexity and acquisition-processing time. In this research we investigate and devise novel random sampling estimation schemes for the multidimensional Fourier transform. The main focus of the investigation and development is the quality of the estimated Fourier transform as a function of the sampling density; this aspect is crucial, as it serves the central objective of random sampling, namely lowering the sampling density. This research was motivated by the applicability of random-sampling-based approaches to determining the Fourier transform in multidimensional Nuclear Magnetic Resonance (NMR) spectroscopy, to resolve the critical issue of its long experimental time.
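
    The estimator idea behind such schemes can be illustrated with a one-dimensional sketch; the thesis itself treats multidimensional signals and optimised sampling densities. Everything below (signal, window, frequencies, sampling density) is an arbitrary choice for illustration, with sampling instants assumed uniformly distributed over the observation window.

```python
# Minimal 1-D sketch of Fourier-transform estimation from randomly placed samples,
# assuming sampling instants drawn uniformly over an observation window [0, T].
# Averaging the integrand at the random instants and scaling by the window length
# gives an unbiased estimate of X(f) = integral over [0, T] of x(t) e^{-j 2 pi f t} dt.
import numpy as np

rng = np.random.default_rng(1)

T = 1.0                      # observation window length (s)
N = 400                      # number of random samples (average rate N/T = 400 Hz)
f0 = 310.0                   # test tone above the 200 Hz limit uniform sampling at 400 Hz would impose

t = rng.uniform(0.0, T, N)   # nonuniform (random) sampling instants
x = np.cos(2 * np.pi * f0 * t)

freqs = np.arange(0.0, 400.0, 0.5)
X_hat = (T / N) * np.exp(-2j * np.pi * np.outer(freqs, t)) @ x

print("peak located at", freqs[np.argmax(np.abs(X_hat))], "Hz")
```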

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only the delicate, finely tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promoting the sub-Nyquist premise into practical applications and encouraging further research into this exciting new frontier. Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine

    Novel Digital Alias-Free Signal Processing Approaches to FIR Filtering Estimation

    This thesis aims at developing a new methodology for filtering continuous-time bandlimited signals and piecewise-continuous signals from their discrete-time samples. Unlike the existing state-of-the-art filters, my filters are not adversely affected by aliasing, allowing designers to flexibly select the sampling rates of the processed signal to reach the required accuracy of signal filtering, rather than meeting the stiff and often demanding constraints imposed by the classical theory of digital signal processing (DSP). The impact of this thesis is cost reduction of alias-free sampling, filtering and other digital processing blocks, particularly when the processed signals have sparse and unknown spectral support. Novel approaches are proposed which can mitigate the negative effects of aliasing, thanks to the use of nonuniform random/pseudorandom sampling and processing algorithms. As such, the proposed approaches belong to the family of digital alias-free signal processing (DASP). Three main approaches are considered: total random (ToRa), stratified (StSa) and antithetical stratified (AnSt) random sampling techniques. First, I introduce a finite impulse response (FIR) filter estimator for each of the three considered techniques. In addition, a generalised estimator that encompasses the three filter estimators is also proposed. Then, statistical properties of all estimators are investigated to assess their quality. Properties such as expected value, bias, variance, convergence rate, and consistency are all inspected and unveiled. Moreover, a closed-form mathematical expression is devised for the variance of each estimator. Furthermore, quality assessment of the proposed estimators is examined in two main cases related to the smoothness of the filter convolution's integrand function, g(t,τ) := x(τ)h(t−τ), and its first two derivatives. The first main case is that of continuous and differentiable functions g(t,τ), g′(t,τ), and g″(t,τ), whereas the second main case covers all possible instances where some or all of these functions are piecewise-continuous, involving a finite number of bounded discontinuities. The obtained results prove that all considered filter estimators are unbiased and consistent; hence, the variances of the estimators converge to zero beyond a certain number of sample points. However, the convergence rate depends on the selected estimator and on which case of smoothness is being considered. In the first case (i.e. continuous g(t,τ) and its derivatives), the ToRa, StSa and AnSt filter estimators converge uniformly at rates of N^−1, N^−3, and N^−5 respectively, where 2N is the total number of sample points. More interestingly, in the second main case, the convergence rates of the StSa and AnSt estimators are maintained even if there are some discontinuities in the first-order derivative (FOD) with respect to τ of g(t,τ) (for the StSa estimator) or in the second-order derivative (SOD) with respect to τ of g(t,τ) (for AnSt). These rates drop to N^−2 and N^−4 (for StSa and AnSt, respectively) if the zero-order derivative (ZOD) (for StSa) or the FOD (for AnSt) is piecewise-continuous. Finally, if the ZOD of g(t,τ) is piecewise-continuous, then the uniform convergence rate of the AnSt estimator further drops to N^−2.
For practical reasons, I also introduce the use of the three estimators in a special situation where the input signal is pseudorandomly sampled from an otherwise uniform and dense grid. An FIR filter model with an oversampled finite-duration impulse response, aligned in time with the grid, is proposed and is meant to be stored in a lookup table in the implemented filter's memory to save processing time. A synchronised convolution sum operation is then conducted to estimate the filter output. Finally, a new unequally spaced Lagrange-interpolation-based rule is proposed. The so-called composite 3-nonuniform-sample (C3NS) rule is employed to estimate the area under the curve (AUC) of an integrand function, rather than the simple rectangular rule. I then compare the convergence rates of the different estimators based on the two interpolation rules. The proposed C3NS estimator outperforms the rectangular-rule estimators at the expense of higher computational complexity; this extra cost could only be justifiable for specific applications where more accurate estimation is required.
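
    As a minimal illustration of the stratified-sampling (StSa) idea applied to the convolution integral, the sketch below draws one random instant per stratum and forms a Riemann-type sum of g(t,τ) = x(τ)h(t−τ). The signal, impulse response and number of strata are illustrative stand-ins, not the exact estimators or weights analysed in the thesis.

```python
# Hedged sketch: estimating an FIR filter output y(t) = integral of x(tau) h(t - tau) d tau
# by stratified random sampling (StSa): the integration range is split into N equal strata
# and one random instant is drawn per stratum.
import numpy as np

rng = np.random.default_rng(2)

def x(tau):                       # example input signal
    return np.cos(2 * np.pi * 3.0 * tau)

def h(tau):                       # example finite-duration impulse response supported on [0, 1]
    return np.where((tau >= 0.0) & (tau <= 1.0), np.exp(-5.0 * tau), 0.0)

def y_stsa(t, N=200):
    """Stratified-sampling estimate of (x * h)(t); g(t, tau) = x(tau) h(t - tau)."""
    a, b = t - 1.0, t                       # support of h(t - tau), since h lives on [0, 1]
    width = (b - a) / N
    edges = np.linspace(a, b, N + 1)
    tau = edges[:-1] + width * rng.uniform(size=N)   # one uniform random instant per stratum
    return np.sum(x(tau) * h(t - tau)) * width

# Dense Riemann-sum reference for comparison at t = 0.7
t0 = 0.7
tau_dense = np.linspace(t0 - 1.0, t0, 20000, endpoint=False)
y_ref = np.sum(x(tau_dense) * h(t0 - tau_dense)) / 20000

print("StSa estimate:", y_stsa(t0), " reference:", y_ref)
```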

    Estimation of Fourier Transform Using Alias-free Hybrid-Stratified Sampling

    This paper proposes a novel method of estimating the Fourier transform (FT) of deterministic, continuous-time signals from a finite number N of their samples taken from a fixed-length observation window. It uses alias-free hybrid-stratified sampling to probe the processed signal at a mixture of deterministic and random time instants. The FT estimator, specifically designed to work with this sampling scheme, is unbiased, consistent and fast converging. It is shown that if the processed signal has a continuous third derivative, then the estimator's rate of uniform convergence in mean square is N^−5. Therefore, in terms of frequency-independent upper bounds on the FT estimation error, the proposed approach significantly outperforms existing estimators that utilize alias-free sampling, such as total random, stratified and antithetical stratified sampling, whose rate of uniform convergence is N^−1. It is proven here that N^−1 is a guaranteed minimum rate for all stratified-sampling-based estimators satisfying four weak conditions formulated in this paper. Owing to the alias-free nature of the sampling scheme, no constraints are imposed on the spectral support of the processed signal or the frequency ranges for which the Fourier transform is estimated.
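
    For orientation, the sketch below contrasts a plain stratified-sampling FT estimator with a total-random one on a smooth test signal; the paper's hybrid-stratified scheme (mixing deterministic and random instants) and its N^−5 analysis are not reproduced here, and all signal parameters are arbitrary.

```python
# Hedged sketch: empirical comparison of two alias-free estimators of
# X(f) = integral over [0, T] of x(t) e^{-j 2 pi f t} dt at a single frequency.
# This is NOT the paper's hybrid-stratified estimator; it only illustrates the
# convergence-rate gap between total random and stratified sampling.
import numpy as np

rng = np.random.default_rng(3)

T = 1.0
f_eval = 5.0
x = lambda t: np.exp(-t) * np.cos(2 * np.pi * 3.0 * t)   # smooth test signal

def ft_stratified(N):
    """One random sample per stratum of width T/N."""
    width = T / N
    tau = (np.arange(N) + rng.uniform(size=N)) * width
    return width * np.sum(x(tau) * np.exp(-2j * np.pi * f_eval * tau))

def ft_total_random(N):
    """N samples drawn uniformly over the whole window [0, T]."""
    tau = rng.uniform(0.0, T, N)
    return (T / N) * np.sum(x(tau) * np.exp(-2j * np.pi * f_eval * tau))

# Dense Riemann-sum reference for the finite-window Fourier transform
t_ref = np.linspace(0.0, T, 200000, endpoint=False)
X_ref = (T / t_ref.size) * np.sum(x(t_ref) * np.exp(-2j * np.pi * f_eval * t_ref))

for N in (50, 100, 200):
    rmse = lambda est: np.sqrt(np.mean([abs(est(N) - X_ref) ** 2 for _ in range(200)]))
    print(N, "stratified RMSE:", rmse(ft_stratified), " total-random RMSE:", rmse(ft_total_random))
```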

    Applications of nonuniform sampling in wideband multichannel communication systems

    This research is an investigation into utilising randomised sampling in communication systems to ease the sampling rate requirements of digitally processing narrowband signals residing within a wide range of monitored frequencies. By harnessing the aliasing suppression capabilities of such sampling schemes, it is shown that certain processing tasks, namely spectrum sensing, can be performed at significantly lower sampling rates than those demanded by uniform-sampling-based digital signal processing. The latter imposes sampling frequencies of at least twice the monitored bandwidth regardless of the spectral activity within it; aliasing can otherwise result in irresolvable processing problems, as the spectral support of the present signal is a priori unknown. Lower sampling rates exploit the processing modules' resources (such as power) more efficiently and avoid the possible need for premium, specialised, high-cost DSP, especially if the handled bandwidth is considerably wide. A number of randomised sampling schemes are examined and appropriate spectral analysis tools are used to furnish their salient features. The adopted periodogram-type estimators are tailored to each of the schemes and their statistical characteristics are assessed for stationary and cyclostationary signals. Their ability to alleviate the bandwidth limitation of uniform sampling is demonstrated and the smeared-aliasing defect that accompanies randomised sampling is also quantified. Employing the aforementioned analysis tools, a novel wideband spectrum sensing approach is introduced. It permits the simultaneous sensing of a number of nonoverlapping spectral subbands constituting a wide range of monitored frequencies. The operational sampling rates of the sensing procedure are not limited or dictated by the overseen bandwidth, unlike uniform-sampling-based techniques. Prescriptive guidelines are developed to ensure that the proposed technique satisfies certain detection probabilities predefined by the user. These recommendations address the trade-off between the required sampling rate and the length of the signal observation window (sensing time) in a given scenario. Various aspects of the introduced multiband spectrum sensing approach are investigated and its applicability highlighted.
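
    The periodogram-based sensing idea can be sketched as follows. The subband layout, detection statistic and threshold below are illustrative stand-ins; the thesis derives the estimator statistics and the prescriptive detection thresholds rigorously.

```python
# Hedged sketch of periodogram-type multiband sensing from randomly sampled data.
# Two narrowband carriers inside a 0-1000 Hz monitored range are observed with an
# average rate (512 Hz) far below the 2 kHz uniform-sampling requirement.
import numpy as np

rng = np.random.default_rng(4)

T, N = 1.0, 512                            # observation window (s), number of random samples
t = rng.uniform(0.0, T, N)

x = np.cos(2 * np.pi * 330.0 * t) + 0.7 * np.cos(2 * np.pi * 810.0 * t)

freqs = np.arange(1.0, 1000.0)
X = (T / N) * np.exp(-2j * np.pi * np.outer(freqs, t)) @ x
P = np.abs(X) ** 2                         # periodogram-type estimate (smeared aliasing forms the noise floor)

band_edges = np.arange(0.0, 1001.0, 100.0) # ten 100 Hz subbands
peaks = np.array([P[(freqs >= lo) & (freqs < hi)].max()
                  for lo, hi in zip(band_edges[:-1], band_edges[1:])])

threshold = 5.0 * np.median(peaks)         # crude data-driven threshold, for illustration only
for (lo, hi), pk in zip(zip(band_edges[:-1], band_edges[1:]), peaks):
    print(f"{int(lo):4d}-{int(hi):4d} Hz: peak {pk:.3f} -> {'ACTIVE' if pk > threshold else 'idle'}")
```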

    Theory and realization of novel algorithms for random sampling in digital signal processing

    Random sampling is a technique which overcomes the alias problem of regular sampling. The randomization, however, destroys the symmetry property of the transform kernel of the discrete Fourier transform. Hence, when transforming a randomly sampled sequence to its frequency spectrum, the fast Fourier transform cannot be applied and the computational complexity is N^2. The objectives of this research project are: (1) To devise sampling methods for random sampling such that computation may be reduced while the anti-alias property of random sampling is maintained. Two methods of inserting limited regularities into the randomized sampling grids are proposed, parallel additive random sampling and hybrid additive random sampling, both of which can save at least 75% of the multiplications required. The algorithms also lend themselves to implementation on a multiprocessor system, which further enhances the speed of the evaluation. (2) To study the auto-correlation sequence of a randomly sampled sequence as an alternative means to confirm its anti-alias property. The anti-alias property of the two proposed methods can be confirmed by using convolution in the frequency domain; the same conclusion is also reached by analysing, in the spatial domain, the auto-correlation of such sample sequences. A technique to evaluate the auto-correlation sequence of a randomly sampled sequence with a regular step size is proposed. The technique may also serve as an algorithm to convert a randomly sampled sequence to a regularly spaced sequence having a desired Nyquist frequency. (3) To provide a rapid spectral estimation using a coarse kernel. The approximate method proposed by Mason in 1980, which trades accuracy for speed of computation, is introduced to make random sampling more attractive. (4) To suggest possible applications for random and pseudo-random sampling. To fully exploit its advantages, random sampling has been adopted in measurement instruments where computing a spectrum is either minimal or not required; such applications in instrumentation are easily found in the literature. In this thesis, two applications in digital signal processing are introduced. (5) To suggest an inverse transformation for random sampling so as to complete a two-way process and to broaden its scope of application. Apart from the above, a case study of realizing the prime factor algorithm with regular sampling in a transputer network is given in Chapter 2, and a rough estimation of the signal-to-noise ratio for a spectrum obtained from random sampling is found in Chapter 3. Although random sampling is alias-free, problems in computational complexity and noise prevent it from being adopted widely in engineering applications. In the conclusions, the criteria for adopting random sampling are put forward and the directions for its development are discussed.
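
    A small sketch of plain additive random sampling and of the direct O(N^2) spectral evaluation it entails, since the FFT's symmetric kernel no longer applies. The parallel and hybrid additive variants, and their multiplication savings, are not reproduced here; all signal parameters are arbitrary.

```python
# Hedged sketch: additive random sampling (t[n+1] = t[n] + positive random gap) followed by
# direct spectral evaluation. Every (frequency, sample) pair needs its own complex
# exponential, which is the O(N^2) cost the thesis seeks to reduce.
import numpy as np

rng = np.random.default_rng(5)

N = 512
mean_gap = 1.0 / 100.0                       # average sampling interval (average rate 100 Hz)
gaps = rng.uniform(0.5, 1.5, N) * mean_gap   # jittered positive increments
t = np.cumsum(gaps)                          # additive random sampling grid

f0 = 173.0                                   # tone well above the 50 Hz Nyquist of the average rate
x = np.sin(2 * np.pi * f0 * t)

freqs = np.arange(1.0, 250.0, 0.5)
kernel = np.exp(-2j * np.pi * np.outer(freqs, t))   # direct (non-FFT) transform kernel
X = kernel @ x / N

print("estimated tone frequency:", freqs[np.argmax(np.abs(X))], "Hz")
```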