
    Channel Capacity under General Nonuniform Sampling

    This paper develops the fundamental capacity limits of a sampled analog channel under a sub-Nyquist sampling rate constraint. In particular, we derive the capacity of sampled analog channels over a general class of time-preserving sampling methods including irregular nonuniform sampling. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest SNR among all spectral sets of support size equal to the sampling rate. The capacity under sub-Nyquist sampling can be attained through filter-bank sampling, or through a single branch of modulation and filtering followed by uniform sampling. The capacity under sub-Nyquist sampling is a monotone function of the sampling rate. These results indicate that the optimal sampling schemes suppress aliasing, and that employing irregular nonuniform sampling does not provide capacity gain over uniform sampling sets with appropriate preprocessing for a large class of channels. Comment: 5 pages, to appear in IEEE International Symposium on Information Theory (ISIT), 201
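    As a rough numeric illustration of the capacity rule this abstract states, the sketch below (Python, with a made-up SNR profile, band, power budget, and frequency grid — none of these values come from the paper) keeps the highest-SNR spectral set of total measure equal to the sampling rate and then water-fills the transmit power over it.

```python
import numpy as np

def sub_nyquist_capacity(snr, df, fs, power):
    """Capacity (bits/s) under the rule stated in the abstract: keep the spectral
    set of measure fs with the highest SNR, then water-fill the power over it.
    snr : per-bin SNR density |H(f)|^2 / N(f) on a grid with spacing df (Hz)."""
    n_keep = int(round(fs / df))                # number of bins of total measure fs
    kept = np.sort(snr)[::-1][:n_keep]          # the highest-SNR bins
    inv = 1.0 / kept                            # water-filling "floor" of each bin
    for k in range(n_keep, 0, -1):              # largest k with all k best bins active
        mu = (power / df + inv[:k].sum()) / k   # water level for k active bins
        if mu > inv[k - 1]:
            p = np.maximum(mu - inv, 0.0)       # power spectral density per kept bin
            return df * np.sum(np.log2(1.0 + p * kept))
    return 0.0

# Made-up channel: 10 MHz band, exponentially decaying SNR, sampled at 4 MHz, 1 W budget
f = np.linspace(0, 10e6, 1000, endpoint=False)
df = f[1] - f[0]
snr = 1e8 * np.exp(-f / 4e6)
print(sub_nyquist_capacity(snr, df, fs=4e6, power=1.0))
```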

    Channel Capacity under Sub-Nyquist Nonuniform Sampling

    This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which include irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, while typically complicated to realize, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes. Comment: accepted to IEEE Transactions on Information Theory, 201
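    The filterbank structure described here can be sketched as follows (hypothetical band selection and rates, ideal filters; this is only a schematic of the sampling stage, not the paper's capacity analysis): each branch demodulates one selected band to baseband, lowpass-filters it, and samples uniformly at that band's width, so the branch rates add up to the overall sub-Nyquist rate and no branch aliases.

```python
import numpy as np

fs_full = 1e3                              # dense "analog" simulation grid (Hz), hypothetical
t = np.arange(0, 1, 1 / fs_full)

# Hypothetical selected high-SNR bands (Hz); their total measure, 30 Hz, is the sampling rate
bands = [(40.0, 60.0), (120.0, 130.0)]

x = np.random.randn(t.size)                # stand-in for the channel output

def branch_samples(x, f_lo, f_hi):
    """One filterbank branch: demodulate the band to baseband, apply an ideal
    lowpass filter, then sample uniformly at the band's width (no aliasing)."""
    bw = f_hi - f_lo
    baseband = x * np.exp(-2j * np.pi * f_lo * t)      # shift the band start to 0 Hz
    X = np.fft.fft(baseband)
    f = np.fft.fftfreq(t.size, 1 / fs_full)
    X[(f < 0) | (f > bw)] = 0                          # ideal lowpass of width bw
    y = np.fft.ifft(X)
    step = int(fs_full / bw)                           # uniform sampling at rate bw
    return y[::step]

samples = [branch_samples(x, lo, hi) for lo, hi in bands]
print([s.size for s in samples])           # branch sample counts over 1 s sum to 30
```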

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only those delicate, finely tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, pinpointing the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting new frontier. Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine

    Sampling systems matched to input processes and image classes

    This dissertation investigates sampling and reconstruction of wide sense stationary (WSS) random processes from their sampled random variables. In this context, two types of sampling systems are studied, namely, interpolation and approximation sampling systems. We aim to determine the properties of the filters in these systems that minimize the mean squared error between the input process and the process reconstructed from its samples. More specifically, for the interpolation sampling system we seek and obtain a closed form expression for an interpolation filter that is optimal in this sense. Likewise, for the approximation sampling system we derive a closed form expression for an optimal reconstruction filter given the statistics of the input process and the antialiasing filter. Using these expressions we show that Meyer-type scaling functions and wavelets arise naturally in the context of subsampled bandlimited processes. We also derive closed form expressions for the mean squared error incurred by both sampling systems. Using the expression for mean squared error we show that for an approximation sampling system, minimum mean squared error is obtained when the antialiasing filter and the reconstruction filter are spectral factors of an ideal brickwall-type filter. Similar results are derived for the discrete-time equivalents of these sampling systems. Finally, we give examples of interpolation and approximation sampling filters and compare their performance with that of some standard filters. The implementation of these systems is based on a novel framework called the perfect reconstruction circular convolution (PRCC) filter bank framework. The results obtained for the one-dimensional case are extended to the multidimensional case. Sampling a multidimensional random field or image class has a greater degree of freedom, and the sampling lattice can be defined by a nonsingular matrix D. The aim is to find optimal filters in multidimensional sampling systems to reconstruct the input image class from its samples on a lattice defined by D. Closed form expressions for filters in multidimensional interpolation and approximation sampling systems are obtained, as are expressions for the mean squared error incurred by each system. For the approximation sampling system it is proved that the antialiasing and reconstruction filters that minimize the mean squared error are spectral factors of an ideal brickwall-type filter whose support depends on the sampling matrix D. Finally, we give examples of filters in the interpolation and approximation sampling systems for an image class derived from a LANDSAT image and a quincunx sampling lattice. The performance of these filters is compared with that of some standard filters in the presence of a quantizer.
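    For the interpolation sampling system, a standard closed-form result of this type is the MMSE interpolation filter H(f) = T·S(f) / Σ_k S(f − k/T) for direct sampling of a WSS process with power spectrum S at rate 1/T; the sketch below evaluates it on a made-up Lorentzian spectrum (the dissertation's own expressions and notation may differ).

```python
import numpy as np

T = 1.0                                    # sampling period (assumed)
f = np.linspace(-2.0, 2.0, 2001)           # frequency grid (Hz)

# Hypothetical input power spectral density, not bandlimited to 1/(2T)
S = 1.0 / (1.0 + (2 * np.pi * f) ** 2)     # Lorentzian spectrum of an AR(1)-like process

# Aliased spectrum: sum of S(f - k/T) over enough shifts to cover the grid
aliased = sum(1.0 / (1.0 + (2 * np.pi * (f - k / T)) ** 2) for k in range(-50, 51))

# MMSE interpolation filter for direct (unfiltered) sampling at rate 1/T:
# H(f) = T * S(f) / sum_k S(f - k/T); it equals 1 only where there is no aliasing.
H = T * S / aliased
print(H[f.size // 2], H[np.argmin(np.abs(f - 0.5))])   # response at f = 0 and f = 1/(2T)
```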

    Generalized sampling theorems in multiresolution subspaces

    It is well known that under very mild conditions on the scaling function, multiresolution subspaces are reproducing kernel Hilbert spaces (RKHSs). This allows for the development of a sampling theory. In this paper, we extend the existing sampling theory for wavelet subspaces in several directions. We consider periodically nonuniform sampling, sampling of a function and its derivatives, oversampling, multiband sampling, and reconstruction from local averages. All these problems are treated in a unified way using the perfect reconstruction (PR) filter bank theory. We give conditions for stable reconstructions in each of these cases. Sampling theorems developed in the past do not allow the scaling function and the synthesizing function to be both compactly supported, except in trivial cases. This restriction no longer applies for the generalizations we study here, due to the existence of FIR PR banks. In fact, with nonuniform sampling, oversampling, and reconstruction from local averages, we can guarantee compactly supported synthesizing functions. Moreover, local averaging schemes have additional nice properties (robustness to the input noise and compression capabilities). We also show that some of the proposed methods can be used for efficient computation of inner products in multiresolution analysis. After this, we extend the sampling theory to random processes. We require autocorrelation functions to belong to some subspace related to wavelet subspaces. It turns out that we cannot recover random processes themselves (unless they are bandlimited) but only their power spectral density functions. We consider both uniform and nonuniform sampling.
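    The uniform-sampling case that this paper generalizes can be sketched in a few lines: if f lies in the multiresolution subspace V0 spanned by integer shifts of a scaling function φ, its integer samples are the expansion coefficients filtered by the sequence φ(n), so the coefficients are recovered by inverting that filter. The sketch below assumes the cubic B-spline as scaling function (φ(0) = 2/3, φ(±1) = 1/6); it is an illustration, not the paper's construction.

```python
import numpy as np

a = np.array([1/6, 2/3, 1/6])              # cubic B-spline values at the integers -1, 0, 1

rng = np.random.default_rng(0)
c_true = rng.standard_normal(64)           # expansion coefficients of some f in V0

# Integer samples f(n) = sum_k c[k] * phi(n - k): the coefficients filtered by phi(n)
samples = np.convolve(c_true, a, mode="same")

# Recover the coefficients by inverting the tridiagonal sampling operator
n = c_true.size
A = (np.diag(np.full(n, 2/3))
     + np.diag(np.full(n - 1, 1/6), 1)
     + np.diag(np.full(n - 1, 1/6), -1))
c_rec = np.linalg.solve(A, samples)

print(np.max(np.abs(c_rec - c_true)))      # ~1e-15: coefficients recovered exactly
```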

    Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels

    We explore two fundamental questions at the intersection of sampling theory and information theory: how channel capacity is affected by sampling below the channel's Nyquist rate, and what sub-Nyquist sampling strategy should be employed to maximize capacity. In particular, we derive the capacity of sampled analog channels for three prevalent sampling strategies: sampling with filtering, sampling with filter banks, and sampling with modulation and filter banks. These sampling mechanisms subsume most nonuniform sampling techniques applied in practice. Our analyses illuminate interesting connections between under-sampled channels and multiple-input multiple-output channels. The optimal sampling structures are shown to extract out the frequencies with the highest SNR from each aliased frequency set, while suppressing aliasing and out-of-band noise. We also highlight connections between undersampled channel capacity and minimum mean-squared error (MSE) estimation from sampled data. In particular, we show that the filters maximizing capacity and the ones minimizing MSE are equivalent under both filtering and filter-bank sampling strategies. These results demonstrate the effect upon channel capacity of sub-Nyquist sampling techniques, and characterize the tradeoff between information rate and sampling rate. Comment: accepted to IEEE Transactions on Information Theory, 201
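    The selection rule for single-branch sampling with filtering can be illustrated directly (made-up SNR profile and rates; the paper's formal statement is in terms of spectral sets): every frequency f0 in [0, fs) defines an aliased set {f0 + k·fs}, and the optimal prefilter passes only the member of each set with the highest SNR while suppressing the rest.

```python
import numpy as np

fs = 4e6                                   # sub-Nyquist sampling rate (Hz), hypothetical
f = np.linspace(0, 10e6, 1000, endpoint=False)
df = f[1] - f[0]
snr = np.abs(np.sinc(f / 3e6)) ** 2        # made-up SNR profile with nulls and sidelobes

# On this grid, bins whose indices are congruent modulo fs/df fall into the same
# aliased set {f0 + k*fs} after sampling at rate fs.
m = int(round(fs / df))
selected = np.zeros(f.size, dtype=bool)
for g in range(m):
    members = np.arange(g, f.size, m)                  # indices of one aliased set
    selected[members[np.argmax(snr[members])]] = True  # keep its highest-SNR member

# The optimal single-branch prefilter passes only the selected frequencies
# and suppresses the rest, so no aliasing remains after uniform sampling.
print(selected.sum(), "bins kept, total measure", selected.sum() * df, "Hz")
```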

    Filter Bank Multicarrier Modulation for Spectrally Agile Waveform Design

    In recent years the demand for spectrum has been steadily growing. With the limited amount of spectrum available, Spectrum Pooling has gained immense popularity. As a result of various studies, it has been established that most of the licensed spectrum remains underutilized. Spectrum Pooling, or spectrum sharing, concentrates on making the most of these whitespaces in the licensed spectrum. These unused parts of the spectrum are usually available in chunks. A secondary user looking to utilize these chunks needs a device capable of transmitting over distributed frequencies while not interfering with the primary user. Such a process is known as Dynamic Spectrum Access (DSA), and a device capable of it is known as a Cognitive Radio. In such a scenario, multicarrier communication, which transmits data across the channel over several frequency subcarriers at a lower per-subcarrier data rate, has gained prominence. Its appeal lies in the fact that it combats frequency-selective fading. Two methods for implementing multicarrier modulation are non-contiguous orthogonal frequency division multiplexing (NC-OFDM) and filter bank multicarrier modulation (FBMC). This thesis aims to implement a novel FBMC transmitter using software-defined radio (SDR) with modulated filters based on a lowpass prototype. FBMC employs two sets of bandpass filters, called analysis and synthesis filters, one at the transmitter and the other at the receiver, in order to filter the collection of subcarriers being transmitted simultaneously in parallel frequencies. The novel aspect of this research is that a wireless transmitter based on non-contiguous FBMC is used to design spectrally agile waveforms for dynamic spectrum access, as opposed to the more popular NC-OFDM. Better spectral containment and bandwidth efficiency, combined with the lack of cyclic prefix processing, make it a viable alternative to NC-OFDM. The main aim of this thesis is to prove that FBMC can be practically implemented for wireless communications. The practicality of the method is tested by transmitting the FBMC signals in real time using the Simulink environment and USRP2 hardware modules.
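    A toy sketch of the synthesis side of such a transmitter (hypothetical prototype filter, subcarrier count, and symbol mapping; it ignores the offset-QAM staggering and overlap-factor design of a practical FBMC system, and is unrelated to the thesis's Simulink/USRP2 implementation): a lowpass prototype is modulated to each subcarrier, the branch symbols are upsampled and filtered, and the branch outputs are summed into the transmitted waveform.

```python
import numpy as np

N = 8                                      # number of subcarriers (hypothetical)
L = 4 * N                                  # prototype filter length (hypothetical)
n_sym = 20                                 # symbols per subcarrier

# Lowpass prototype: a simple windowed-sinc design stands in for a real FBMC prototype
proto = np.hamming(L) * np.sinc((np.arange(L) - (L - 1) / 2) / N)

rng = np.random.default_rng(1)
# QPSK symbols for each subcarrier branch
symbols = (rng.choice([-1, 1], (N, n_sym)) + 1j * rng.choice([-1, 1], (N, n_sym))) / np.sqrt(2)

tx = np.zeros(n_sym * N + L - 1, dtype=complex)
for k in range(N):
    # Synthesis filter for subcarrier k: the prototype modulated to frequency k/N
    g_k = proto * np.exp(2j * np.pi * k * np.arange(L) / N)
    upsampled = np.zeros(n_sym * N, dtype=complex)
    upsampled[::N] = symbols[k]            # expand each branch by the number of subcarriers
    tx += np.convolve(upsampled, g_k)      # filter and accumulate into the composite signal

print(tx.shape, np.round(np.mean(np.abs(tx) ** 2), 3))
```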

    Analog‐to‐Digital Conversion for Cognitive Radio: Subsampling, Interleaving, and Compressive Sensing

    This chapter explores different analog-to-digital conversion techniques that are suitable for implementation in cognitive radio receivers. It details the fundamentals, advantages, and drawbacks of three promising techniques: subsampling, interleaving, and compressive sensing. Due to their greater maturity, subsampling- and interleaving-based systems are described in further detail, whereas compressive sensing-based systems are described as a complement to the previous techniques for underutilized-spectrum applications. The feasibility of these techniques as part of software-defined radio, multistandard, and spectrum sensing receivers is demonstrated by proposing different architectures with reduced complexity at the circuit level, depending on the application requirements. Additionally, the chapter proposes different solutions for integrating the advantages of these techniques into a single analog-to-digital conversion process.
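    The subsampling technique discussed in the chapter relies on deliberate, controlled aliasing of a bandpass signal. A small helper (illustrative band values, not taken from the chapter) lists the classical valid sub-Nyquist rate ranges 2·fH/n ≤ fs ≤ 2·fL/(n−1) for which the band folds to baseband without overlapping itself.

```python
def valid_subsampling_rates(f_lo, f_hi):
    """Classical bandpass-sampling condition: rates fs with
    2*f_hi/n <= fs <= 2*f_lo/(n-1) alias the band [f_lo, f_hi]
    to a lower frequency without self-overlap."""
    bw = f_hi - f_lo
    ranges = []
    for n in range(2, int(f_hi // bw) + 1):    # n = 1 is ordinary Nyquist-rate sampling
        lo, hi = 2 * f_hi / n, 2 * f_lo / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# Hypothetical cognitive-radio band: a 20 MHz channel centred at 2.45 GHz;
# print the three lowest admissible rate ranges
for lo, hi in valid_subsampling_rates(2.44e9, 2.46e9)[-3:]:
    print(f"fs between {lo / 1e6:.2f} and {hi / 1e6:.2f} MHz")
```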