
    Channel Capacity under General Nonuniform Sampling

    This paper develops the fundamental capacity limits of a sampled analog channel under a sub-Nyquist sampling rate constraint. In particular, we derive the capacity of sampled analog channels over a general class of time-preserving sampling methods, including irregular nonuniform sampling. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest SNR among all spectral sets of support size equal to the sampling rate. The capacity under sub-Nyquist sampling can be attained through filter-bank sampling, or through a single branch of modulation and filtering followed by uniform sampling, and it is a monotone function of the sampling rate. These results indicate that the optimal sampling schemes suppress aliasing, and that for a large class of channels, employing irregular nonuniform sampling does not provide a capacity gain over uniform sampling sets with appropriate preprocessing.
    Comment: 5 pages, to appear in IEEE International Symposium on Information Theory (ISIT), 201
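
    As a rough sketch of the kind of expression being characterized (notation ours, assuming a channel frequency response H(f), noise power spectral density S_eta(f), sampling rate f_s, and power budget P; the paper's exact statement may differ in constants and constraints), the sub-Nyquist capacity takes a water-filling form restricted to the best spectral set of measure f_s:

        \[
        C(f_s) \;=\; \max_{B:\ \mu(B) = f_s}\ \sup_{\substack{P(f) \ge 0 \\ \int_B P(f)\,df \le P}}\ \int_B \frac{1}{2}\log\!\left(1 + \frac{P(f)\,|H(f)|^2}{S_\eta(f)}\right) df .
        \]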

    Channel Capacity under Sub-Nyquist Nonuniform Sampling

    This paper investigates the effect of sub-Nyquist sampling on the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods, which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling at each branch, possibly at different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, while typically complicated to realize, does not provide a capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide a capacity gain, in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
    Comment: accepted to IEEE Transactions on Information Theory, 201
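
    The following is a minimal numerical sketch (ours, not code from the paper) of the achievability structure described above: pick the spectral set of measure equal to the sampling rate with the highest SNR, then water-fill the power budget over it. The channel model and all parameter values below are illustrative assumptions.

        import numpy as np

        def sub_nyquist_capacity_sketch(snr_density, df, fs, power):
            """Toy illustration: select the highest-SNR frequency bins whose total
            measure equals the sampling rate fs, then water-fill the power budget
            over those bins.

            snr_density : |H(f)|^2 / S_noise(f) evaluated on a frequency grid
            df          : grid spacing (Hz)
            fs          : sub-Nyquist sampling rate (Hz)
            power       : total transmit power budget
            """
            # Select the best bins, i.e. a spectral set of measure fs with highest SNR.
            n_bins = int(round(fs / df))
            best = np.sort(np.argsort(snr_density)[::-1][:n_bins])
            g = snr_density[best]                      # channel SNR gains on the selected set

            # Water-filling: find level mu such that sum(max(mu - 1/g, 0)) * df = power.
            lo, hi = 0.0, power / df + 1.0 / g.min()   # hi is a safe upper bound on mu
            for _ in range(100):                       # bisection on the water level
                mu = 0.5 * (lo + hi)
                used = np.maximum(mu - 1.0 / g, 0.0).sum() * df
                lo, hi = (mu, hi) if used < power else (lo, mu)
            p = np.maximum(mu - 1.0 / g, 0.0)

            # Capacity (nats/s) integrated over the selected bins.
            return 0.5 * np.log1p(p * g).sum() * df

        # Example: a channel whose SNR decays with frequency, sampled at half the bandwidth.
        f = np.linspace(0.0, 1.0, 1000)
        cap = sub_nyquist_capacity_sketch(10.0 * np.exp(-3.0 * f), df=f[1] - f[0], fs=0.5, power=1.0)
        print(f"approximate capacity: {cap:.3f} nats/s")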

    Geometric approach to sampling and communication

    Relationships between the classical, Shannon-type, and geometric approaches to sampling are investigated. Some aspects of coding and communication through a Gaussian channel are considered. In particular, a constructive method to determine the quantizing dimension in Zador's theorem is provided. A geometric version of Shannon's Second Theorem is introduced. Applications to Pulse Code Modulation and Vector Quantization of Images are addressed.
    Comment: 19 pages, submitted for publication
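
    For context, a commonly cited form of Zador's asymptotic for squared-error quantization of a d-dimensional source with density p (our paraphrase of the standard statement, not the paper's) is:

        \[
        \lim_{N\to\infty} N^{2/d}\, D_N(p) \;=\; Q_d \left( \int_{\mathbb{R}^d} p(x)^{d/(d+2)}\, dx \right)^{(d+2)/d},
        \]

    where $D_N(p)$ is the minimum mean-squared error of an $N$-point quantizer and $Q_d$ is a constant depending only on the dimension; as we understand it, the quantizing dimension referred to in the abstract generalizes the role of $d$ in this scaling to sources without a density.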

    On the Minimax Capacity Loss under Sub-Nyquist Universal Sampling

    This paper investigates the information rate loss in analog channels when the sampler is designed to operate independently of the instantaneous channel occupancy. Specifically, a multiband linear time-invariant Gaussian channel under universal sub-Nyquist sampling is considered. The entire channel bandwidth is divided into $n$ subbands of equal bandwidth. At each time only $k$ constant-gain subbands are active, and the instantaneous subband occupancy is not known at the receiver or the sampler. We study the information loss through a capacity loss metric, that is, the capacity gap caused by the lack of instantaneous subband occupancy information. We characterize the minimax capacity loss for the entire sub-Nyquist rate regime, provided that the number $n$ of subbands and the SNR are both large. The minimax limits depend almost solely on the band sparsity factor and the undersampling factor, modulo residual terms that vanish as $n$ and the SNR grow. Our results highlight the power of randomized sampling methods (i.e., samplers that consist of random periodic modulation and low-pass filters), which are able to approach the minimax capacity loss with exponentially high probability.
    Comment: accepted to IEEE Transactions on Information Theory. It has been presented in part at the IEEE International Symposium on Information Theory (ISIT) 201
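
    A toy sketch (ours; the modulation pattern, period, and test signal are arbitrary assumptions, not the paper's construction) of the sampler class highlighted above, namely random periodic modulation followed by low-pass filtering and uniform downsampling:

        import numpy as np

        rng = np.random.default_rng(0)

        def random_modulation_sampler(x, period):
            """Single-branch randomized sampler sketch: multiply the input by a
            +/-1 sequence that repeats with the given period, low-pass by block
            averaging, and keep one output sample per period."""
            pattern = rng.choice([-1.0, 1.0], size=period)   # one period of the modulation
            n_blocks = len(x) // period
            blocks = x[: n_blocks * period].reshape(n_blocks, period)
            # Modulate each block, then block-average (a crude low-pass) -> one sample/block.
            return (blocks * pattern).mean(axis=1)

        # Example: a signal occupying only a few subbands, sampled 8x below Nyquist.
        t = np.arange(4096)
        signal = np.cos(2 * np.pi * 0.31 * t) + 0.5 * np.cos(2 * np.pi * 0.07 * t)
        samples = random_modulation_sampler(signal, period=8)
        print(samples.shape)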

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained processing environment, is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval per snapshot, is demonstrated. This exposes an underlying "conservation of bits" principle: the bit budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with $N$ one-bit sensors per Nyquist-interval, $\Theta(\log N)$ Nyquist-intervals, and total network bitrate $R_{net} = \Theta((\log N)^2)$ (per-sensor bitrate $\Theta((\log N)/N)$), the maximum pointwise distortion goes to zero as $D = O((\log N)^2/N)$, or equivalently $D = O(R_{net}\, 2^{-\beta \sqrt{R_{net}}})$. This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms. For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed are always finite.
    Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theory
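
    A back-of-the-envelope illustration (numbers ours, purely for intuition) of the "conservation of bits" tradeoff described above: a fixed bit budget per Nyquist-interval per snapshot can be split between per-sensor ADC precision and sensor density while the nominal exponential distortion-rate behavior stays the same.

        # Split a fixed bit budget per Nyquist-interval between ADC precision and
        # sensor density; the distortion is modeled here simply as 2^(-budget).
        budget = 24                                   # bits per Nyquist-interval per snapshot
        for precision in (1, 2, 3, 4, 6, 8, 12, 24):  # bits per sensor
            density = budget // precision             # sensors per Nyquist-interval
            print(f"{precision:2d}-bit sensors x {density:2d} per interval "
                  f"-> distortion ~ 2^-{budget} = {2.0**-budget:.1e}")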