
    Channel Capacity under Sub-Nyquist Nonuniform Sampling

    This paper investigates the effect of sub-Nyquist sampling on the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel with perfect channel knowledge available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods, which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class under a sampling-rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies with the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling at each branch (possibly at different rates), or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that, for a large class of channels, irregular nonuniform sampling sets, while typically complicated to realize, provide no capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components provides no capacity gain, in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
    Comment: accepted to IEEE Transactions on Information Theory, 201
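
    The characterization described above takes a water-filling form over the best spectral set of measure equal to the sampling rate. The following is a hedged sketch of that form, with assumed notation (H(f): channel frequency response, S_eta(f): noise power spectral density, f_s: sampling rate, nu: water level); see the paper for the precise statement and conditions.

```latex
% Hedged sketch of the water-filling characterization suggested by the abstract
% (assumed notation: H(f) channel response, S_eta(f) noise PSD, f_s sampling rate,
%  nu water level, mu Lebesgue measure, P power budget).
C(f_s) \;=\; \max_{B:\,\mu(B)=f_s}\;
  \int_{B} \frac{1}{2}\,\log^{+}\!\Big(\nu\,\frac{|H(f)|^{2}}{S_{\eta}(f)}\Big)\,\mathrm{d}f,
\qquad \text{subject to} \quad
  \int_{B}\Big[\nu \;-\; \frac{S_{\eta}(f)}{|H(f)|^{2}}\Big]^{+}\mathrm{d}f \;=\; P .
```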

    On the Relationship Between Uniform and Recurrent Nonuniform Discrete-Time Sampling Schemes


    Geometric approach to sampling and communication

    Relationships between the classical, Shannon-type, and geometric-based approaches to sampling are investigated. Some aspects of coding and communication through a Gaussian channel are considered. In particular, a constructive method to determine the quantizing dimension in Zador's theorem is provided. A geometric version of Shannon's Second Theorem is introduced. Applications to Pulse Code Modulation and Vector Quantization of Images are addressed.
    Comment: 19 pages, submitted for publication
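
    For background (not a result of this paper), the geometric, sphere-packing reading of Shannon's Second Theorem for the Gaussian channel can be sketched as follows; the blocklength-n heuristic below is the standard textbook argument and is included only as an assumed point of reference.

```latex
% Classical sphere-packing heuristic for the AWGN channel (background, not this paper's result):
% after adding noise of power N, length-n codewords of power P lie inside a ball of radius
% sqrt(n(P+N)); each codeword occupies a noise ball of radius roughly sqrt(nN).
M \;\lesssim\;
  \frac{\operatorname{vol} B_{n}\big(\sqrt{n(P+N)}\big)}{\operatorname{vol} B_{n}\big(\sqrt{nN}\big)}
  \;=\; \Big(\frac{P+N}{N}\Big)^{n/2}
\quad\Longrightarrow\quad
C \;=\; \frac{1}{n}\log_{2} M \;\to\; \frac{1}{2}\,\log_{2}\!\Big(1+\frac{P}{N}\Big)
\ \text{bits per dimension.}
```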

    Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels

    We explore two fundamental questions at the intersection of sampling theory and information theory: how channel capacity is affected by sampling below the channel's Nyquist rate, and which sub-Nyquist sampling strategy should be employed to maximize capacity. In particular, we derive the capacity of sampled analog channels for three prevalent sampling strategies: sampling with filtering, sampling with filter banks, and sampling with modulation and filter banks. These sampling mechanisms subsume most nonuniform sampling techniques applied in practice. Our analyses illuminate interesting connections between under-sampled channels and multiple-input multiple-output channels. The optimal sampling structures are shown to extract the frequencies with the highest SNR from each aliased frequency set, while suppressing aliasing and out-of-band noise. We also highlight connections between undersampled channel capacity and minimum mean-squared error (MSE) estimation from sampled data. In particular, we show that the filters maximizing capacity and those minimizing MSE are equivalent under both filtering and filter-bank sampling strategies. These results demonstrate the effect of sub-Nyquist sampling techniques on channel capacity, and characterize the tradeoff between information rate and sampling rate.
    Comment: accepted to IEEE Transactions on Information Theory, 201
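
    As an illustration of the "keep the strongest frequency in each aliased set" principle stated above, the following Python sketch computes an achievable rate for single-branch filtering followed by uniform sampling at a sub-Nyquist rate. The discretisation, SNR profile, and function names are assumptions made for illustration, not the paper's code.

```python
# Illustrative sketch (not the paper's code): for single-branch "filtering + uniform
# sampling" at rate fs below the Nyquist rate, keep, in each aliased frequency set
# {f + k*fs}, the frequency with the highest SNR, then water-fill power over them.
import numpy as np

def best_alias_capacity(snr, f, fs, power, n_alias):
    """Pick the strongest frequency in each aliased set and water-fill power over them."""
    df = f[1] - f[0]
    n_bins = int(round(fs / df))              # frequency bins per aliasing period
    selected = []
    for i in range(n_bins):                   # one aliased set per bin of width df
        alias_idx = [i + k * n_bins for k in range(n_alias) if i + k * n_bins < len(f)]
        selected.append(max(alias_idx, key=lambda j: snr[j]))
    sel_snr = snr[selected]
    # Water-filling over the selected frequencies: bisect on the water level nu.
    lo, hi = 0.0, power / (n_bins * df) + 1.0 / sel_snr.min()
    for _ in range(100):
        nu = 0.5 * (lo + hi)
        used = np.sum(np.maximum(nu - 1.0 / sel_snr, 0.0)) * df
        lo, hi = (nu, hi) if used < power else (lo, nu)
    return np.sum(0.5 * np.log2(np.maximum(nu * sel_snr, 1.0))) * df

# Toy usage: a channel whose SNR decays with frequency, sampled at half the Nyquist rate.
f = np.linspace(0.0, 1.0, 400, endpoint=False)   # one-sided band of width 1 (arbitrary units)
snr = 10.0 / (1.0 + 25.0 * f**2)                 # assumed SNR profile |H(f)|^2 / S_eta(f)
print(best_alias_capacity(snr, f, fs=0.5, power=1.0, n_alias=2))
```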

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, since medical imaging techniques acquire data in highly structured ways, they provide an opportunity to optimise the imaging pipeline holistically by leveraging data. The central theme of this thesis is to explore different opportunities to exploit data and deep learning to improve the way we extract information, for better, faster and smarter imaging. The thesis explores three distinct problems. The first is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second is the redundancy in the current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that they can be approached holistically, optimising the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of the thesis tackles the interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan-plane detection and CT segmentation. More importantly, these models provide explainability, a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
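
    One concrete ingredient of deep-learning-based accelerated MR reconstruction is a data-consistency step that keeps the network's output faithful to the acquired k-space samples. The sketch below is a minimal, generic version of such a step in Python; the thesis's actual architectures are not reproduced, and the names and the noiseless-consistency assumption are illustrative.

```python
# Minimal sketch of a k-space data-consistency step, a common building block in
# deep-learning MRI reconstruction (illustrative only; not the thesis's exact model).
import numpy as np

def data_consistency(cnn_image, kspace_undersampled, mask):
    """Replace the CNN estimate's k-space values at sampled locations with the measured data."""
    k_est = np.fft.fft2(cnn_image)
    k_dc = np.where(mask, kspace_undersampled, k_est)   # trust measurements where they exist
    return np.fft.ifft2(k_dc)

# Toy usage with a random "image" and a ~4x undersampling mask (assumed setup).
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.25                       # keep about 25% of k-space
kspace_u = np.fft.fft2(image) * mask
cnn_estimate = np.real(np.fft.ifft2(kspace_u))           # stand-in for a CNN de-aliasing step
recon = data_consistency(cnn_estimate, kspace_u, mask)
print(np.abs(recon - image).mean())
```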

    Timing offset and quantization error trade-off in interleaved multi-channel measurements

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011, by Joseph Gary McMichael. Includes bibliographical references (p. 117-118).
    Time-interleaved analog-to-digital converters (ADCs) are traditionally designed with equal quantization granularity in each channel and uniform sampling offsets. Recent work suggests that it is often possible to achieve a better signal-to-quantization-noise ratio (SQNR) with different quantization granularity in each channel, non-uniform sampling, and appropriate reconstruction filtering. This thesis develops a framework for the optimal design of non-uniform sampling constellations to maximize SQNR in time-interleaved ADCs. The first portion of the thesis investigates discrepancies between the additive noise model and uniform quantizers. A simulation is implemented for the multi-channel measurement and reconstruction system, revealing a key inconsistency in the environment of time-interleaved ADCs: cross-channel quantization error correlation. Statistical analysis is presented to characterize error correlation between quantizers with different granularities. A novel ADC architecture is developed based on weighted least squares (WLS) to exploit this correlation, with particular application to time-interleaved ADCs. A "correlated noise model" is proposed that incorporates error correlation between channels; it is shown to perform significantly better than the traditional additive noise model for channels in close proximity. The second portion of the thesis focuses on optimizing channel configurations in time-interleaved ADCs. Analytical and numerical optimization techniques are presented that rely on the additive noise model for determining non-uniform sampling constellations that maximize SQNR. Optimal constellations for critically sampled systems are always uniform, while solution sets for oversampled systems are larger. Systems with diverse bit allocations often exhibit "clusters" of low-precision channels in close proximity. Genetic optimization is shown to be effective for quickly and accurately determining optimal timing constellations in systems with many channels. Finally, a framework for the efficient design of optimal channel configurations is formulated that incorporates statistical analysis of cross-channel quantization error correlation and solutions based on the additive noise model. For homogeneous bit allocations, the framework proposes timing-offset corrections to avoid performance degradation relative to the optimal scenario predicted by the additive noise model. For diverse bit allocations, the framework proposes timing corrections and a "unification" of low-precision quantizers in close proximity. This technique yields significant performance improvements over the previously known optimal additive-noise-model solution.
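
    To make the setting concrete, the following Python sketch simulates a small time-interleaved ADC with a non-uniform timing constellation and different per-channel bit depths, and reconstructs the input by weighted least squares under the simple additive noise model. It is an illustration only: the thesis's correlated-noise model, optimal constellations, and genetic search are not reproduced, and every parameter choice below is an assumption.

```python
# Non-uniform time-interleaved sampling with per-channel quantization, reconstructed by
# weighted least squares (WLS) under the additive noise model (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(1)
M, N, T = 4, 64, 1.0 / 64                              # channels, samples/channel, base period
bits = np.array([4, 4, 8, 8])                          # different granularity per channel
offsets = np.array([0.00, 0.23, 0.52, 0.74]) * M * T   # assumed non-uniform timing constellation

# Band-limited test signal: a handful of random Fourier coefficients.
K = 8                                                  # highest harmonic
freqs = np.arange(-K, K + 1)
coeffs = rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)
signal = lambda t: np.real(np.exp(2j * np.pi * np.outer(t, freqs)) @ coeffs)

# Each channel samples every M*T seconds at its own offset, then quantizes.
samples, weights, times = [], [], []
for m in range(M):
    t_m = offsets[m] + np.arange(N) * M * T
    x_m = signal(t_m)
    step = (x_m.max() - x_m.min()) / 2 ** bits[m]
    q_m = step * np.round(x_m / step)                  # uniform mid-tread quantizer
    samples.append(q_m)
    times.append(t_m)
    weights.append(np.full(N, 12.0 / step ** 2))       # inverse quantization-noise variance

t_all, y_all, w_all = map(np.concatenate, (times, samples, weights))

# WLS estimate of the Fourier coefficients from all (non-uniform) samples.
A = np.exp(2j * np.pi * np.outer(t_all, freqs))
W = np.diag(w_all)
coeffs_hat = np.linalg.solve(A.conj().T @ W @ A, A.conj().T @ W @ y_all)

t_dense = np.linspace(0, 1, 512, endpoint=False)
err = np.real(np.exp(2j * np.pi * np.outer(t_dense, freqs)) @ coeffs_hat) - signal(t_dense)
print("reconstruction SQNR (dB):", 10 * np.log10(np.mean(signal(t_dense) ** 2) / np.mean(err ** 2)))
```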

    Estimation and Calibration Algorithms for Distributed Sampling Systems

    Thesis supervisor: Gregory W. Wornell, Professor of Electrical Engineering and Computer Science.
    Traditionally, the sampling of a signal is performed using a single component such as an analog-to-digital converter. However, many new technologies are motivating the use of multiple sampling components to capture a signal. In some cases, such as sensor networks, multiple components are naturally present in the physical layout; in other cases, like time-interleaved analog-to-digital converters, additional components are added to increase the sampling rate. Although distributing the sampling load across multiple channels can provide large benefits in terms of speed, power, and resolution, a variety of mismatch errors arise that require calibration in order to prevent a degradation in system performance. In this thesis, we develop low-complexity, blind algorithms for the calibration of distributed sampling systems. In particular, we focus on recovery from timing skews that cause deviations from uniform timing. Methods for bandlimited input reconstruction from nonuniform recurrent samples are presented for both the small-mismatch and the low-SNR regimes. Alternative iterative reconstruction methods are developed to give insight into the geometry of the problem. From these reconstruction methods, we develop time-skew estimation algorithms that have high performance and low complexity even for large numbers of components. We also extend these algorithms to compensate for gain mismatch between sampling components. To assess the feasibility of implementation, analysis is also presented for a sequential implementation of the estimation algorithm. In distributed sampling systems, the minimum input reconstruction error depends on the number of sampling components as well as on their sample times. We develop bounds on the expected reconstruction error when the time skews are distributed uniformly. Performance is compared to systems where input measurements are made via projections onto random bases, an alternative to the sinc basis of time-domain sampling. From these results, we provide a framework on which to compare the effectiveness of any calibration algorithm. Finally, we address the topic of extreme oversampling, which pertains to systems with large amounts of oversampling due to redundant sampling components. Calibration algorithms are developed for ordering the components and for estimating the input from ordered components. These algorithms exploit the extra samples in the system to increase estimation performance and decrease computational complexity.
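
    As a toy illustration of blind timing-skew calibration for recurrent nonuniform samples (not the thesis's algorithm), the Python sketch below estimates an unknown skew in a two-channel interleaved sampler by searching for the skew that makes a band-limited least-squares fit to the interleaved samples most self-consistent. The signal model and grid search are assumptions made for illustration.

```python
# Blind estimation of a timing skew in a two-channel recurrent non-uniform sampler by
# minimising the residual of a band-limited least-squares fit (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(2)
K, N, dt = 6, 128, 1.0 / 32                       # harmonics, samples per channel, nominal spacing
freqs = np.arange(-K, K + 1)
coeffs = rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)
signal = lambda t: np.real(np.exp(2j * np.pi * np.outer(t, freqs)) @ coeffs)

true_skew = 0.1 * dt                              # channel 1 samples late by 10% of a slot
t0 = np.arange(N) * 2 * dt                        # channel 0: nominal times
t1 = dt + true_skew + np.arange(N) * 2 * dt       # channel 1: skewed times
y = np.concatenate([signal(t0), signal(t1)])

def residual(skew):
    """Band-limited LS fit assuming the given skew; returns the unexplained energy."""
    t = np.concatenate([t0, dt + skew + np.arange(N) * 2 * dt])
    A = np.exp(2j * np.pi * np.outer(t, freqs))
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(A @ c - y)

skews = np.linspace(-0.5 * dt, 0.5 * dt, 201)
est = skews[np.argmin([residual(s) for s in skews])]
print(f"true skew {true_skew:.4f}, estimated skew {est:.4f}")
```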

    Learned Interferometric Imaging for the SPIDER Instrument

    The Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is an optical interferometric imaging device that aims to offer an alternative to today's large space telescope designs, with reduced size, weight and power consumption. This is achieved through interferometric imaging. State-of-the-art methods for reconstructing images from interferometric measurements adopt proximal optimization techniques, which are computationally expensive and require handcrafted priors. In this work we present two data-driven approaches for reconstructing images from measurements made by the SPIDER instrument. These approaches use deep learning to learn prior information from training data, increasing reconstruction quality and reducing the computation time required to recover images by orders of magnitude. Reconstruction time is reduced to approximately 10 milliseconds, opening up the possibility of real-time imaging with SPIDER for the first time. Furthermore, we show that these methods can also be applied in domains where training data are scarce, such as astronomical imaging, by leveraging transfer learning from domains where plenty of training data are available.
    Comment: 21 pages, 14 figures
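
    The speed-up described above comes from amortising reconstruction: the inverse mapping is learned offline from training data, so recovering a new image no longer requires an iterative proximal solve. The toy Python sketch below illustrates that idea with a random complex measurement matrix standing in for the SPIDER operator and a linear ridge-regression reconstructor standing in for the paper's deep networks; all of it is an assumption for illustration only.

```python
# Toy "learn the inverse from training data" sketch: a random complex matrix A stands in
# for the interferometric measurement operator, and a linear ridge-regression map stands
# in for a learned deep reconstructor (illustrative only, not the paper's method).
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_meas, n_train = 256, 96, 2000
A = (rng.standard_normal((n_meas, n_pix)) + 1j * rng.standard_normal((n_meas, n_pix))) / np.sqrt(n_meas)

# Training pairs: crude smooth random "images" and their noisy measurements.
X = np.cumsum(rng.standard_normal((n_train, n_pix)), axis=1)
Y = X @ A.T + 0.01 * (rng.standard_normal((n_train, n_meas)) + 1j * rng.standard_normal((n_train, n_meas)))

# "Training": fit a linear reconstructor R minimising ||Y R - X||^2 + lam ||R||^2.
lam = 1e-2
R = np.linalg.solve(Y.conj().T @ Y + lam * np.eye(n_meas), Y.conj().T @ X)

# "Inference" is a single matrix multiply, i.e. amortised and fast compared with
# running an iterative proximal solver for every new measurement vector.
x_test = np.cumsum(rng.standard_normal(n_pix))
y_test = A @ x_test + 0.01 * (rng.standard_normal(n_meas) + 1j * rng.standard_normal(n_meas))
x_hat = np.real(y_test @ R)
print("relative error:", np.linalg.norm(x_hat - x_test) / np.linalg.norm(x_test))
```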