9 research outputs found

    From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals

    Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and a low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture which allows either reconstruction of the analog input or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support, and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters. Comment: 17 pages, 12 figures, to appear in the IEEE Journal of Selected Topics in Signal Processing, special issue on Compressed Sensing.
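    A minimal discrete-time sketch of the converter's front end described above: each channel mixes the input with a periodic sign sequence, lowpass filters, and samples at a low rate. All parameters (channel count, sequence period, decimation factor) are illustrative assumptions, not the paper's recommended design.

```python
# Discrete-time simulation of a modulated-wideband-converter-style front end.
# Illustrative sketch only; parameters are not a validated design.
import numpy as np

rng = np.random.default_rng(0)

N = 4096                 # Nyquist-rate samples standing in for the analog input
m = 8                    # number of parallel channels
M = 64                   # period of the mixing sequences, in Nyquist samples
L = 64                   # decimation factor (output rate = Nyquist / L)

# Sparse multiband input: a few random tones across a wide spectrum.
t = np.arange(N)
freqs = rng.uniform(0.05, 0.45, size=3)        # normalized frequencies
x = sum(np.cos(2 * np.pi * f * t) for f in freqs)

# Each channel multiplies x by a periodic +/-1 sequence of period M...
p = rng.choice([-1.0, 1.0], size=(m, M))
mixed = x * np.tile(p, (1, N // M))            # shape (m, N)

# ...then lowpass filters (a crude moving average here) and samples
# L times slower than the Nyquist grid.
kernel = np.ones(L) / L
low = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, mixed)
y = low[:, ::L]                                # (m, N // L): the sub-Nyquist samples

print("input samples:", N, "-> per-channel output samples:", y.shape[1])
```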

    Non-uniform sampling and reconstruction of multi-band signals and its application in wideband spectrum sensing of cognitive radio

    Sampling theories lie at the heart of signal processing devices and communication systems. To accommodate high operating rates while retaining low computational cost, efficient analog-to-digital converters (ADCs) must be developed. Many of the limitations encountered in current converters are due to the traditional assumption that the sampling stage needs to acquire data at the Nyquist rate, corresponding to twice the signal bandwidth. In this thesis, a method of sampling far below the Nyquist rate for sparse-spectrum multiband signals is investigated. The method is called periodic non-uniform sampling, and it is useful in a variety of applications such as data converters, sensor array imaging, and image compression. First, a model of the sampling system in the frequency domain is developed. It relates the Fourier transform of the observed compressed samples to the unknown spectrum of the signal. Next, a reconstruction process based on compressed sensing is provided. We show that the sampling parameters play an important role in the average sampling ratio and the quality of the reconstructed signal. The concept of the condition number and its effect on the reconstructed signal in the presence of noise is introduced, and a feasible approach for choosing a sampling pattern with a low condition number is given. We distinguish between the cases of signals with known and unknown spectra. One of the model parameters is determined by the signal band locations, which in the case of unknown-spectrum signals must be estimated from the sampled data. We therefore apply both subspace methods and non-linear least-squares methods to estimate this parameter. We also use information-theoretic criteria (Akaike and MDL) and the exponential fitting test for model order selection in this case.
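    To illustrate the condition-number criterion mentioned above: in periodic non-uniform (multicoset) sampling with period L, the frequency-domain model restricted to the active spectral cells is a submatrix of an L-point DFT, and its conditioning governs noise amplification. The random trial-and-error pattern search below is an assumed stand-in for illustration, not the thesis's specific selection procedure.

```python
# Score multicoset sampling patterns by the worst-case condition number
# of the corresponding partial-DFT submatrices. Illustrative sketch.
import numpy as np
from itertools import combinations

L = 12          # spectral cells / Nyquist-grid slots per period
p = 4           # cosets kept per period (average rate = p/L of Nyquist)
k = 3           # number of simultaneously active cells (assumed known)

F = np.fft.fft(np.eye(L)) / np.sqrt(L)   # L x L unitary DFT matrix

def worst_condition(pattern):
    # Worst-case condition number over all k-subsets of spectral cells.
    rows = F[np.array(pattern), :]
    return max(np.linalg.cond(rows[:, list(cells)])
               for cells in combinations(range(L), k))

rng = np.random.default_rng(1)
best, best_cond = None, np.inf
for _ in range(200):
    pattern = tuple(sorted(rng.choice(L, size=p, replace=False)))
    c = worst_condition(pattern)
    if c < best_cond:
        best, best_cond = pattern, c

print("best pattern:", best, "worst-case condition number: %.2f" % best_cond)
```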

    Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

    Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations. Comment: 24 pages, 8 figures.
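    The following toy discrete model sketches the random demodulator pipeline: multiply by a random +/-1 chipping sequence, then integrate over blocks (integrate-and-dump sampling). Recovery here uses orthogonal matching pursuit as a simple greedy stand-in for the convex programming the paper analyzes; the DCT sparsity basis and all sizes are illustrative assumptions.

```python
# Toy random-demodulator model with greedy recovery. Illustrative only.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)
W, R, K = 256, 32, 4          # bandlimit proxy, measurements, sparsity

# K-sparse coefficient vector and the time-domain signal it generates.
a = np.zeros(W)
a[rng.choice(W, K, replace=False)] = rng.standard_normal(K)
Psi = idct(np.eye(W), norm="ortho", axis=0)   # DCT synthesis basis
x = Psi @ a

# Chipping + block integration: each measurement sums a sign-flipped block.
eps = rng.choice([-1.0, 1.0], size=W)
H = np.zeros((R, W))
for r in range(R):
    H[r, r * (W // R):(r + 1) * (W // R)] = 1.0   # integrate-and-dump
Phi = H @ np.diag(eps) @ Psi                       # effective sensing matrix
y = Phi @ a

# Orthogonal matching pursuit (greedy stand-in for convex programming).
support, resid = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ resid))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    resid = y - Phi[:, support] @ coef

a_hat = np.zeros(W)
a_hat[support] = coef
print("recovery error: %.2e" % np.linalg.norm(a_hat - a))
```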

    Sensors and analog-to-information converters

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 93-96). Compressed sensing (CS) is a promising method for recovering sparse signals from fewer measurements than ordinarily required by the Shannon sampling theorem [14]. The introduction of CS theory has sparked interest in designing new hardware architectures that are potential substitutes for traditional architectures in communication systems. CS-based wireless sensors and analog-to-information converters (AICs) are two examples of CS-based systems. It has been claimed that such systems can potentially provide higher performance and lower power consumption than traditional systems. However, since there is no end-to-end hardware implementation of these systems, it is difficult to make a fair hardware-to-hardware comparison with other implemented systems. This project aims to fill this gap by examining the energy-performance design space for CS in the context of both practical wireless sensors and AICs. One of the limitations of CS-based systems is that they employ iterative algorithms to recover the signal. Since these algorithms are slow, a hardware solution becomes crucial for higher performance and speed. In this work, we also implement a suitable CS reconstruction algorithm in hardware. by Omid Salehi-Abari. S.M.
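    As an example of the kind of iterative recovery algorithm whose cost motivates a hardware implementation, the sketch below runs iterative hard thresholding, whose inner loop is only multiply-accumulates and a threshold and so maps naturally onto fixed hardware. It is a generic illustration, not the specific algorithm implemented in the thesis.

```python
# Iterative hard thresholding (IHT) on a random sensing problem.
# Generic illustration; dimensions and step size are arbitrary choices.
import numpy as np

rng = np.random.default_rng(3)
n, m, K = 200, 80, 8

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
y = A @ x

mu = 1.0 / np.linalg.norm(A, 2) ** 2           # step size for stability
x_hat = np.zeros(n)
for _ in range(300):
    g = x_hat + mu * A.T @ (y - A @ x_hat)     # gradient step
    keep = np.argsort(np.abs(g))[-K:]          # K largest entries
    x_hat = np.zeros(n)
    x_hat[keep] = g[keep]                      # hard threshold

print("relative error: %.2e" % (np.linalg.norm(x_hat - x) / np.linalg.norm(x)))
```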

    Regime Change: Sampling Rate vs. Bit-Depth in Compressive Sensing

    The compressive sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by exploiting inherent structure in natural and man-made signals. It has been demonstrated that structured signals can be acquired with just a small number of linear measurements, on the order of the signal complexity. In practice, this enables lower sampling rates that can be more easily achieved by current hardware designs. The primary bottleneck that limits ADC sampling rates is quantization, i.e., higher bit-depths impose lower sampling rates. Thus, the decreased sampling rates of CS ADCs accommodate the otherwise limiting quantizer of conventional ADCs. In this thesis, we consider a different approach to CS ADC by shifting towards lower quantizer bit-depths rather than lower sampling rates. We explore the extreme case where each measurement is quantized to just one bit, representing its sign. We develop a new theoretical framework to analyze this extreme case and develop new algorithms for signal reconstruction from such coarsely quantized measurements. The 1-bit CS framework leads us to scenarios where it may be more appropriate to reduce bit-depth instead of sampling rate. We find that there exist two distinct regimes of operation that correspond to high/low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement (as in conventional CS); in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement (as in this thesis). A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement. The above philosophy extends further to practical CS ADC system designs. We propose two new CS architectures, one of which takes advantage of the fact that the sampling and quantization operations are performed by two different hardware components. The former can be employed at high rates with minimal cost, while the latter cannot. Thus, we develop a system that discretizes in time, performs CS preconditioning techniques, and then quantizes at a low rate.
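    A small sketch of recovery from sign-only measurements using binary iterative hard thresholding (BIHT), one algorithm from the 1-bit CS literature and not necessarily the thesis's own method. Since signs carry no amplitude information, the estimate is compared to the true signal only up to scale; the many-cheap-measurements setup below mirrors the QC regime.

```python
# BIHT-style recovery from 1-bit (sign) measurements. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(4)
n, m, K = 100, 500, 5                     # QC regime: many 1-bit measurements

A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x /= np.linalg.norm(x)                    # only the direction is identifiable
y = np.sign(A @ x)                        # 1 bit per measurement

x_hat = np.zeros(n)
tau = 1.0 / m                             # step size (common heuristic)
for _ in range(200):
    g = x_hat + tau * A.T @ (y - np.sign(A @ x_hat))  # sign-mismatch step
    keep = np.argsort(np.abs(g))[-K:]
    x_hat = np.zeros(n)
    x_hat[keep] = g[keep]                 # hard threshold to K entries
x_hat /= np.linalg.norm(x_hat)            # compare directions only

print("angular error: %.3f rad" % np.arccos(np.clip(x_hat @ x, -1, 1)))
```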

    Low-rank matrix recovery: blind deconvolution and efficient sampling of correlated signals

    Low-dimensional signal structures naturally arise in a large set of applications in various fields such as medical imaging, machine learning, signal, and array processing. A ubiquitous low-dimensional structure in signals and images is sparsity, and a new sampling theory, namely compressive sensing, proves that sparse signals and images can be reconstructed from incomplete measurements. The signal recovery is achieved using efficient algorithms such as ℓ1-minimization. Recently, the research focus has broadened to encompass other interesting low-dimensional signal structures such as group sparsity and low-rank structure. This thesis considers low-rank matrix recovery (LRMR) from various structured random measurement ensembles. These results are then employed for an in-depth investigation of the classical blind deconvolution problem from a new perspective, and for the development of a framework for the efficient sampling of correlated signals (signals lying in a subspace). In the first part, we study blind deconvolution: the separation of two unknown signals by observing their convolution. We recast the deconvolution of discrete signals w and x as the recovery of the rank-1 matrix wx* from a structured random measurement ensemble. The convex relaxation of the problem leads to a tractable semidefinite program. We show, using some of the mathematical tools developed recently for LRMR, that if the signals convolved with one another live in known subspaces, then this semidefinite relaxation is provably effective. In the second part, we design various efficient sampling architectures for signals acquired using large arrays. The sampling architectures exploit the correlation in the signals to acquire them at a sub-Nyquist rate. The sampling devices are designed using analog components with clear implementation potential. For each sampling scheme, we show that the signal reconstruction can be framed as an LRMR problem from a structured random measurement ensemble, and the signals can be reconstructed using the familiar nuclear-norm minimization. The sampling theorems derived for each architecture show that the LRMR framework attains Shannon-Nyquist performance for the sub-Nyquist acquisition of correlated signals. In the final part, we study low-rank matrix factorizations using randomized linear algebra. This method allows us to use a least-squares program to reconstruct the unknown low-rank matrix from samples of its row and column spaces. Based on the principles of this method, we then design sampling architectures that not only acquire correlated signals efficiently but also require only a simple least-squares program for signal reconstruction. A theoretical analysis of all of the LRMR problems above is presented in this thesis, providing the number of measurements sufficient for successful reconstruction of the unknown low-rank matrix and upper bounds on the recovery error in both the noiseless and noisy cases. For each LRMR problem, we also discuss a computationally feasible algorithm, including a least-squares-based algorithm and some of the fastest algorithms for solving nuclear-norm minimization. Ph.D.
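    As a generic illustration of the nuclear-norm minimization mentioned above, the sketch below recovers a low-rank matrix by proximal gradient descent with singular-value soft-thresholding. The Gaussian measurement ensemble and all parameters are assumptions for the demo, not the structured ensembles the thesis studies.

```python
# Low-rank matrix recovery via nuclear-norm-regularized least squares,
# solved by proximal gradient (SVT-style). Illustrative sketch.
import numpy as np

rng = np.random.default_rng(5)
n1, n2, r = 20, 20, 2
p = 3 * r * (n1 + n2)                       # a few times the degrees of freedom

X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-r truth
A = rng.standard_normal((p, n1 * n2)) / np.sqrt(p)               # p < n1*n2
y = A @ X.ravel()

def svt(M, tau):
    # Singular-value soft-thresholding: prox of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

Z = np.zeros((n1, n2))
step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size for stability
lam = 1e-3                                  # small nuclear-norm weight
for _ in range(500):
    grad = (A.T @ (A @ Z.ravel() - y)).reshape(n1, n2)
    Z = svt(Z - step * grad, step * lam)

print("relative error: %.2e" % (np.linalg.norm(Z - X) / np.linalg.norm(X)))
```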