
    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions are being pushed forward to sophisticated software algorithms, leaving only delicate, finely-tuned tasks for the circuit level. In this paper, we review sampling strategies that target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting new frontier. Comment: 48 pages, 18 figures; to appear in IEEE Signal Processing Magazine.
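To make the sub-Nyquist premise concrete, the sketch below (plain NumPy; the tone frequency and rates are illustrative assumptions, not taken from the paper) undersamples a tone lying above the sampler's Nyquist limit and checks that it folds to the predicted alias frequency — the basic mechanism that bandpass and multiband sub-Nyquist schemes exploit when the spectral support is known:

```python
import numpy as np

f0 = 90.0      # tone frequency in Hz (hypothetical)
fs = 40.0      # deliberately sub-Nyquist: fs < 2 * f0

n = np.arange(400)
x = np.cos(2 * np.pi * f0 * n / fs)        # samples taken at rate fs

# With known spectral support the tone folds to a predictable alias:
# f_alias = |f0 - k * fs| with k = round(f0 / fs), here |90 - 2*40| = 10 Hz.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
f_alias = freqs[np.argmax(spec)]           # -> 10.0 Hz
```

Knowing which band the tone occupies, the original 90 Hz is recoverable from the 10 Hz alias; the schemes surveyed in the paper generalize this to signals whose band positions are unknown.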

    Xampling: Signal Acquisition and Processing in Union of Subspaces

    We introduce Xampling, a unified framework for the acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression, which narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally-sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally-sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework. Comment: 16 pages, 9 figures; submitted to IEEE for possible publication.
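After discretization, the subspace-detection stage behind these acquisition schemes reduces to a sparse recovery problem y = A a with far fewer measurements than unknowns. The following is a minimal sketch of that idea using random Gaussian measurements and textbook orthogonal matching pursuit — not the MWC's actual analog front end or recovery block, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 3                     # ambient dim, measurements, sparsity

A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns

a0 = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
a0[support] = 1.0 + rng.random(K)        # a K-sparse vector
y = A @ a0                               # compressed, sub-Nyquist-style samples

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick atoms, re-fit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    a_hat = np.zeros(A.shape[1])
    a_hat[idx] = coef
    return a_hat

a_hat = omp(A, y, K)                     # recovers a0 from M << N measurements
```

The detected support plays the role of the "input subspace" in the abstract above; once it is known, processing proceeds at the low rate on a small linear system.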

    Spatial Characteristics of Distortion Radiated from Antenna Arrays with Transceiver Nonlinearities

    The distortion from massive MIMO (multiple-input multiple-output) base stations with nonlinear amplifiers is studied and its radiation pattern is derived. The distortion is analyzed both in-band and out-of-band. By using an orthogonal Hermite representation of the amplified signal, the spatial cross-correlation matrix of the nonlinear distortion is obtained. It shows that, if the input signal to the amplifiers has a dominant beam, the distortion is beamformed in the same way as that beam. When there are multiple beams without any one being dominant, the distortion is practically isotropic. The derived theory is useful for predicting how the nonlinear distortion behaves, analyzing out-of-band radiation, performing reciprocity calibration, and scheduling users in the frequency plane to minimize the effect of in-band distortion.
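The "distortion follows a dominant beam" effect can be reproduced with a toy simulation: a half-wavelength uniform linear array transmitting a single beam through a memoryless third-order nonlinearity, which is a crude stand-in for the amplifier model and Hermite analysis of the paper. Array size, beam angle, and the coefficient c3 below are arbitrary assumptions:

```python
import numpy as np

M = 32                                       # antennas, half-wavelength ULA
theta0 = np.deg2rad(25.0)                    # direction of the single beam
m = np.arange(M)
a = np.exp(1j * np.pi * m * np.sin(theta0))  # steering vector

rng = np.random.default_rng(1)
s = (rng.standard_normal(2000) + 1j * rng.standard_normal(2000)) / np.sqrt(2)
x = np.outer(a, s)                           # per-antenna signal, one dominant beam

c3 = 0.05
d = c3 * np.abs(x) ** 2 * x                  # memoryless third-order distortion only

# Radiated distortion power versus angle: with a single beam, |x_m| is the
# same on every antenna, so the distortion inherits the beam's steering vector.
phi = np.deg2rad(np.linspace(-90, 90, 361))
A = np.exp(1j * np.pi * np.outer(np.sin(phi), m))    # (angles, antennas)
P = np.mean(np.abs(A.conj() @ d) ** 2, axis=1)
peak_deg = float(np.rad2deg(phi[np.argmax(P)]))      # peaks at the beam direction
```

Replacing the single beam with several beams of comparable power spreads the distortion pattern out, consistent with the near-isotropic multi-beam case described above.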

    Real-time signal detection and classification algorithms for body-centered systems

    Body-centered communication systems are motivated primarily by the goal of acquiring and processing biometric signals in order to monitor, and even treat, a medical condition, whether it is caused by an illness or relates to an athlete's performance. Since these systems rest on sensing and processing, signal processing algorithms are a fundamental part of them. This thesis focuses on the real-time signal processing algorithms used both to monitor parameters and to extract the relevant information from the acquired signals. The first part introduces the types of signals and sensors found in body-centered systems. Two specific applications of body-centered systems are then developed, together with the algorithms used in each. The first application is blood glucose control in patients with diabetes; here, a detection method is developed based on pattern classification of erroneous measurements obtained with the commercial continuous monitor "Minimed CGMS". The second application is the monitoring of neural signals. Recent discoveries in this field have demonstrated enormous therapeutic possibilities (for example, patients with total paralysis who can communicate with their surroundings thanks to the monitoring and interpretation of signals from their neurons) as well as entertainment applications. In this work, algorithms for the detection, classification, and compression of neural spikes have been developed and evaluated together with wireless transmission techniques that enable cable-free monitoring. Finally, a chapter is devoted to the wireless transmission of signals in body-centered systems, studying the channel conditions that the body environment presents for signal transmission.
    Traver Sebastiá, L. (2012). Real-time signal detection and classification algorithms for body-centered systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16188
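As a flavour of the neural-spike detection discussed in this thesis abstract, here is a minimal real-time-style detector: an amplitude threshold set from a robust noise estimate, plus a refractory period so each spike is counted once. The sampling rate, spike template, and threshold factor are illustrative assumptions, not the thesis's actual algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 24000                                  # Hz (assumed extracellular rate)
x = 0.1 * rng.standard_normal(fs)           # 1 s of background noise

# Inject four synthetic spikes with a simple biphasic template.
k = np.arange(30)
template = 2.0 * (np.exp(-k / 6.0) - 0.5 * np.exp(-k / 15.0))
spike_times = [2000, 7000, 15000, 20000]
for st in spike_times:
    x[st:st + 30] += template

# Threshold at 5x a robust noise estimate (median absolute deviation).
sigma = np.median(np.abs(x)) / 0.6745
thr = 5.0 * sigma

# Report threshold crossings, enforcing a ~2 ms refractory period so the
# many above-threshold samples of one spike yield a single detection.
detected, last = [], -10**9
for i in np.flatnonzero(x > thr):
    if i - last > 48:
        detected.append(int(i))
        last = i
```

Each detection index would then feed a classification (spike sorting) and compression stage before wireless transmission, as in the pipeline the thesis describes.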

    State-of-the-art report on nonlinear representation of sources and channels

    This report consists of two complementary parts, related to the modeling of two important sources of nonlinearities in a communications system. The first part provides an overview of important past work on the estimation, compression, and processing of sparse data through the use of nonlinear models. The second part summarizes the current state of the art on the representation of wireless channels in the presence of nonlinearities. In addition to the characteristics of the nonlinear wireless fading channel, information is also provided on recent approaches to the sparse representation of such channels.

    Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry

    In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities "decorrelate", and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as "smearing", which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a point spread function (PSF) that depends on baseline length and position. In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network.
    Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity of evaluating the PSF at each sky position. We conclude by implementing position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
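The "averaging is convolution by a boxcar" view is easy to verify numerically: a fringe from an off-centre source rotates in time, boxcar averaging attenuates it by the window's Fourier transform (a sinc), and a tapered window changes that response. The fringe rate, interval, and Hamming taper below are illustrative assumptions, not the baseline-dependent windows designed in this work:

```python
import numpy as np

fringe_rate = 0.01     # cycles per sample (off-centre source, fixed baseline)
N = 50                 # averaging interval in samples
n = np.arange(N)
vis = np.exp(2j * np.pi * fringe_rate * n)   # ideal unit-amplitude visibility

# Boxcar averaging attenuates the fringe by the boxcar's transform, a sinc.
box = abs(vis.mean())                        # measured amplitude loss ("smearing")
pred = abs(np.sinc(fringe_rate * N))         # predicted: sinc(0.5) ~ 0.637

# A tapered window yields a different smearing response (here: less
# attenuation at this fringe rate, at the cost of nominal sensitivity).
w = np.hamming(N)
tapered = abs(np.sum(w * vis) / np.sum(w))
```

Choosing the window per baseline then shapes where this attenuation falls on the sky, which is the field-of-interest shaping idea of the abstract.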