
    Overview of compressed sensing: Sensing model, reconstruction algorithm, and its applications

    With the development of intelligent networks such as the Internet of Things, network scales are growing larger and network environments more complex, which poses great challenges to network communication. Issues of energy saving, transmission efficiency, and security have come to the fore. Compressed sensing (CS) helps to address all three of these problems simultaneously in the communication of intelligent networks. In CS, far fewer samples are required to reconstruct sparse or compressible signals, breaking the restriction imposed by the traditional Nyquist-Shannon sampling theorem. Here, we give an overview of recent CS studies along three axes: sensing models, reconstruction algorithms, and applications. First, we introduce several common sensing methods for CS, such as sparse dictionary sensing, block-compressed sensing, and chaotic compressed sensing. We also present several state-of-the-art reconstruction algorithms, including convex optimization, greedy, and Bayesian algorithms. Lastly, we offer recommendations for broad CS applications, such as data compression, image processing, cryptography, and the reconstruction of complex networks. We discuss works related to CS technology and some CS essentials. © 2020 by the authors.
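    As a concrete illustration of the greedy reconstruction algorithms this overview surveys, the sketch below recovers a sparse signal from sub-Nyquist measurements with Orthogonal Matching Pursuit. The Gaussian sensing matrix and the sizes n, m, k are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of greedy CS recovery via Orthogonal Matching Pursuit.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by greedy support selection."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected support, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, sparsity (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                          # far fewer samples than Nyquist would require
x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```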

    Laterally constrained low-rank seismic data completion via cyclic-shear transform

    A crucial step in seismic data processing consists of reconstructing the wavefields at spatial locations where faulty or absent sources and/or receivers result in missing data. Several developments in seismic acquisition and interpolation strive to restore signals fragmented by sampling limitations; still, seismic data frequently remain poorly sampled in the source coordinate, the receiver coordinate, or both. An intrinsic limitation of real-life dense acquisition systems, which are often exceedingly expensive, is that they cannot circumvent various physical and environmental obstacles, ultimately hindering a proper recording scheme. In many situations, when the preferred reconstruction method fails to render the actual continuous signals, subsequent imaging studies are negatively affected by sampling artefacts. A recent alternative builds on low-rank completion techniques to deliver superior restoration results on seismic data, paving the way for data kernel compression that can potentially unlock multiple modern processing methods so far prohibited in 3D field scenarios. In this work, we propose a novel transform domain that reveals the low-rank character of seismic data while avoiding the inherent matrix enlargement introduced when the data are sorted in the midpoint-offset domain, and we develop a robust extension of the current matrix completion framework that accounts for lateral physical constraints, ensuring a degree of similarity among neighbouring points. Our strategy successfully interpolates missing sources and receivers simultaneously in synthetic and field data.
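    The generic core of such completion frameworks can be sketched with singular value thresholding (SVT); the paper's cyclic-shear transform and lateral constraints are not reproduced here. The matrix sizes, rank, threshold tau, and step size below are illustrative assumptions.

```python
# A minimal sketch of low-rank matrix completion by singular value thresholding.
import numpy as np

def svt_complete(M, mask, tau=5.0, step=1.2, iters=200):
    """Fill missing entries of M (mask == False) with a low-rank estimate."""
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += step * mask * (M - X)                # correct on observed entries only
    return X

rng = np.random.default_rng(1)
L, R = rng.standard_normal((60, 4)), rng.standard_normal((4, 60))
M_full = L @ R                            # rank-4 stand-in for a seismic data matrix
mask = rng.random(M_full.shape) < 0.5     # half the traces/samples observed
M_hat = svt_complete(mask * M_full, mask)
print("relative error:", np.linalg.norm(M_hat - M_full) / np.linalg.norm(M_full))
```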

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, to automatically select a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. The corresponding tools have since been widely adopted by several scientific communities, such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. (205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.)
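    Sparse coding as described here amounts to solving a lasso problem; the sketch below does so with iterative soft thresholding (ISTA). The random dictionary and the regularization weight lam are illustrative assumptions; in the monograph's setting the dictionary would be learned from data.

```python
# A minimal sketch of sparse coding via ISTA (proximal gradient for the lasso).
import numpy as np

def ista(D, x, lam=0.1, iters=500):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a - step * (D.T @ (D @ a - x))        # gradient step on the data term
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[rng.choice(128, 6, replace=False)] = 1.0   # a few active atoms
x = D @ a_true                                    # the signal to encode
a_hat = ista(D, x)
print("atoms selected:", np.flatnonzero(np.abs(a_hat) > 1e-3))
```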

    Sub-Nyquist Wideband Spectrum Sensing and Sharing

    The rising popularity of wireless services resulting in spectrum shortage has motivated dynamic spectrum sharing to facilitate efficient usage of the underutilized spectrum. Wideband spectrum sensing is a critical functionality for enabling dynamic spectrum access, since it enhances the opportunities for exploring spectral holes, but it entails a major implementation challenge in compact commodity radios that have limited energy and computation capabilities. The sampling rates specified by the Shannon-Nyquist theorem impose great challenges both on the acquisition hardware and on the subsequent storage and digital signal processors. Sub-Nyquist sampling was thus motivated to sample wideband signals at rates far lower than the Nyquist rate, while still retaining the essential information in the underlying signals. This thesis proposes several algorithms for invoking sub-Nyquist sampling in wideband spectrum sensing. Specifically, a sub-Nyquist wideband spectrum sensing algorithm is proposed that achieves wideband sensing independent of signal sparsity without sampling at full bandwidth, using low-speed analog-to-digital converters based on the sparse Fast Fourier Transform. To lower the spectral sparsity of the signal while maintaining the channel state information, the received signal is pre-processed through a proposed permutation and filtering algorithm. Additionally, a low-complexity sub-Nyquist wideband spectrum sensing scheme is proposed that locates occupied channels blindly by recovering the signal support, based on the jointly sparse nature of multiband signals. Exploiting the common signal support shared among multiple secondary users, an efficient cooperative spectrum sensing scheme is developed in which the energy consumed on signal acquisition, processing, and transmission is reduced with a detection performance guarantee. To further reduce the computational complexity of wideband spectrum sensing, a hybrid framework combining sub-Nyquist wideband spectrum sensing with a geolocation database is explored. Prior channel information from the geolocation database is utilized in the sensing process to reduce the processing requirements on the sensor nodes. The models of the proposed algorithms are derived and verified by numerical analyses and tested on both real-world and simulated TV white space signals.
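    The support-recovery idea can be sketched as follows: a frequency-sparse signal is observed through a random subset of its time samples (a partial inverse-DFT sensing matrix), and the occupied bins are recovered greedily. The thesis's sparse-FFT and cooperative schemes are more elaborate; the rates, sizes, and recovery routine here are illustrative assumptions.

```python
# A minimal sketch of sub-Nyquist support recovery for a frequency-sparse signal.
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 512, 128, 4                       # Nyquist length, sub-Nyquist samples, active bins
F_inv = np.fft.ifft(np.eye(N), axis=0)      # inverse-DFT synthesis matrix
bins = rng.choice(N, K, replace=False)      # occupied channels (unknown to the sensor)
spectrum = np.zeros(N, complex)
spectrum[bins] = 1.0 + 1.0j
signal = F_inv @ spectrum                   # time-domain multiband signal
rows = rng.choice(N, M, replace=False)      # keep only M < N time samples
A, y = F_inv[rows], signal[rows]

support, residual = [], y.copy()
for _ in range(K):                          # greedy (OMP-style) support recovery
    j = int(np.argmax(np.abs(A.conj().T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
print("true bins:", sorted(bins), "detected:", sorted(support))
```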

    Compressive Sensing and Multichannel Spike Detection for Neuro-Recording Systems

    Brain-Machine Interfaces (BMIs) are increasingly important in biomedical research and health care applications, such as medical laboratory tests and analyses, cerebrology, and complementary treatment of neuromuscular disorders. BMIs, and neural recording devices in particular, rely heavily on signal processing methods to provide users with information on the state of various brain functions. Current neural recording devices integrate many parallel channels, which produce a huge amount of data that is difficult to transmit, cannot guarantee the quality of the recorded signals, and may limit on-chip signal processing capabilities. An improved signal processing system is needed to ensure that neural recording devices can cope with rapidly increasing data size and accuracy requirements. This thesis focused on two signal processing approaches – signal compression and reduction – for neural recording devices. First, compressed sensing (CS) was employed for neural signal compression, using a minimum Euclidean or Manhattan distance cluster-based (MDC) deterministic sensing matrix. Sparse and non-sparse neural signals were substantially compressed and later reconstructed with minimal error using the built MDC matrix. Neural signal reduction required spike detection and sorting, which was conducted using a Bayesian inference-based template matching (BBTM) method. Compared with amplitude-based, energy-based, and some other template matching methods, BBTM has high detection accuracy, especially for low signal-to-noise ratio signals, and can separate overlapping spikes acquired from different neurons. In addition, BBTM can automatically generate the needed templates with relatively low system complexity. Finally, a digital online adaptive neural signal processing system, including a spike detector and a CS-based compressor, was designed. Both single- and multi-channel solutions were implemented and evaluated. Compared with signal processing systems in current use, the proposed system can efficiently compress a large number of sampled data points and recover the original signals with a small reconstruction error; it also has low power consumption and a small silicon area. The completed prototype shows considerable promise for application in a wide range of neural recording interfaces.
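    For context, the sketch below shows the classic amplitude-threshold spike detector that methods like BBTM are compared against, using the standard median-based noise estimate. The toy waveform, sampling rate, and threshold multiplier are illustrative assumptions; the thesis's Bayesian template matching is not reproduced here.

```python
# A minimal sketch of amplitude-threshold spike detection on a noisy trace.
import numpy as np

rng = np.random.default_rng(4)
fs = 24000                                   # assumed sampling rate, Hz
trace = 0.1 * rng.standard_normal(fs)        # 1 s of background noise
spike = -np.exp(-np.arange(30) / 6.0)        # toy negative-going spike waveform
spike_times = np.sort(rng.choice(fs - 50, 20, replace=False))
for t in spike_times:
    trace[t:t + 30] += spike

sigma = np.median(np.abs(trace)) / 0.6745    # robust noise estimate
threshold = 4.0 * sigma
crossings = np.flatnonzero(trace < -threshold)
# Keep one detection per spike: ignore crossings within 1 ms of the previous one.
detected = [int(crossings[0])] if crossings.size else []
for c in crossings[1:]:
    if c - detected[-1] > fs // 1000:
        detected.append(int(c))
print(f"{len(detected)} spikes detected of {len(spike_times)} inserted")
```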

    Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

    Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations. (24 pages, 8 figures.)
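    The acquisition chain itself is simple to simulate: mix the input with a pseudo-random ±1 chipping sequence at the Nyquist rate, then integrate and dump to produce low-rate samples. The sizes and rates below are illustrative assumptions; recovery from y would use convex programming or a greedy solver, as the paper describes.

```python
# A minimal sketch of the random demodulator's measurement chain.
import numpy as np

rng = np.random.default_rng(5)
W, R = 1024, 64                              # Nyquist rate W Hz, sampling rate R Hz (assumed)
K = 5
chips = rng.choice([-1.0, 1.0], size=W)      # pseudo-random demodulation sequence
tones = rng.choice(W // 2, K, replace=False) # K active frequencies, unknown a priori
t = np.arange(W) / W
x = sum(np.cos(2 * np.pi * f * t) for f in tones)

mixed = chips * x                            # mix with the chipping sequence
y = mixed.reshape(R, W // R).sum(axis=1)     # integrate-and-dump to R samples per second
print("acquired", y.size, "samples for a bandlimit of", W, "Hz")
```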

    Adapted Compressed Sensing: A Game Worth Playing

    Despite the universal nature of the compressed sensing mechanism, additional information on the class of sparse signals to be acquired allows adjustments that yield substantial improvements. In fact, proper exploitation of these priors makes it possible to significantly increase compression for a given reconstruction quality. Since one of the most promising areas of application of compressed sensing is that of IoT devices subject to extremely tight resource constraints, adaptation is especially interesting when it can cope with hardware-related constraints, allowing low-complexity implementations. Here we review and compare many algorithmic adaptation policies that focus either on the encoding part or on the recovery part of compressed sensing. We also review other, more hardware-oriented adaptation techniques that are able to make a real difference in real-world implementations. In all cases, adaptation proves to be a tool that should be mastered in practical applications to unleash the full potential of compressed sensing.
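    One encoder-side adaptation policy can be sketched as follows: instead of a purely random sensing matrix, rows are drawn with the second-order statistics of the signal class (a rakeness-style adaptation), so each measurement captures more signal energy. The AR(1) correlation model and the 50/50 mixing weight are illustrative assumptions, not a specific policy from this paper.

```python
# A minimal sketch of prior-adapted sensing-matrix design for CS.
import numpy as np

rng = np.random.default_rng(6)
n, m = 128, 32
# Assumed prior: signals with exponentially decaying correlation (AR(1)).
C = 0.95 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
# Blend the prior with the identity so the matrix stays universal enough
# for signals that deviate from the prior, then sample rows from N(0, C_mix).
C_mix = 0.5 * C + 0.5 * np.eye(n)
L = np.linalg.cholesky(C_mix)
A_adapted = rng.standard_normal((m, n)) @ L.T    # rows correlated like the signal class
A_random = rng.standard_normal((m, n))           # conventional universal baseline

x = np.linalg.cholesky(C) @ rng.standard_normal(n)   # a signal drawn from the prior
print("captured energy, adapted vs random:",
      np.linalg.norm(A_adapted @ x), np.linalg.norm(A_random @ x))
```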