
    Anti-noise-folding regularized subspace pursuit recovery algorithm for noisy sparse signals

    © 2014 IEEE. Denoising recovery algorithms are very important for the development of compressed sensing (CS) theory and its applications. Considering the noise present in both the original sparse signal x and the compressive measurements y, we propose a novel denoising recovery algorithm, named Regularized Subspace Pursuit (RSP). First, by introducing a data pre-processing operation, the proposed algorithm alleviates the noise-folding effect caused by the noise added to x. Then, the indices of the nonzero elements in x are identified by regularizing the chosen columns of the measurement matrix. Finally, the chosen indices are updated by retaining only the largest entries in the Minimum Mean Square Error (MMSE) estimated signal. Simulation results show that, compared with the traditional orthogonal matching pursuit (OMP) algorithm, the proposed RSP algorithm increases the successful recovery rate by up to 50% and 86%, and reduces the reconstruction error by up to 35% and 65%, in high-noise scenarios and inadequate-measurement scenarios, respectively.
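The greedy recovery loop underlying this family of algorithms can be illustrated with a simplified subspace-pursuit sketch in Python. This is a generic sketch, not the authors' RSP implementation: the noise-folding pre-processing and the exact MMSE estimator are omitted, and pruning to the K largest least-squares coefficients stands in for the MMSE-based update.

```python
import numpy as np

def subspace_pursuit(A, y, K, iters=10):
    """Recover a K-sparse x from y ≈ A x (simplified sketch)."""
    n = A.shape[1]

    def ls_on(idx):
        # least-squares fit restricted to the columns in idx
        z = np.zeros(n)
        z[idx] = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
        return z

    # initial support: K columns most correlated with y
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(iters):
        # merge support with K new candidates from the residual correlation
        r = y - A @ ls_on(support)
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        # prune merged support back to the K largest coefficients
        z = ls_on(cand)
        new_support = np.argsort(np.abs(z))[-K:]
        if set(new_support) == set(support):
            break
        support = new_support
    return ls_on(support), np.sort(support)
```

With enough measurements relative to the sparsity, the loop typically recovers the exact support in a handful of iterations.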

    Acquisition of Multi-Band Signals via Compressed Sensing


    Compressive Sensing of Multiband Spectrum towards Real-World Wideband Applications.

    Spectrum scarcity is a major challenge in wireless communication systems as they rapidly evolve towards greater capacity and bandwidth. The fact that the real-world spectrum, as a finite resource, is sparsely utilized in certain bands spurs the proposal of spectrum sharing. In wideband scenarios, accurate real-time spectrum sensing, as an enabler of spectrum sharing, can become inefficient as it naturally requires the sampling rate of the analog-to-digital conversion to exceed the Nyquist rate, which is resource-costly and energy-consuming. Compressive sensing techniques have been applied in wideband spectrum sensing to achieve sub-Nyquist-rate sampling of frequency-sparse signals and alleviate such burdens. A major challenge of compressive spectrum sensing (CSS) is the complexity of the sparse recovery algorithm. Greedy algorithms achieve sparse recovery with low complexity but require prior knowledge of the signal sparsity. To remove this requirement, a practical spectrum sparsity estimation scheme is proposed. Furthermore, a reduction of the dimension of the sparse recovery problem is proposed, which further reduces the complexity and achieves signal denoising that promotes recovery fidelity. The robust detection of incumbent radios is also a fundamental problem of CSS. To address the energy detection problem in CSS, the spectrum statistics of the recovered signals are investigated and a practical threshold adaptation scheme for energy detection is proposed. Moreover, it is of particular interest to identify the challenges and opportunities in implementing real-world CSS for systems with large bandwidth. Initial research on the practical issues towards the real-world realization of a wideband CSS system based on the multicoset sampler architecture is presented.
In all, this thesis provides insights into two critical challenges - low-complexity sparse recovery and robust energy detection - in the general CSS context, while also looking into particular issues towards a real-world CSS implementation based on the multicoset sampler.
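The energy detection step the thesis addresses can be sketched generically: split the recovered spectrum into sub-bands and flag bands whose energy exceeds a threshold adapted to the noise floor. The median-based adaptation rule below is an assumption for illustration, not the thesis's proposed scheme.

```python
import numpy as np

def detect_occupied_bands(spectrum, n_bands, factor=3.0):
    """Flag sub-bands whose mean energy exceeds an adaptive threshold."""
    # split the magnitude spectrum into sub-bands and compute per-band energy
    bands = np.array_split(np.abs(spectrum) ** 2, n_bands)
    energies = np.array([b.mean() for b in bands])
    # noise-floor estimate via the median across bands (assumed rule):
    # robust as long as most bands are unoccupied
    threshold = factor * np.median(energies)
    return energies > threshold
```

The median is used because, under sparse occupancy, the majority of bands contain only noise, so the median energy tracks the noise floor even when a few bands carry strong signals.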

    Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation

    At its core, signal acquisition is concerned with efficient algorithms and protocols capable of capturing and encoding the signal information content. For over five decades, the indisputable theoretical benchmark has been the well-known Shannon sampling theorem, and the corresponding notion of information has been indissolubly related to signal spectral bandwidth. Contemporary society is founded on the almost instantaneous exchange of information, which is mainly conveyed in a digital format. Accordingly, modern communication devices are expected to cope with huge amounts of data, in a typical sequence of steps comprising acquisition, processing and storage. Despite continual technological progress, the conventional acquisition protocol has come under mounting pressure and requires a computational effort not related to the actual signal information content. In recent years, a novel sensing paradigm, known as Compressive Sensing (CS), has been quickly spreading among several branches of Information Theory. It relies on two main principles, signal sparsity and incoherent sampling, and employs them to acquire the signal directly in a condensed form. The sampling rate is related to the signal information rate, rather than to the signal spectral bandwidth. Given a sparse signal, its information content can be recovered even from what could appear to be an incomplete set of measurements, at the expense of a greater computational effort at the reconstruction stage. My Ph.D. thesis builds on the field of Compressive Sensing and illustrates how sparsity and incoherence properties can be exploited to design efficient sensing strategies, or to intimately understand the sources of uncertainty that affect measurements. The research activity has dealt with both theoretical and practical issues, inferred from measurement application contexts, ranging from radio frequency communications to synchrophasor estimation and neurological activity investigation.
The thesis is organised in four chapters whose key contributions include:
• definition of a general mathematical model for sparse signal acquisition systems, with particular focus on the implications of sparsity and incoherence;
• characterization of the main algorithmic families for recovering sparse signals from a reduced set of measurements, with particular focus on the impact of additive noise;
• implementation and experimental validation of a CS-based algorithm for providing accurate preliminary information and suitably preprocessed data for a vector signal analyser or a cognitive radio application;
• design and characterization of a CS-based super-resolution technique for spectral analysis in the discrete Fourier transform (DFT) domain;
• definition of an overcomplete dictionary which explicitly accounts for the spectral leakage effect;
• insight into the so-called off-the-grid estimation approach, obtained by properly combining CS-based super-resolution and polar interpolation of DFT coefficients;
• exploration and analysis of the implications of sparsity in quasi-stationary operative conditions, emphasizing the importance of time-varying sparse signal models;
• definition of an enhanced spectral content model for spectral analysis applications in dynamic conditions by means of Taylor-Fourier transform (TFT) approaches.
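The overcomplete-dictionary idea in the contributions can be sketched as a Fourier dictionary on a frequency grid finer than the DFT bins, so that off-grid tones (the source of spectral leakage) admit near-sparse representations. This is a simplified construction for illustration; the thesis's dictionary additionally models the leakage effect explicitly.

```python
import numpy as np

def leakage_dictionary(n_samples, oversample=4):
    """Overcomplete Fourier dictionary on an oversampled frequency grid."""
    # grid density (oversample) is an assumed design parameter
    n_atoms = oversample * n_samples
    t = np.arange(n_samples)[:, None]
    f = np.arange(n_atoms)[None, :] / n_atoms  # normalized frequencies
    # unit-norm complex exponential atoms
    return np.exp(2j * np.pi * t * f) / np.sqrt(n_samples)
```

A tone lying between two DFT bins, which would smear across many DFT coefficients, correlates most strongly with the single nearest fine-grid atom.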

    Low-rank and sparse reconstruction in dynamic magnetic resonance imaging via proximal splitting methods

    Dynamic magnetic resonance imaging (MRI) consists of collecting multiple MR images in time, resulting in a spatio-temporal signal. However, MRI intrinsically suffers from long acquisition times due to various constraints. This limits the full potential of dynamic MR imaging, such as obtaining the high spatial and temporal resolutions that are crucial to observe dynamic phenomena. This dissertation addresses the problem of the reconstruction of dynamic MR images from a limited number of samples arising from a nuclear magnetic resonance experiment. The term limited refers to the approach taken in this thesis to speed up scan time, which is based on violating the Nyquist criterion by skipping measurements that would normally be acquired in a standard MRI procedure. The resulting problem can be classified in the general framework of linear ill-posed inverse problems. This thesis shows how low-dimensional signal models, specifically low-rank and sparsity, can help in the reconstruction of dynamic images from partial measurements. The use of these models is justified by significant developments in signal recovery techniques from partial data that have emerged in recent years in signal processing. The major contributions of this thesis are the development and characterisation of fast and efficient computational tools using convex low-rank and sparse constraints via proximal gradient methods, the development and characterisation of a novel joint reconstruction–separation method via the low-rank plus sparse matrix decomposition technique, and the development and characterisation of low-rank based recovery methods in the context of dynamic parallel MRI. Finally, an additional contribution of this thesis is to formulate the various MR image reconstruction problems in the context of convex optimisation in order to develop algorithms based on proximal splitting methods.
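The low-rank plus sparse decomposition can be sketched with alternating proximal updates. The `svt` and `soft_threshold` functions below are the standard proximal operators of the nuclear norm and the l1 norm; the simple alternating loop is an illustrative stand-in for the thesis's proximal splitting algorithms, not their actual implementation.

```python
import numpy as np

def soft_threshold(X, tau):
    # proximal operator of tau * ||.||_1 (elementwise shrinkage)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular value thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(M, tau_l=1.0, tau_s=0.1, iters=100):
    """Decompose M ≈ L + S with L low-rank, S sparse (simplified sketch)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau_l)            # shrink singular values
        S = soft_threshold(M - L, tau_s)  # shrink entries
    return L, S
```

By the shrinkage property of `soft_threshold`, the residual M - L - S is elementwise bounded by `tau_s` after the final update.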

    From spline wavelet to sampling theory on circulant graphs and beyond – conceiving sparsity in graph signal processing

    Graph Signal Processing (GSP), the field concerned with the extension of classical signal processing concepts to the graph domain, is still at the beginning of the path toward providing a generalized theory of signal processing. As such, this thesis aspires to conceive the theory of sparse representations on graphs by traversing the cornerstones of wavelet and sampling theory on graphs. Beginning with the novel topic of graph spline wavelet theory, we introduce families of spline and e-spline wavelets, and associated filterbanks on circulant graphs, which leverage an inherent vanishing moment property of circulant graph Laplacian matrices (and their parameterized generalizations) for the reproduction and annihilation of (exponential) polynomial signals. Further, these families are shown to provide a stepping stone to generalized graph wavelet designs with adaptive (annihilation) properties. Circulant graphs, which serve as building blocks, facilitate intuitively equivalent signal processing concepts and operations, such that insights can be leveraged for and extended to more complex scenarios, including arbitrary undirected graphs, time-varying graphs, as well as associated signals with space- and time-variant properties, all the while retaining the focus on inducing sparse representations. Further, we shift from sparsity-inducing to sparsity-leveraging theory and present a novel sampling and graph coarsening framework for (wavelet-)sparse graph signals, inspired by Finite Rate of Innovation (FRI) theory and directly building upon (graph) spline wavelet theory. At its core, the introduced Graph-FRI framework states that any K-sparse signal residing on the vertices of a circulant graph can be sampled and perfectly reconstructed from its dimensionality-reduced graph spectral representation of minimum size 2K, while the structure of an associated coarsened graph is simultaneously inferred.
Extensions to arbitrary graphs can be enforced via suitable approximation schemes. Eventually, the gained insights are unified in a graph-based image approximation framework which further leverages graph partitioning and re-labelling techniques for a maximally sparse graph wavelet representation.
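Two basic properties underpinning these constructions can be checked numerically: any graph Laplacian annihilates constant signals (one vanishing moment), and on a circulant graph the DFT vectors are Laplacian eigenvectors. A minimal sketch, where the `neighbors` parameterization of the circulant connectivity is an assumption for illustration:

```python
import numpy as np

def circulant_laplacian(n, neighbors=(1,)):
    """Laplacian of a circulant graph: vertex i connects to i±k, k in neighbors."""
    A = np.zeros((n, n))
    for k in neighbors:
        for i in range(n):
            A[i, (i + k) % n] = 1
            A[i, (i - k) % n] = 1
    # Laplacian L = D - A, with D the diagonal degree matrix
    return np.diag(A.sum(axis=1)) - A
```

Because every row of the Laplacian sums to zero, constant signals lie in its nullspace; the circulant structure additionally diagonalizes in the DFT basis, which is what makes classical filterbank intuition carry over.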

    Design of large polyphase filters in the Quadratic Residue Number System


    Temperature aware power optimization for multicore floating-point units


    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech and gait recognition, have become commonplace nowadays as a means of identity management for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic and handcrafted pre-processing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality; namely, face biometrics, medical electronic signals (EEG and ECG), voice print, and others.

    Bayesian inversion in biomedical imaging

    Biomedical imaging techniques have become a key technology for assessing the structure or function of living organisms in a non-invasive way. Besides innovations in instrumentation, the development of new and improved methods for processing and analysing the measured data has become a vital field of research. Building on traditional signal processing, this area nowadays also comprises mathematical modeling, numerical simulation and inverse problems. The latter describes the reconstruction of quantities of interest from measured data and a given generative model. Unfortunately, most inverse problems are ill-posed, which means that a robust and reliable reconstruction is not possible unless additional a-priori information on the quantity of interest is incorporated into the solution method. Bayesian inversion is a mathematical methodology to formulate and employ a-priori information in computational schemes to solve the inverse problem.
This thesis develops an up-to-date overview of Bayesian inversion and exemplifies the presented concepts and algorithms in various numerical studies, including challenging biomedical imaging applications with experimental data. A particular focus is on using sparsity as a-priori information within the Bayesian framework.
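Using sparsity as a-priori information in Bayesian inversion commonly corresponds to a Laplace prior, whose MAP estimate is the l1-regularized least-squares problem. A minimal sketch solving it with ISTA (a generic proximal method, not necessarily one of the algorithms treated in the thesis):

```python
import numpy as np

def map_laplace(A, y, lam, step, iters=200):
    """MAP estimate under Gaussian likelihood and Laplace (sparsity) prior:
    argmin_x 0.5*||A x - y||^2 + lam*||x||_1, solved by ISTA."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the data-fidelity (negative log-likelihood) term
        x = x - step * (A.T @ (A @ x - y))
        # proximal step on the l1 (negative log-prior) term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```

The step size should not exceed the reciprocal of the largest eigenvalue of A.T @ A for the gradient step to be stable.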