
    Structured Compressed Sensing: From Theory to Applications

    Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using randomized matrices and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers attempting to put some of the existing ideas in the perspective of practical applications. Comment: To appear as an overview paper in IEEE Transactions on Signal Processing.

    Rake, Peel, Sketch:The Signal Processing Pipeline Revisited

    The prototypical signal processing pipeline can be divided into four blocks: representation of the signal in a basis suitable for processing; enhancement of the meaningful part of the signal and noise reduction; estimation of important statistical properties of the signal; and adaptive processing to track and adapt to changes in the signal statistics. This thesis revisits each of these blocks and proposes new algorithms, borrowing ideas from information theory, theoretical computer science, and communications. First, we revisit the Walsh-Hadamard transform (WHT) for the case of a signal sparse in the transformed domain, namely one that has only K ≪ N non-zero coefficients. We show that an efficient algorithm exists that can compute these coefficients in O(K log2(K) log2(N/K)) operations using only O(K log2(N/K)) samples. This algorithm relies on a fast hashing procedure that computes small linear combinations of transformed-domain coefficients. A bipartite graph is formed with linear combinations on one side and non-zero coefficients on the other. A peeling decoder is then used to recover the non-zero coefficients one by one. A detailed analysis of the algorithm, based on error-correcting codes over the binary erasure channel, is given. The second chapter is about beamforming. Inspired by the rake receiver from wireless communications, we recognize that echoes in a room are an important source of extra signal diversity. We extend several classic beamforming algorithms to take advantage of echoes and also propose new optimal formulations. We explore formulations in both the time and frequency domains. We show theoretically and in numerical simulations that the signal-to-interference-and-noise ratio increases proportionally to the number of echoes used. Finally, beyond objective measures, we show that echoes also directly improve speech intelligibility as measured by the perceptual evaluation of speech quality (PESQ) metric.
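    The peeling step described above can be conveyed with a minimal sketch. This is not the actual sparse-WHT algorithm (there, singletons are detected from offset transform samples); instead, each bucket keeps an explicit (count, index-sum, value-sum) summary, a common device in peeling-based sketches, and the multiplicative hash functions are hypothetical choices:

```python
def peel_decode(n_buckets, items, mults=(1, 3, 5)):
    """Recover a sparse set of coefficients from hashed buckets by peeling.

    `items` maps index -> value (the unknown non-zero coefficients); the
    buckets are built here only so there is something to decode. A bucket
    containing exactly one coefficient (a singleton) reveals that coefficient
    directly from its (count, index-sum, value-sum) summary.
    """
    hashes = [lambda i, a=a: (a * i) % n_buckets for a in mults]
    cells = [[[0, 0, 0.0] for _ in range(n_buckets)] for _ in mults]
    for i, v in items.items():                       # hash every coefficient
        for h, row in zip(hashes, cells):
            c = row[h(i)]
            c[0] += 1; c[1] += i; c[2] += v

    recovered, progress = {}, True
    while progress:                                  # peel until no singletons
        progress = False
        for row in cells:
            for c in row:
                if c[0] == 1:                        # singleton found
                    i, v = c[1], c[2]
                    recovered[i] = v
                    for h, row2 in zip(hashes, cells):  # remove it everywhere
                        c2 = row2[h(i)]
                        c2[0] -= 1; c2[1] -= i; c2[2] -= v
                    progress = True
    return recovered
```

Each recovered coefficient is subtracted from every bucket it hashed into, which can turn multi-coefficient buckets into fresh singletons, exactly the erasure-decoding dynamic the chapter analyzes.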
Next, we attack the problem of direction of arrival of acoustic sources, to which we apply a robust finite rate of innovation reconstruction framework. FRIDA, the resulting algorithm, exploits wideband information coherently, works at very low signal-to-noise ratio, and can resolve very close sources. The algorithm can use either raw microphone signals or their cross-correlations. While the former lets us work with correlated sources, the latter creates a quadratic number of measurements that allows us to locate many sources with few microphones. Thorough experiments on simulated and recorded data show that FRIDA compares favorably with the state-of-the-art. We continue by revisiting the classic recursive least squares (RLS) adaptive filter with ideas borrowed from recent results on sketching least squares problems. The exact update of RLS is replaced by a few steps of conjugate gradient descent. We then propose two different preconditioners, obtained by sketching the data, to accelerate the convergence of the gradient descent. Experiments on artificial as well as natural signals show that the proposed algorithm has a performance very close to that of RLS at a lower computational cost. The fifth and final chapter is dedicated to the software and hardware tools developed for this thesis. We describe the pyroomacoustics Python package that contains routines for the evaluation of audio processing algorithms and reference implementations of popular algorithms. We then give an overview of the microphone arrays developed
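The sketching-based acceleration of RLS can be illustrated with a small batch analogue: replace the exact least-squares solve with a few conjugate-gradient steps preconditioned by a Gaussian sketch of the data. The function name, shapes, and regularization constant are illustrative assumptions, not the recursive update from the thesis:

```python
import numpy as np

def sketched_precond_cg(A, b, sketch_size, n_iters=10, lam=1e-3, seed=0):
    """Solve min_w ||A w - b||^2 + lam ||w||^2 by conjugate gradient,
    preconditioned with a Gaussian sketch of A (illustrative batch version)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    SA = S @ A                                           # small sketched copy
    M_inv = np.linalg.inv(SA.T @ SA + lam * np.eye(d))   # preconditioner

    G = A.T @ A + lam * np.eye(d)                        # regularized Gram matrix
    w = np.zeros(d)
    r = A.T @ b - G @ w                                  # normal-equation residual
    z = M_inv @ r
    p = z.copy()
    for _ in range(n_iters):
        Gp = G @ p
        alpha = (r @ z) / (p @ Gp)
        w = w + alpha * p
        r_new = r - alpha * Gp
        if np.linalg.norm(r_new) < 1e-12:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return w
```

Because the sketched Gram matrix approximates the true one, the preconditioned system is well conditioned and a handful of CG steps suffice, which is the source of the computational savings over an exact solve.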

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through summarization and clustering, to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to resource requirements and how to enhance scalability on diverse computing architectures ranging from embedded systems to large computing clusters.


    Visual Techniques for Geological Fieldwork Using Mobile Devices

    Visual techniques in general, and 3D visualisation in particular, have seen considerable adoption within the last 30 years in the geosciences and geology. Geology was among the first branches of science to adopt techniques such as volume visualisation, for analysing subsurface processes, and photo-coloured LiDAR point-based rendering, for digitally exploring rock exposures at the earth's surface. A large amount of digital geological surface and volume data is nowadays available to desktop-based workflows for geological applications such as hydrocarbon reservoir exploration, groundwater modelling, CO2 sequestration and, in the future, geothermal energy planning. On the other hand, analysis and data collection during fieldwork have yet to embrace this “digital revolution”: sedimentary logs, geological maps and stratigraphic sketches are still captured in each geologist's individual fieldbook, and physical rock samples are still transported to the lab for subsequent analysis. Is this still necessary, or are there extended digital means of data collection and exploration in the field? Are modern digital interpretation techniques accurate and intuitive enough to relevantly support fieldwork in geology and other geoscience disciplines? This dissertation aims to address these questions and, by doing so, close the technological gap between geological fieldwork and office workflows in geology. The emergence of mobile devices and their vast array of physical sensors, combined with touch-based user interfaces, high-resolution screens and digital cameras, provides a possible digital platform for field geologists. Their ubiquitous availability increases the chances of adopting digital workflows in the field without additional, expensive equipment. The use of 3D data on mobile devices in the field is furthered by the availability of 3D digital outcrop models and the increasing ease of their acquisition.
This dissertation assesses the prospects of adopting 3D visual techniques and mobile devices within field geology. The research in this dissertation uses previously acquired and processed digital outcrop models in the form of textured surfaces from optical remote sensing and photogrammetry. The scientific papers in this thesis present visual techniques and algorithms to map outcrop photographs taken in the field directly onto the surface models. Automatic mapping allows the projection of photo interpretations of stratigraphy and sedimentary facies onto the 3D textured surface while providing the domain expert with simple-to-use, intuitive tools for the photo interpretation itself. The developed visual approach, combining insight from across the computer sciences dealing with visual information, culminates in the mobile Geological Registration and Interpretation Toolset (GRIT) app, which is assessed in an outcrop analogue study of the Saltwick Formation exposed at Whitby, North Yorkshire, UK. Although applicable to a diversity of study scenarios within petroleum geology and the geosciences, the particular target application of the visual techniques is to easily provide field-based outcrop interpretations for the subsequent construction of training images for multiple-point statistics reservoir modelling, as envisaged within the VOM2MPS project. Despite the success and applicability of the visual approach, numerous drawbacks and probable future extensions are discussed in the thesis based on the conducted studies. Apart from elaborating on the more obvious limitations originating from the use of mobile devices, with their limited computing capabilities and sensor accuracies, a major contribution of this thesis is the careful analysis of conceptual drawbacks of established procedures for modelling, representing, constructing and disseminating the available surface geometry.
A more mathematically accurate geometric description of the underlying algebraic surfaces yields improvements and future applications unaddressed to date in the literature of geology and the computational geosciences. Future extensions to the visual techniques proposed in this thesis also allow for expanded analysis, 3D exploration and improved geological subsurface modelling in general.

    Enabling human physiological sensing by leveraging intelligent head-worn wearable systems

    This thesis explores the challenges of enabling human physiological sensing by leveraging head-worn wearable computer systems. In particular, we want to answer a fundamental question: could we leverage head-worn wearables to enable accurate and socially acceptable solutions that improve human healthcare and prevent life-threatening conditions in our daily lives? To that end, we study techniques that utilise the unique advantages of wearable computers to (1) facilitate new sensing capabilities to capture various biosignals from the brain, the eyes, facial muscles, sweat glands, and blood vessels, (2) address motion artefacts and environmental noise in real time with signal processing algorithms and hardware design techniques, and (3) enable long-term, high-fidelity biosignal monitoring with efficient on-chip intelligence and pattern-driven compressive sensing algorithms. We first demonstrate the ability to capture the activities of the user's brain, eyes, facial muscles, and sweat glands by proposing WAKE, a novel behind-the-ear biosignal sensing wearable. By studying the human anatomy in the ear area, we propose a wearable design that captures brain waves (EEG), eye movements (EOG), facial muscle contractions (EMG), and sweat gland activities (EDA) with a minimal number of sensors. Furthermore, we introduce a Three-fold Cascaded Amplifying (3CA) technique and signal processing algorithms to tame motion artefacts and environmental noise and capture high-fidelity signals in real time. We devise a machine-learning model based on the captured signals to detect microsleep with high temporal resolution. Second, we discuss our work on developing an efficient Pattern-dRiven Compressive Sensing framework (PROS) to enable long-term biosignal monitoring on low-power wearables.
The system introduces tiny on-chip pattern recognition primitives (TinyPR) and a novel pattern-driven compressive sensing technique (PDCS) that exploits the sparsity of biosignals. Together they provide the ability to capture high-fidelity biosignals with an ultra-low power footprint. This development unlocks long-term healthcare applications on wearable computers, such as epileptic seizure monitoring and microsleep detection, which were previously impractical on energy- and resource-constrained wearable computers due to limited battery lifetime, slow response rates, and inadequate biosignal quality. Finally, we further explore the possibility of capturing the activity of a blood vessel (the superficial temporal artery) lying deep inside the user's ear using an ear-worn wearable computer. The captured optical pulse signals (PPG) are used to develop a frequent and comfortable blood pressure monitoring system called eBP. In contrast to existing devices, eBP introduces a novel in-ear wearable system design and algorithms that eliminate the need to block the blood flow inside the ear, alleviating the user's discomfort.
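The compressive sensing principle behind PDCS can be illustrated generically: take far fewer random linear measurements than signal samples and recover the signal by exploiting its sparsity. The sketch below uses orthogonal matching pursuit, a standard recovery routine, not the pattern-driven encoder of PROS; all sizes and the function name are arbitrary choices for illustration:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    n, p = Phi.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        # re-fit by least squares on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(p)
    x_hat[support] = coef
    return x_hat
```

The wearable only transmits the short measurement vector y; reconstruction happens off-device, which is where the power savings for long-term monitoring come from.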

    Scalable Learning Adaptive to Unknown Dynamics and Graphs

    University of Minnesota Ph.D. dissertation. June 2019. Major: Electrical/Computer Engineering. Advisor: Georgios B. Giannakis. 1 computer file (PDF); xii, 174 pages. With the scale of information growing every day, the key challenges in machine learning include the high dimensionality and sheer volume of feature vectors, which may consist of real and categorical data, as well as the speed and typically streaming format of data acquisition, which may also entail outliers and missing entries. The latter may be present either unintentionally or intentionally, in order to cope with scalability, privacy, and adversarial behavior. These challenges provide ample opportunities for algorithmic and analytical innovations in online and nonlinear subspace learning approaches. Among the available nonlinear learning tools, those based on kernels have well-documented merits. However, most rely on a preselected kernel, whose prudent choice presumes task-specific prior information that is generally not available. It is also known that kernel-based methods do not scale well with the size or dimensionality of the data at hand. Besides data science, the urgent need for scalable tools is a core issue also in network science, which has recently emerged as a means of collectively understanding the behavior of complex interconnected entities. The rich spectrum of application domains comprises communication, social, financial, gene-regulatory, brain, and power networks, to name a few. Prominent tasks in all network science applications are topology identification and inference of nodal processes evolving over graphs. Most contemporary graph-driven inference approaches rely on linear and static models that are simple and tractable, but also presume that the nodal processes are directly observable. To cope with these challenges, the present thesis first introduces a novel online categorical subspace learning approach to track the latent structure of categorical data `on the fly.'
Leveraging the random feature approximation, it then develops an adaptive online multi-kernel learning approach (termed AdaRaker), which accounts not only for data-driven learning of the kernel combination, but also for unknown dynamics. Performance analysis is provided in terms of both static and dynamic regret to quantify the novel online function approximation. In addition, the thesis introduces a kernel-based topology identification approach that can account for nonlinear dependencies among nodes and across time. To cope with nodal processes that may not be directly observable in certain applications, tensor-based algorithms that leverage piecewise stationary statistics of nodal processes are developed, and pertinent identifiability conditions are established. To facilitate real-time operation and inference over time-varying networks, an adaptive tensor-decomposition-based scheme is put forth to track the topologies of time-varying networks. Last but not least, the present thesis offers a unifying framework to deal with various learning tasks over possibly dynamic networks, including dimensionality reduction, classification, and clustering. Tests on both synthetic and real datasets from the aforementioned application domains are carried out to showcase the effectiveness of the novel algorithms throughout.
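The random feature approximation that AdaRaker leverages can be sketched for a single Gaussian kernel: random Fourier features give an explicit finite-dimensional map whose inner products approximate kernel evaluations, so kernel learning reduces to cheap linear learning in feature space. Parameter names below are illustrative:

```python
import numpy as np

def rff_features(X, n_features, gamma=1.0, seed=0):
    """Random Fourier features for the Gaussian kernel
    k(x, x') = exp(-gamma * ||x - x'||^2): sample frequencies from the
    kernel's spectral density and return the cosine feature map."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies W ~ N(0, 2*gamma*I), phases b ~ Uniform[0, 2*pi)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Inner products of these features concentrate around the true kernel values at rate O(1/sqrt(n_features)), which is what lets online multi-kernel methods avoid maintaining a growing kernel matrix.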

    Universality Laws and Performance Analysis of the Generalized Linear Models

    In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low-rank, etc.) from noisy linear or non-linear measurements in a variety of applications in genomics, signal processing, wireless communications, and machine learning. Taking advantage of the particular structure of the unknown signal of interest is critical since, in most of these applications, the dimension p of the signal to be estimated is comparable to, or even larger than, the number of observations n. With the advent of compressive sensing, there has been a very large number of theoretical results that study the estimation performance of non-smooth convex optimization in such a high-dimensional setting. A popular approach for estimating an unknown signal β₀ ∈ ℝᵖ in a generalized linear model, with observations y = g(Xβ₀) ∈ ℝⁿ, is via the estimator β̂ = arg min_β L(y, Xβ) + λf(β). Here, L(·,·) is a loss function which is convex with respect to its second argument, and f(·) is a regularizer that enforces the structure of the unknown β₀. We first analyze the generalization error performance of this estimator for the case where the entries of X are drawn independently from the real standard Gaussian distribution. The precise nature of our analysis permits an accurate performance comparison between different instances of these estimators, and allows one to optimally tune the hyperparameters based on the model parameters. We apply our result to some of the most popular cases of generalized linear models, such as M-estimators in linear regression, logistic regression and generalized margin maximizers in binary classification problems, and Poisson regression in count data models. The key ingredient of our proof is the Convex Gaussian Min-max Theorem (CGMT), which is a tight version of the Gaussian comparison inequality proved by Gordon in 1988.
Unfortunately, having real i.i.d. entries in the feature matrix X is crucial in this theorem, and it cannot be naturally extended to other cases. For some special cases, however, we prove universality properties and indirectly extend these results to more general designs of the feature matrix X, where the entries are not necessarily real, independent, or identically distributed. This extension enables us to analyze problems that the CGMT was incapable of handling, such as models with quadratic measurements, phase-lift in phase retrieval, and data recovery in massive MIMO, and helps us settle a few long-standing open problems in these areas.
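One concrete instance of the estimator family analyzed here is the lasso: squared loss L(y, Xβ) = ½‖y − Xβ‖² with the ℓ1 regularizer f(β) = ‖β‖₁. A minimal proximal-gradient (ISTA) solver for it, standard machinery rather than a method from this work, looks as follows:

```python
import numpy as np

def ista(X, y, lam, n_iters=500):
    """Proximal gradient (ISTA) for
       beta_hat = argmin_beta 0.5 * ||y - X @ beta||^2 + lam * ||beta||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of grad
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        z = beta - step * (X.T @ (X @ beta - y))  # gradient step on the loss
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return beta
```

The soft-threshold in the last line is exactly what the regularizer f contributes; swapping in a different prox (e.g. for the nuclear norm) gives other members of the same family.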