
    Augmented Slepians: Bandlimited Functions that Counterbalance Energy in Selected Intervals

    Slepian functions provide a solution to the optimization problem of joint time-frequency localization. Here, this concept is extended by using a generalized optimization criterion that favors energy concentration in one interval while penalizing energy in another, leading to the "augmented" Slepian functions. Mathematical foundations are presented together with examples that illustrate the most interesting properties of these generalized Slepian functions. The relevance of this novel energy-concentration criterion is also discussed, along with some of its applications.
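In a discrete setting, the energy-concentration criterion described above reduces to a Hermitian eigenvalue problem: bandlimited vectors that maximize energy in one interval minus a penalty on energy in another are top eigenvectors of P(D_A - mu*D_B)P, where P is the bandlimiting projector. A minimal numpy sketch (not taken from the paper; the band width, intervals, and penalty weight mu are illustrative assumptions):

```python
import numpy as np

N = 128
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT matrix

band = np.minimum(n, N - n) <= 8                  # low-pass band (assumed width)
P = F.conj().T @ np.diag(band.astype(float)) @ F  # bandlimiting projector (Hermitian)

A = (n >= 20) & (n < 50)                          # interval where energy is favored
B = (n >= 80) & (n < 110)                         # interval where energy is penalized
mu = 1.0                                          # penalty weight (assumed)

D = np.diag(A.astype(float) - mu * B.astype(float))
vals, vecs = np.linalg.eigh(P @ D @ P)            # Hermitian eigenproblem
g = vecs[:, -1]                                   # top "augmented Slepian" vector

eA = float(np.sum(np.abs(g[A]) ** 2))             # energy inside the favored interval
eB = float(np.sum(np.abs(g[B]) ** 2))             # energy inside the penalized interval
```

With these toy parameters the leading eigenvector puts nearly all of its energy in the favored interval and almost none in the penalized one; setting mu = 0 recovers the classical Slepian concentration problem.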

    Asynchronous Representation and Processing of Analog Sparse Signals Using a Time-Scale Framework

    In this dissertation we investigate the problem of asynchronous representation and processing of analog sparse signals using a time-scale framework. Recently, the design of signal representations has focused on application-driven constraints for optimality purposes. Appearing in many fields such as neuroscience, implantable biomedical diagnostic devices, and sensor network applications, sparse or burst-like signals are of great interest. A common challenge in the representation of such signals is that they exhibit non-stationary behavior with frequency-varying spectra. By ignoring that the maximum frequency of their spectra changes with time, uniform sampling of sparse signals collects samples even in quiescent segments and results in high power dissipation. Continuous monitoring of signals also challenges data acquisition, storage, and processing, especially if remote monitoring is desired, since a large number of samples would have to be generated, stored, and transmitted. Power consumption and the type of processing imposed by the size of the devices in the aforementioned applications have motivated the use of asynchronous approaches in our research. First, we work on establishing a new paradigm for the representation of analog sparse signals using a time-frequency representation. Second, we develop a scale-based signal decomposition framework which uses filter-bank structures for the representation-analysis-compression scheme of the sparse information. Using an asynchronous signal decomposition scheme leads to reduced computational requirements and lower power consumption; thus it is promising for hardware implementation. In addition, the proposed algorithm does not require prior knowledge of the bandwidth of the signal, and the effect of noise can still be alleviated. Finally, we consider the synthesis step, where the target signal is reconstructed from compressed data.
We implement a perfect-reconstruction filter bank based on Slepian wavelets for the reconstruction of sparse signals from non-uniform samples. In this work, experiments on primary biomedical signal applications, such as electroencephalogram (EEG), swallowing signals, and heart sound recordings, have achieved significant improvements over traditional methods in the sensing and processing of sparse data. The results are also promising in applications including compression and denoising.
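As a toy illustration of the signal-driven, asynchronous acquisition that the dissertation contrasts with uniform sampling, here is a send-on-delta (level-crossing) sampler sketch in numpy. The test signal, threshold, and scheme are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)                   # uniform reference time grid
# Bursty, sparse test signal: two short Gaussian pulses in an otherwise quiet trace
x = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.02) ** 2)

delta = 0.1                                       # send-on-delta threshold (assumed)
samples = [(t[0], x[0])]
last = x[0]
for ti, xi in zip(t, x):
    if abs(xi - last) >= delta:                   # signal moved by delta: emit a sample
        samples.append((ti, xi))
        last = xi

n_async, n_uniform = len(samples), len(t)         # events fire only during the bursts
```

Because samples are emitted only when the signal changes, the quiescent segments generate no data at all, which is the power and bandwidth saving the abstract points to.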

    Quantization in acquisition and computation networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165).
    In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems.
Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation.
    By John Z. Sun. Ph.D.
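The expected-relative-error criterion can be illustrated concretely: under high-resolution analysis, a quantizer optimized for relative error wants its point density proportional to 1/x, i.e., logarithmically spaced levels, which is also the intuition behind the Weber-Fechner connection. A small numpy sketch comparing uniform and logarithmic quantizers (the source distribution, range, and level count are assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.01, 1.0, 20000)          # source values spanning two decades
K = 64                                     # number of quantizer levels (assumed)

def quantize(x, levels):
    """Map each value to its nearest level (nearest-neighbor quantizer)."""
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    return levels[idx]

uniform_levels = np.linspace(0.01, 1.0, K)
log_levels = np.geomspace(0.01, 1.0, K)    # point density ~ 1/x

# Expected relative error: E[((x - q(x)) / x)^2]
ere_uniform = float(np.mean(((x - quantize(x, uniform_levels)) / x) ** 2))
ere_log = float(np.mean(((x - quantize(x, log_levels)) / x) ** 2))
```

The logarithmic quantizer achieves a markedly lower ERE because near small x the uniform quantizer's fixed step size translates into a large relative error, while geometric spacing keeps the step roughly proportional to the value being coded.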

    Slepian Wavelets for the Analysis of Incomplete Data on Manifolds

    Many fields in science and engineering measure data that inherently live on non-Euclidean geometries, such as the sphere. Techniques developed in the Euclidean setting must be extended to other geometries. Due to recent interest in geometric deep learning, analogues of Euclidean techniques must also handle general manifolds or graphs. Often, data are only observed over partial regions of manifolds, and thus standard whole-manifold techniques may not yield accurate predictions. In this thesis, a new wavelet basis is designed for datasets like these. Although many definitions of spherical convolutions exist, none fully emulate the Euclidean definition. A novel spherical convolution is developed, designed to tackle the shortcomings of existing methods. The so-called sifting convolution exploits the sifting property of the Dirac delta and is defined through the inner product of a function with a translated version of another. This translation operator is analogous to Euclidean translation in harmonic space and exhibits some useful properties. In particular, the sifting convolution supports directional kernels; has an output that remains on the sphere; and is efficient to compute. The convolution is entirely generic and thus may be used with any set of basis functions. An application of the sifting convolution with a topographic map of the Earth demonstrates that it supports directional kernels to perform anisotropic filtering. Slepian wavelets are built upon the eigenfunctions of the Slepian concentration problem on the manifold: a set of bandlimited functions that are maximally concentrated within a given region. Wavelets are constructed through a tiling of the Slepian harmonic line by leveraging the existing scale-discretised framework. A straightforward denoising formalism demonstrates a boost in signal-to-noise ratio for both a spherical and a general manifold example.
Whilst these wavelets were inspired by spherical datasets, such as those in cosmology, the wavelet construction may be utilised for general manifold or graph data.
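The Slepian concentration problem underlying these wavelets can be sketched on a toy discrete "manifold". In this illustrative numpy sketch (all sizes and the region are assumptions, not from the thesis), a path graph stands in for the manifold, "bandlimited" means spanned by the k smoothest Laplacian eigenvectors, and the most concentrated bandlimited function in a region is the top eigenvector of the restricted concentration operator:

```python
import numpy as np

N = 60                                         # nodes in the toy domain (assumed)
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0                      # path-graph Laplacian

evals, U = np.linalg.eigh(L)
k = 10                                         # bandlimit (assumed)
Uk = U[:, :k]                                  # "bandlimited" basis: k smoothest modes

region = np.zeros(N)
region[10:30] = 1.0                            # observed region R (assumed)

C = Uk.T @ np.diag(region) @ Uk                # concentration operator restricted to R
lam, V = np.linalg.eigh(C)
g = Uk @ V[:, -1]                              # most concentrated bandlimited function

conc = float(np.sum(g[10:30] ** 2) / np.sum(g ** 2))  # fraction of energy inside R
```

The concentration eigenvalues lie in [0, 1], and the leading eigenfunctions, which pack nearly all their energy into the observed region, are exactly the kind of basis the thesis tiles into wavelets.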

    Fast Algorithms for Sampled Multiband Signals

    Over the past several years, computational power has grown tremendously. This has led to two trends in signal processing. First, signal processing problems are now posed and solved using linear algebra, instead of traditional methods such as filtering and Fourier transforms. Second, problems involve increasingly large amounts of data. Applying tools from linear algebra to large-scale problems requires the problem to have some type of low-dimensional structure which can be exploited to perform the computations efficiently. One common type of signal with a low-dimensional structure is a multiband signal, which has a sparsely supported Fourier transform. Transferring this low-dimensional structure from the continuous-time signal to the discrete-time samples requires care. Naive approaches involve using the FFT, which suffers from spectral leakage. A more suitable method to exploit this low-dimensional structure involves using the Slepian basis vectors, which are useful in many problems due to their time-frequency localization properties. However, prior to this research, no fast algorithms for working with the Slepian basis had been developed. As such, practitioners often overlooked the Slepian basis vectors for more computationally efficient tools, such as the FFT, even in problems for which the Slepian basis vectors are a more appropriate tool. In this thesis, we first study the mathematical properties of the Slepian basis, as well as the closely related discrete prolate spheroidal sequences and prolate spheroidal wave functions. We then use these mathematical properties to develop fast algorithms for working with the Slepian basis, a fast algorithm for reconstructing a multiband signal from nonuniform measurements, and a fast algorithm for reconstructing a multiband signal from compressed measurements. The runtime and memory requirements for all of our fast algorithms scale roughly linearly with the number of samples of the signal. Ph.D.
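The Slepian basis vectors mentioned above are the eigenvectors of the prolate matrix with entries sin(2*pi*W*(m-n)) / (pi*(m-n)); roughly 2NW of its eigenvalues cluster near one before plunging to zero, and that spectral concentration is the low-dimensional structure the thesis's fast algorithms exploit. A direct (non-fast, O(N^3)) numpy sketch with assumed parameters:

```python
import numpy as np

N, W = 256, 0.05                                   # length and half-bandwidth (assumed)
m = np.arange(N)
d = m[:, None] - m[None, :]
denom = np.where(d == 0, 1, d)                     # guard the diagonal against 0/0
K = np.where(d == 0, 2 * W, np.sin(2 * np.pi * W * d) / (np.pi * denom))

evals, evecs = np.linalg.eigh(K)                   # Slepian basis = eigenvectors of K
evals, evecs = evals[::-1], evecs[:, ::-1]         # sort by eigenvalue, descending

k_eff = int(2 * N * W)                             # ~2NW eigenvalues cluster near one
```

The first k_eff eigenvalues are essentially 1 and the rest fall off sharply, so a bandlimited-and-timelimited signal is well represented by only ~2NW Slepian coefficients; the thesis's contribution is computing with this basis in roughly linear rather than cubic time.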

    Efficient decentralized communications in sensor networks

    This thesis is concerned with problems in decentralized communication in large networks. Namely, we address the problems of joint rate allocation and transmission of data sources measured at nodes, and of controlling the multiple access of sources to a shared medium. In our study, we consider in particular the important case of a sensor network measuring correlated data. In the first part of this thesis, we consider the problem of correlated data gathering by a network with a sink node and a tree communication structure, where the goal is to minimize the total cost of transporting the information collected by the nodes to the sink node. Two coding strategies are analyzed: a Slepian-Wolf model where optimal coding is complex and transmission optimization is simple, and a joint entropy coding model with explicit communication where coding is simple and transmission optimization is difficult. This problem requires a joint optimization of the rate allocation at the nodes and of the transmission structure. For the Slepian-Wolf setting, we derive a closed-form solution and an efficient distributed approximation algorithm with good performance. We generalize our results to the case of multiple sinks. For the explicit communication case, we prove that building an optimal data gathering tree is NP-complete and we propose various distributed approximation algorithms. We compare asymptotically, for dense networks, the total costs associated with Slepian-Wolf coding and explicit communication, by finding their corresponding scaling laws and analyzing the ratio of their respective costs. We argue that, for large networks and under certain conditions on the correlation structure, "intelligent" but more complex Slepian-Wolf coding provides unbounded gains over the widely used straightforward approach of opportunistic aggregation and compression by explicit communication.
In the second part of this thesis, we consider a queuing problem in which the service rate of a queue is a function of a partially observed Markov chain, and in which the arrivals are controlled based on those partial observations so as to keep the system in a desirable mildly unstable regime. The optimal controller for this problem satisfies a separation property: we first compute a probability measure on the state space of the chain, namely the information state, then use this measure as the new state based on which to make control decisions. We give a formal description of the system considered and of its dynamics, we formalize and solve an optimal control problem, and we present numerical simulations that illustrate properties of the optimal control law with concrete examples. We show how the ergodic behavior of our queuing model is characterized by an invariant measure over all possible information states, and we construct that measure. Our results may be applied to designing efficient and stable algorithms for medium access control in multiple-access systems, in particular for sensor networks.
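The Slepian-Wolf rate allocation discussed in the first part has a simple sanity check: if each node codes at its entropy conditioned on the nodes decoded after it (a corner point of the Slepian-Wolf rate region), the rates sum to the joint entropy by the chain rule, so correlation is exploited without any inter-node communication. A toy Gaussian sketch using differential entropies (the covariance is an illustrative assumption; Slepian-Wolf coding proper concerns discrete sources):

```python
import numpy as np

rng = np.random.default_rng(1)
Amat = rng.standard_normal((4, 4))
Sigma = Amat @ Amat.T + 4 * np.eye(4)     # covariance of 4 correlated sensors (toy)

def h(S):
    """Differential entropy of a Gaussian with covariance S, in bits."""
    k = S.shape[0]
    return 0.5 * np.log2((2 * np.pi * np.e) ** k * np.linalg.det(S))

# Corner point of the Slepian-Wolf region: node i codes at h(X_i | X_{i+1}, ..., X_n)
rates = []
for i in range(4):
    rest = list(range(i + 1, 4))
    joint = h(Sigma[np.ix_([i] + rest, [i] + rest)])
    rates.append(joint - h(Sigma[np.ix_(rest, rest)]) if rest else joint)

total = sum(rates)                        # chain rule: equals the joint entropy
```

Each node's rate depends only on the joint statistics, not on the other nodes' readings, which is why the coding side of the Slepian-Wolf strategy is simple while the transmission-structure optimization carries the difficulty.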

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent developments in basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas.
    Comment: To appear in Proceedings of the IEEE.
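The central GSP construction the overview builds on, the graph Fourier transform, is simply the eigenbasis of a graph Laplacian: Laplacian eigenvalues play the role of frequencies, and filtering means reweighting a signal's expansion in that basis. A minimal numpy sketch on an assumed 8-node ring graph, including a simple low-pass graph filter:

```python
import numpy as np

# Toy graph: an 8-node ring; its GFT is the Laplacian eigenbasis
N = 8
Adj = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
Lap = np.diag(Adj.sum(axis=1)) - Adj
freqs, U = np.linalg.eigh(Lap)            # graph frequencies and GFT basis

nvec = np.arange(N)
x = np.cos(2 * np.pi * nvec / N) + 0.3 * np.cos(6 * np.pi * nvec / N)  # graph signal
xh = U.T @ x                              # forward graph Fourier transform
x_rec = U @ xh                            # inverse transform (perfect reconstruction)

xh_lp = xh.copy()
xh_lp[3:] = 0.0                           # low-pass graph filter: keep 3 smoothest modes
x_lp = U @ xh_lp                          # filtered signal back in the vertex domain
```

On a ring the Laplacian eigenvectors coincide with classical sinusoids, so the graph low-pass filter removes the fast oscillation and keeps the slow one, mirroring conventional DSP; on an irregular graph the same two lines of code still apply, which is the point of the GSP framework.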