
    Message-Passing Estimation from Quantized Samples

    Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal -- sometimes greatly so. This paper develops generalized approximate message passing (GAMP) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. GAMP is a recently-developed class of algorithms that uses Gaussian approximations in belief propagation and allows arbitrary separable input and output channels. Scalar quantization of measurements is incorporated into the output channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving non-regular scalar quantization. Non-regular quantization is empirically demonstrated to greatly improve rate-distortion performance in some problems with oversampling or with undersampling combined with a sparsity-inducing prior. Under the assumption of a Gaussian measurement matrix with i.i.d. entries, the asymptotic error performance of GAMP can be accurately predicted and tracked through the state evolution formalism. We additionally use state evolution to design MSE-optimal scalar quantizers for GAMP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers. Comment: 12 pages, 8 figures
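    As a rough illustration of how scalar quantization fits into GAMP's output-channel formalism, the sketch below computes the posterior mean and variance of a Gaussian pseudo-prior constrained to a quantization cell -- the scalar nonlinearity that GAMP's output step applies componentwise. It assumes a noiseless regular quantizer with finite cell boundaries; it is not the paper's full algorithm, which also covers noisy channels, non-regular cells and the state evolution analysis.

```python
# Sketch of the scalar output-channel step GAMP performs when the measurement
# is a (regular, noiseless) scalar quantizer: given the GAMP pseudo-prior
# z ~ N(p, tau_p) and the knowledge that z fell in the cell [lo, hi], return
# the posterior mean and variance of z (a truncated normal). Illustrative
# sketch only; outer unbounded cells would need the +/-inf limits handled.
import numpy as np
from scipy.stats import norm

def quantizer_output_channel(p, tau_p, lo, hi):
    """Posterior mean/variance of z ~ N(p, tau_p) given z in [lo, hi]."""
    sigma = np.sqrt(tau_p)
    alpha = (lo - p) / sigma
    beta = (hi - p) / sigma
    Z = np.clip(norm.cdf(beta) - norm.cdf(alpha), 1e-12, None)  # cell mass
    phi_a, phi_b = norm.pdf(alpha), norm.pdf(beta)
    mean = p + sigma * (phi_a - phi_b) / Z
    var = tau_p * (1.0 + (alpha * phi_a - beta * phi_b) / Z
                   - ((phi_a - phi_b) / Z) ** 2)
    return mean, var

# Example: pseudo-prior N(0, 1), measurement quantized to the cell [0.5, 1.5)
z_hat, z_var = quantizer_output_channel(p=0.0, tau_p=1.0, lo=0.5, hi=1.5)
print(z_hat, z_var)
```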

    Learning to compress and search visual data in large-scale systems

    The problem of high-dimensional and large-scale representation of visual data is addressed from an unsupervised learning perspective. The emphasis is put on discrete representations, where the description length can be measured in bits and hence the model capacity can be controlled. The algorithmic infrastructure is developed based on the synthesis and analysis prior models, whose rate-distortion properties, as well as capacity vs. sample complexity trade-offs, are carefully optimized. These models are then extended to multiple layers, namely the RRQ and the ML-STC frameworks, where the latter is further evolved into a powerful deep neural network architecture with fast and sample-efficient training and discrete representations. For the developed algorithms, three important applications are developed. First, the problem of large-scale similarity search in retrieval systems is addressed, where a double-stage solution is proposed, leading to faster query times and more compact database storage. Second, the problem of learned image compression is targeted, where the proposed models can capture more redundancies from the training images than conventional compression codecs. Finally, the proposed algorithms are used to solve ill-posed inverse problems. In particular, the problems of image denoising and compressive sensing are addressed with promising results. Comment: PhD thesis dissertation
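    As a generic illustration of the layered-residual idea behind such multi-layer discrete representations (and only that: the RRQ and ML-STC frameworks of the thesis add rate-distortion regularization and sparse ternary codes on top of it), the sketch below stacks k-means codebooks on successive residuals, so that each layer contributes log2(K) bits to the description length.

```python
# Generic multi-layer residual quantizer: each layer learns a small codebook
# on the residual left by the previous layers. Purely illustrative; not the
# thesis's exact RRQ / ML-STC formulation.
import numpy as np
from sklearn.cluster import KMeans

def train_residual_quantizer(X, n_layers=3, K=32, seed=0):
    codebooks, residual = [], X.copy()
    for _ in range(n_layers):
        km = KMeans(n_clusters=K, n_init=4, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]
    return codebooks

def encode(x, codebooks):
    codes, residual = [], x.copy()
    for C in codebooks:
        idx = int(np.argmin(np.linalg.norm(residual - C, axis=1)))
        codes.append(idx)
        residual = residual - C[idx]
    return codes                      # n_layers indices, log2(K) bits each

def decode(codes, codebooks):
    return sum(C[i] for i, C in zip(codes, codebooks))

X = np.random.randn(2000, 64).astype(np.float32)   # stand-in descriptors
cbs = train_residual_quantizer(X)
codes = encode(X[0], cbs)
print(np.linalg.norm(X[0] - decode(codes, cbs)))   # residual distortion
```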

    Adapted Compressed Sensing: A Game Worth Playing

    Despite the universal nature of the compressed sensing mechanism, additional information on the class of sparse signals to acquire allows adjustments that yield substantial improvements. In fact, proper exploitation of these priors makes it possible to significantly increase compression for a given reconstruction quality. Since one of the most promising scopes of application of compressed sensing is that of IoT devices subject to extremely tight resource constraints, adaptation is especially interesting when it can cope with hardware-related constraints while allowing low-complexity implementations. We here review and compare many algorithmic adaptation policies that focus either on the encoding part or on the recovery part of compressed sensing. We also review other, more hardware-oriented adaptation techniques that are actually able to make the difference when it comes to real-world implementations. In all cases, adaptation proves to be a tool that should be mastered in practical applications to unleash the full potential of compressed sensing.
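    For reference, the sketch below shows the non-adapted compressed-sensing baseline that such policies start from: a universal random Gaussian encoder and a plain orthogonal matching pursuit decoder. The dimensions and the choice of OMP are illustrative assumptions, not a policy from the paper.

```python
# Non-adapted compressed-sensing baseline: random Gaussian encoding y = A x
# of a k-sparse signal, recovered with a simple OMP decoder. Adaptation
# policies modify either A (encoder side) or the recovery prior (decoder
# side) starting from a pipeline like this.
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                              # length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # universal (non-adapted) encoder
y = A @ x                                         # low-complexity linear acquisition
print(np.linalg.norm(x - omp(A, y, k)) / np.linalg.norm(x))
```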

    Metodi Matriciali per l'Acquisizione Efficiente e la Crittografia di Segnali in Forma Compressa [Matrix Methods for the Efficient Acquisition and Encryption of Signals in Compressed Form]

    The idea of balancing the resources spent in the acquisition and encoding of natural signals strictly to their intrinsic information content has interested nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations, by modifying the random sensing matrices on which the signals of interest are projected to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing in the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results, leaving room for improvement in the sensing matrix calibration of the devised imager.
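    One simple way to picture second-order adaptation of a sensing ensemble (a sketch of the general idea, not the dissertation's exact information-maximizing criterion) is to draw the sensing rows from a zero-mean Gaussian whose covariance blends an isotropic term with the trace-normalized signal correlation estimated from training data; the blending weight t below is an illustrative knob.

```python
# Sketch of second-order adaptation: rows of the m x n sensing matrix are
# drawn from N(0, C), where C blends an isotropic term with the normalized
# signal correlation estimated from training signals. The recipe and the
# weight t are illustrative assumptions.
import numpy as np

def adapted_sensing_matrix(X_train, m, t=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = X_train.shape[1]
    Cx = (X_train.T @ X_train) / X_train.shape[0]   # signal correlation
    Cx = Cx * n / np.trace(Cx)                      # normalize total energy
    C = (1.0 - t) * np.eye(n) + t * Cx              # blend with identity
    L = np.linalg.cholesky(C + 1e-9 * np.eye(n))
    return rng.standard_normal((m, n)) @ L.T        # rows ~ N(0, C)

# Toy correlated training signals (random walks), then an adapted 32 x 128 matrix
rng = np.random.default_rng(1)
X = np.cumsum(rng.standard_normal((500, 128)), axis=1)
A = adapted_sensing_matrix(X, m=32, t=0.5)
print(A.shape)
```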

    Stochastic Optimization and Machine Learning Modeling for Wireless Networking

    In the last years, the telecommunications industry has seen an increasing interest in the development of advanced solutions that enable communicating nodes to exchange large amounts of data. Indeed, well-known applications such as VoIP, audio streaming, video on demand, real-time surveillance systems, safety vehicular requirements, and remote computing have increased the demand for the efficient generation, utilization, management and communication of larger and larger data quantities. New transmission technologies have been developed to permit more efficient and faster data exchanges, including multiple-input multiple-output architectures and software-defined networking: as an example, the next generation of mobile communication, known as 5G, is expected to provide data rates of tens of megabits per second for tens of thousands of users and only 1 ms latency. In order to achieve such demanding performance, these systems need to effectively model the considerable level of uncertainty related to fading transmission channels, interference, or the presence of noise in the data. In this thesis, we will present how different approaches can be adopted to model these kinds of scenarios, focusing on wireless networking applications.

    In particular, the first part of this work will show how stochastic optimization models can be exploited to design energy management policies for wireless sensor networks. Traditionally, transmission policies are designed to reduce the total amount of energy drawn from the batteries of the devices; here, we consider energy harvesting wireless sensor networks, in which each device is able to scavenge energy from the environment and charge its battery with it. In this case, the goal of the optimal transmission policies is to efficiently manage the energy harvested from the environment, avoiding both energy outage (i.e., no residual energy in a battery) and energy overflow (i.e., the impossibility to store scavenged energy when the battery is already full).

    In the second part of this work, we will explore the adoption of machine learning techniques to tackle a number of common wireless networking problems. These algorithms are able to learn from and make predictions on data, avoiding the need to follow limited static program instructions: models are built from sample inputs, thus allowing for data-driven predictions and decisions. In particular, we will first design an on-the-fly prediction algorithm for the expected time of arrival of WiFi transmissions. This predictor only exploits those network parameters available at each receiving node and does not require additional knowledge from the transmitter, hence it can be deployed without modifying existing standard transmission protocols. Secondly, we will investigate the usage of particular neural network instances known as autoencoders for the compression of biosignals, such as electrocardiographic and photoplethysmographic sequences. A lightweight lossy compressor will be designed, able to be deployed in wearable battery-equipped devices with limited computational power. Thirdly, we will propose a predictor for the long-term channel gain in a wireless network. Unlike other works in the literature, this predictor will only exploit past channel samples, without resorting to additional information such as GPS data. An accurate estimation of this gain would make it possible, e.g., to efficiently allocate resources and foretell future handover procedures.

    Finally, although not strictly related to wireless networking scenarios, we will show how deep learning techniques can be applied to the field of autonomous driving. This final section will deal with state-of-the-art machine learning solutions, showing how these techniques are able to considerably outperform traditional approaches.
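    As a minimal sketch of the autoencoder-based biosignal compressor mentioned above (window length, layer sizes and training setup are illustrative assumptions, not the thesis's exact lightweight architecture), the snippet below trains a small PyTorch autoencoder on fixed-length windows and uses the latent vector as the compressed representation.

```python
# Minimal autoencoder for lossy compression of fixed-length biosignal windows
# (e.g., 128-sample ECG segments): the encoder output is the compressed code,
# the decoder reconstructs the window. Illustrative sizes only.
import torch
import torch.nn as nn

class BiosignalAE(nn.Module):
    def __init__(self, window=128, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = BiosignalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 128)            # stand-in for normalized ECG windows
for _ in range(100):                 # reconstruction (MSE) training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
print(float(loss))                   # compression ratio here: 128 / 16 = 8x
```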

    Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms

    New advances in the field of image sensors (especially in CMOS technology) tend to question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially to relieve the analog-to-digital converters, which generally represent the bottleneck of this type of sensor. In addition, CS eliminates traditional compression stages that are performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: it both considerably reduces the amount of data to be converted and suppresses digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption related to these components (i.e., ADC & DSP represent 99% of the total power consumption). More broadly, the paradigm of CS allows one to question, or at least to extend, the Nyquist-Shannon sampling theory.

    This thesis shows developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed and non-uniform sampling. In particular, three research axes have been deepened, aiming to design proper architectures and acquisition processes, with their associated reconstruction techniques, taking advantage of image sparse representations. How can the on-chip implementation of compressed sensing relax sensor constraints, improving the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (i.e., spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved? A CMOS image sensor has been developed and manufactured during this Ph.D. to validate concepts such as High Dynamic Range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisitions in a compressed mode. On the other hand, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of Liquid Crystal Devices to allow hyperspectral imaging combined with spatial super-resolution.

    The conclusion of this study can be summarized as follows: CS must now be considered as a toolbox for more easily defining compromises between the different characteristics of the sensors: integration time, converter speed, dynamic range, resolution and digital processing resources. However, even if CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, involving massive computational resources compared to so-called conventional techniques. The application field is wide, implying that, for a targeted application, an accurate characterization of the constraints concerning both the sensor (encoder) and the decoder needs to be defined.
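    To make the block-wise compressive acquisition concrete, the sketch below simulates the kind of measurements an on-chip implementation can take (pseudo-random +/-1 pixel combinations per 8x8 block, realizable as in-pixel summations) and recovers each block by iterative soft thresholding in a 2-D DCT basis. Block size, measurement rate and the ISTA decoder are illustrative assumptions, not the architecture of the sensor actually manufactured in this thesis.

```python
# Block-wise compressive acquisition simulation: each 8x8 block is sensed
# with m pseudo-random +/-1 pixel combinations, then recovered by ISTA in a
# 2-D DCT synthesis basis. Illustrative sketch only.
import numpy as np
from scipy.fft import idct

B, m, k = 8, 24, 6                               # block size, measurements, sparsity
rng = np.random.default_rng(0)

def idct2(C):
    """Orthonormal 2-D inverse DCT of a coefficient block."""
    return idct(idct(C, axis=0, norm='ortho'), axis=1, norm='ortho')

# Synthesis basis: column j is the raveled 2-D DCT atom for coefficient j
Psi = np.stack([idct2(np.eye(B * B)[j].reshape(B, B)).ravel()
                for j in range(B * B)], axis=1)
Phi = rng.choice([-1.0, 1.0], size=(m, B * B)) / np.sqrt(m)   # sensing matrix
A = Phi @ Psi                                    # effective dictionary

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||y - A c||^2 + lam*||c||_1 by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = c + A.T @ (y - A @ c) / L
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return c

# Toy block that is exactly k-sparse in the 2-D DCT domain
c_true = np.zeros(B * B)
c_true[rng.choice(B * B, k, replace=False)] = rng.standard_normal(k)
block = (Psi @ c_true).reshape(B, B)
y = Phi @ block.ravel()                          # compressed block measurements
c_hat = ista(A, y, lam=0.02 * np.max(np.abs(A.T @ y)))
print(np.linalg.norm(block - (Psi @ c_hat).reshape(B, B)) / np.linalg.norm(block))
```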