    Extended Reconstruction Approaches for Saturation Measurements Using Reserved Quantization Indices


    Variational Bayesian algorithm for quantized compressed sensing

    Compressed sensing (CS) concerns the recovery of high-dimensional signals from their low-dimensional linear measurements under a sparsity prior, and digital quantization of the measurement data is inevitable in practical implementations of CS algorithms. In the existing literature, the quantization error is typically modeled as additive noise, and the multi-bit and 1-bit quantized CS problems are dealt with separately using different treatments and procedures. In this paper, a novel variational Bayesian inference based CS algorithm is presented, which unifies multi-bit and 1-bit CS processing and is applicable to noiseless and noisy environments as well as unsaturated and saturated quantizers. By decoupling the quantization error from the measurement noise, the quantization error is modeled as a random variable and estimated jointly with the signal being recovered. This novel characterization of the quantization error results in superior performance, as demonstrated by extensive simulations in comparison with state-of-the-art methods for both multi-bit and 1-bit CS problems. Comment: Accepted by IEEE Trans. Signal Processing. 10 pages, 6 figures.
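
    To make the setting concrete, the following is a minimal sketch of the quantized CS forward model the abstract describes: a sparse signal is measured linearly, corrupted by noise, and passed through a uniform multi-bit quantizer; the quantization error e = y - z is the bounded quantity that the paper's algorithm estimates jointly with the signal. All names and parameter values here are illustrative assumptions, not the paper's notation.

        # Quantized CS forward model (illustrative sketch in Python/NumPy).
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k, B = 256, 64, 8, 3                 # dimension, measurements, sparsity, bits

        x = np.zeros(n)                            # k-sparse signal
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

        A = rng.standard_normal((m, n)) / np.sqrt(m)
        z = A @ x + 0.01 * rng.standard_normal(m)  # noisy linear measurements

        delta = (z.max() - z.min()) / 2**B         # step of a uniform B-bit quantizer
        y = delta * (np.floor(z / delta) + 0.5)    # mid-rise quantization (unsaturated)
        e = y - z                                  # quantization error, |e| <= delta/2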

    Democracy in action: Quantization, saturation, and compressive sensing

    We explore and exploit a heretofore relatively unexplored hallmark of compressive sensing (CS): the fact that certain CS measurement systems are democratic, meaning that each measurement carries roughly the same amount of information about the signal being acquired. Using this property, we re-think how to quantize the compressive measurements. In Shannon-Nyquist sampling, we scale down the analog signal amplitude (and therefore increase the quantization error) to avoid gross saturation errors. In stark contrast, we demonstrate that a CS system achieves the best performance when operated at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
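
    The first recovery method (saturation rejection) is simple enough to sketch: drop every measurement that hit the saturation level and run an off-the-shelf sparse solver on the rest; democracy suggests the surviving rows still carry enough information. The snippet below pairs the rejection step with a plain orthogonal matching pursuit solver as a stand-in; the function names and the choice of OMP are assumptions for illustration, not the paper's exact pipeline.

        # Saturation rejection + greedy recovery (illustrative sketch).
        import numpy as np

        def reject_saturated(A, y, G):
            """Keep only measurements strictly below the saturation level G."""
            keep = np.abs(y) < G
            return A[keep], y[keep]

        def omp(A, y, k):
            """Plain orthogonal matching pursuit, k iterations."""
            r, idx = y.copy(), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
                xs, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
                r = y - A[:, idx] @ xs                        # update the residual
            x = np.zeros(A.shape[1])
            x[idx] = xs
            return x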

    Recovery of surface orientation from diffuse polarization

    When unpolarized light is reflected from a smooth dielectric surface, it becomes partially polarized. This is due to the orientation of dipoles induced in the reflecting medium and applies to both specular and diffuse reflection. This paper is concerned with exploiting polarization by surface reflection, using images of smooth dielectric objects, to recover surface normals and, hence, height. This paper presents the underlying physics of polarization by reflection, starting with the Fresnel equations. These equations are used to interpret images taken with a linear polarizer and digital camera, revealing the shape of the objects. Experimental results are presented that illustrate that the technique is accurate near object limbs, as the theory predicts, with less precise, but still useful, results elsewhere. A detailed analysis of the accuracy of the technique for a variety of materials is presented. A method for estimating refractive indices using a laser and linear polarizer is also given.
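
    The link from the Fresnel equations to shape runs through the degree of polarization of diffusely reflected light, which depends only on the refractive index and the zenith angle of the surface normal; inverting that relation per pixel yields the zenith angle. Below is a sketch of the commonly stated diffuse-polarization formula; the function name and the default index n = 1.5 are illustrative assumptions, and the exact expression should be checked against the paper.

        # Degree of diffuse polarization vs. zenith angle (illustrative sketch).
        import numpy as np

        def diffuse_polarization_degree(theta, n=1.5):
            """theta: zenith angle in radians; n: refractive index."""
            s2 = np.sin(theta) ** 2
            num = (n - 1.0 / n) ** 2 * s2
            den = (2.0 + 2.0 * n**2 - (n + 1.0 / n) ** 2 * s2
                   + 4.0 * np.cos(theta) * np.sqrt(n**2 - s2))
            return num / den   # 0 at normal incidence, rising toward grazing angles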

    Simulation Studies of Digital Filters for the Phase-II Upgrade of the Liquid-Argon Calorimeters of the ATLAS Detector at the High-Luminosity LHC

    The Large Hadron Collider and the ATLAS detector are undergoing a comprehensive upgrade split into multiple phases. This effort also affects the liquid-argon calorimeters, whose main readout electronics will be replaced completely during the final phase. The electronics consist of an analog and a digital portion: the former amplifies the signal and shapes it to facilitate sampling, the latter executes an energy reconstruction algorithm. Both must be improved during the upgrade so that the detector can accurately reconstruct interesting collision events and efficiently suppress uninteresting ones. In this thesis, simulation studies are presented that optimize both the analog and the digital readout of the liquid-argon calorimeters. The simulation is verified using calibration data measured during Run 2 of the ATLAS detector. The influence of several parameters of the analog shaping stage on the energy resolution is analyzed, and the utility of an increased signal sampling rate of 80 MHz is investigated. Furthermore, a number of linear and non-linear energy reconstruction algorithms are reviewed and the performance of a selection of them is compared. It is demonstrated that increasing the order of the Optimal Filter, the algorithm currently in use, improves the energy resolution by 2 to 3 % in all detector regions. The Wiener filter with forward correction, a non-linear algorithm, gives an improvement of up to 10 % in some regions, but degrades the resolution in others. A link between this behavior and the probability of falsely detected calorimeter hits is shown and possible solutions are discussed.
    Contents:
    1 Introduction
    2 An Overview of High-Energy Particle Physics
      2.1 The Standard Model of Particle Physics. 2.2 Verification of the Standard Model. 2.3 Beyond the Standard Model.
    3 LHC, ATLAS, and the Liquid-Argon Calorimeters
      3.1 The Large Hadron Collider. 3.2 The ATLAS Detector. 3.3 The ATLAS Liquid-Argon Calorimeters.
    4 Upgrades to the ATLAS Liquid-Argon Calorimeters
      4.1 Physics Goals. 4.2 Phase-I Upgrade. 4.3 Phase-II Upgrade.
    5 Noise Suppression With Digital Filters
      5.1 Terminology. 5.2 Digital Filters. 5.3 Wiener Filter. 5.4 Matched Wiener Filter. 5.5 Matched Wiener Filter Without Bias. 5.6 Timing Reconstruction, Optimal Filtering, and Selection Criteria. 5.7 Forward Correction. 5.8 Sparse Signal Restoration. 5.9 Artificial Neural Networks.
    6 Simulation of the ATLAS Liquid-Argon Calorimeter Readout Electronics
      6.1 AREUS. 6.2 Hit Generation and Sampling. 6.3 Pulse Shapes. 6.4 Thermal Noise. 6.5 Quantization. 6.6 Digital Filters. 6.7 Statistical Analysis.
    7 Results of the Readout Electronics Simulation Studies
      7.1 Statistical Treatment. 7.2 Simulation Verification Using Run-2 Data. 7.3 Dependence of the Noise on the Shaping Time. 7.4 The Analog Readout Electronics and the ADC. 7.5 The Optimal Filter (OF). 7.6 The Wiener Filter. 7.7 The Wiener Filter with Forward Correction (WFFC). 7.8 Final Comparison and Conclusions.
    8 Conclusions and Outlook
    Appendices
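
    The Optimal Filter evaluated in the thesis admits a compact description: the energy is reconstructed as a weighted sum of the digitized pulse samples, with weights chosen to minimize the noise variance subject to unit response to the known pulse shape. The sketch below shows the single-constraint form of that calculation; it is a textbook illustration under assumed notation (pulse shape g, noise autocorrelation R), not the exact multi-constraint filter studied in the thesis.

        # Optimal-filter coefficients, single-constraint form (illustrative).
        import numpy as np

        def of_coefficients(g, R):
            """g: normalized pulse-shape samples; R: noise autocorrelation matrix."""
            w = np.linalg.solve(R, g)    # w = R^{-1} g minimizes a' R a up to scale
            return w / (g @ w)           # normalize so that a . g = 1 (unit gain)

        def reconstruct_energy(samples, a):
            """Energy estimate E = sum_i a_i * s_i over one sample window."""
            return a @ samples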

    Finite-horizon optimal control of linear and a class of nonlinear systems

    Traditionally, optimal control of dynamical systems with known system dynamics is obtained in a backward-in-time and offline manner by using either the Riccati or the Hamilton-Jacobi-Bellman (HJB) equation. In contrast, in this dissertation, finite-horizon optimal regulation is investigated for both linear and nonlinear systems in a forward-in-time manner when the system dynamics are uncertain. Value and policy iterations are not used; instead, the value function (or Q-function for linear systems) and the control input are updated once per sampling interval, consistent with standard adaptive control. First, the optimal adaptive control of linear discrete-time systems with unknown system dynamics is presented in Paper I by using Q-learning and the Bellman equation while satisfying the terminal constraint. A novel update law that uses history information of the cost-to-go is derived. Paper II considers the design of the linear quadratic regulator in the presence of state and input quantization. Quantization errors are eliminated via a dynamic quantizer design, and the parameter update law from Paper I is redesigned. Furthermore, an optimal adaptive state feedback controller is developed in Paper III for general nonlinear discrete-time systems in affine form without knowledge of the system dynamics. In Paper IV, an NN-based observer is proposed to reconstruct the state vector and identify the dynamics, so that the control scheme from Paper III is extended to output feedback. Finally, the optimal regulation of quantized nonlinear systems with input constraints is considered in Paper V by introducing a non-quadratic cost functional. Closed-loop stability is demonstrated for all the controller designs developed in this dissertation by using Lyapunov analysis, while all the proposed schemes function in an online and forward-in-time manner so that they are practically viable.
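
    The Q-learning route for the linear case rests on a standard structural fact: for discrete-time LQR the Q-function is quadratic, Q(x, u) = z' H z with z = [x; u], and the greedy control u = -inv(H_uu) H_ux x falls out of the H matrix alone, so estimating H from data yields the controller without knowing the plant matrices. The sketch below builds H from a known model only to expose that structure (the dissertation instead identifies it online, with finite-horizon, terminal-constrained updates); the symbols are standard LQR notation, not the papers' exact formulation.

        # Q-function matrix and greedy gain for discrete-time LQR (illustrative).
        import numpy as np

        def q_matrix(A, B, Q, R, P):
            """Q(x,u) = z' H z with z = [x; u], one backward step from cost-to-go P."""
            return np.block([[Q + A.T @ P @ A, A.T @ P @ B],
                             [B.T @ P @ A,     R + B.T @ P @ B]])

        def greedy_gain(H, n):
            """Feedback gain K (u = -K x) read off an estimated Q-matrix."""
            Hux, Huu = H[n:, :n], H[n:, n:]
            return np.linalg.solve(Huu, Hux)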

    Sensing and Compression Techniques for Environmental and Human Sensing Applications

    In this doctoral thesis, we devise and evaluate a variety of lossy compression schemes for Internet of Things (IoT) devices such as those utilized in environmental wireless sensor networks (WSNs) and body sensor networks (BSNs). We are especially concerned with the efficient acquisition of the data sensed by these systems, and to this end we advocate the use of joint (lossy) compression and transmission techniques. Environmental WSNs are considered first. For these, we present an original compressive sensing (CS) approach for the spatio-temporal compression of data. In detail, we consider temporal compression schemes based on linear approximations as well as Fourier transforms, whereas spatial and/or temporal dynamics are exploited through compression algorithms based on distributed source coding (DSC) and several algorithms based on compressive sensing (CS). To the best of our knowledge, this is the first work presenting a systematic performance evaluation of these (different) lossy compression approaches. The selected algorithms are framed within the same system model, and a comparative performance assessment is carried out, evaluating their energy consumption vs. the attainable compression ratio. As a further main contribution of this thesis, we design and validate a novel CS-based compression scheme, termed covariogram-based compressive sensing (CB-CS), which combines a new sampling mechanism with an original covariogram-based approach for the online estimation of the covariance structure of the signal. As a second main research topic, we focus on modern wearable IoT devices which enable the monitoring of vital parameters such as heart or respiratory rates (RESP), electrocardiography (ECG), and photo-plethysmographic (PPG) signals within e-health applications. These devices are battery operated and communicate the vital signs they gather through a wireless communication interface. A common issue of this technology is that signal transmission is often power-demanding, and this poses serious limitations to the continuous monitoring of biometric signals. To ameliorate this, we advocate the use of lossy signal compression at the source: this considerably reduces the size of the data that has to be sent to the acquisition point, in turn boosting the battery life of the wearables and allowing for fine-grained and long-term monitoring. Considering one-dimensional biosignals such as ECG, RESP and PPG, which are often available from commercial wearable devices, we first provide a thorough review of existing compression algorithms. We then present novel approaches based on online dictionaries, elucidating their operating principles and providing a quantitative assessment of the compression, reconstruction and energy consumption performance of all schemes. As part of this first investigation, dictionaries are built using a suboptimal but lightweight, online and best-effort algorithm. Surprisingly, the obtained compression scheme is found to be very effective both in terms of compression efficiency and reconstruction accuracy at the receiver. This approach is however not yet amenable to practical implementation, as its memory usage is rather high. Also, our systematic performance assessment reveals that the most efficient compression algorithms allow reductions in the signal size of up to 100 times, which entail similar reductions in the energy demand, while still keeping the reconstruction error within 4 % of the peak-to-peak signal amplitude.
    Based on what we have learned from this first comparison, we finally propose a new subject-specific compression technique called SURF (Subject-adaptive Unsupervised ECG compressor for weaRable Fitness monitors). In SURF, dictionaries are learned and maintained using suitable neural network structures. Specifically, learning is achieved through the use of neural maps such as self-organizing maps and growing neural gas networks, in a totally unsupervised manner, adapting the dictionaries to the signal statistics of the wearer. As our results show, SURF: i) reaches high compression efficiencies (reduction in the signal size of up to 96 times), ii) allows for reconstruction errors well below 4 % (peak-to-peak RMSE; errors of 2 % are generally achievable), iii) gracefully adapts to changing signal statistics due to switching to a new subject or changing their activity, iv) has low memory requirements (lower than 50 kbytes) and v) allows for further reduction in the total energy consumption (processing plus transmission). These facts make SURF a very promising algorithm, delivering the best performance among all the solutions proposed so far.
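
    The dictionary idea underlying these schemes can be stated in a few lines: the transmitter and receiver share a codebook of representative signal windows, and each new window is compressed down to the index of its closest codeword. The sketch below shows only this encode/decode step; codebook learning via self-organizing maps or growing neural gas, which is where SURF's subject adaptation happens, is omitted, and all names are illustrative assumptions.

        # Codebook (dictionary) compression of signal windows (illustrative).
        import numpy as np

        def compress_window(w, D):
            """w: window of samples; D: (num_codewords, window_len) shared codebook."""
            return int(np.argmin(np.linalg.norm(D - w, axis=1)))  # transmit index only

        def decompress_window(i, D):
            """Receiver reconstructs the window from its own copy of the codebook."""
            return D[i]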

    Compressed Sensing in Resource-Constrained Environments: From Sensing Mechanism Design to Recovery Algorithms

    Compressed Sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. CS is promising for environments where the signal acquisition process is extremely difficult or costly, e.g., a resource-constrained environment like the smartphone platform, or a band-limited environment like visual sensor networks (VSNs). These platforms pose several sensing challenges, including, for example, the need for active user involvement, computational and storage limitations, and lower transmission capabilities. This dissertation focuses on the study of CS in resource-constrained environments. First, we address how to design sensing mechanisms that better adapt to the resource-limited smartphone platform. We propose the compressed phone sensing (CPS) framework, in which two challenging issues are studied: the energy drain of continuous sensing, which may impede the normal functionality of the smartphone, and the requirement of active user input for data collection, which may place a high burden on the user. Second, we propose a CS reconstruction algorithm to be used in VSNs for the recovery of frames/images. An efficient algorithm, NonLocal Douglas-Rachford (NLDR), is developed. NLDR takes advantage of self-similarity in images using nonlocal means (NL) filtering. We further formulate the nonlocal estimation as a low-rank matrix approximation problem and solve the constrained optimization problem using the Douglas-Rachford splitting method. Third, we extend the NLDR algorithm to surveillance video processing in VSNs and propose the recursive Low-rank and Sparse estimation through Douglas-Rachford splitting (rLSDR) method, which decomposes each video frame into a low-rank background component and a sparse component that corresponds to the moving object. The spatial and temporal low-rank features of the video frame, e.g., the nonlocal similar patches within a single video frame and the low-rank background component residing in multiple frames, are successfully exploited.
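
    The background/foreground split at the heart of rLSDR is a low-rank plus sparse decomposition of a matrix of stacked frames. The sketch below solves it with a basic alternating proximal scheme (singular value thresholding for the low-rank part, soft thresholding for the sparse part); this is a generic stand-in for illustration, not the paper's recursive Douglas-Rachford algorithm, and the parameter values are assumptions.

        # Low-rank (background) + sparse (foreground) split (illustrative).
        import numpy as np

        def svt(X, tau):
            """Singular value thresholding: proximal operator of the nuclear norm."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def soft(X, tau):
            """Soft thresholding: proximal operator of the l1 norm."""
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def lowrank_sparse(M, lam=0.05, tau=1.0, iters=50):
            """M: (pixels, frames) matrix of vectorized video frames."""
            L = np.zeros_like(M)
            S = np.zeros_like(M)
            for _ in range(iters):
                L = svt(M - S, tau)    # background absorbs the low-rank structure
                S = soft(M - L, lam)   # moving objects remain as sparse residual
            return L, S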

    COMPRESSIVE IMAGING AND DUAL MOIRÉ LASER INTERFEROMETER AS METROLOGY TOOLS

    Metrology is the science of measurement and deals with measuring different physical aspects of objects. This research focuses on two basic problems that metrologists encounter. The first problem is the trade-off between the range of measurement and the corresponding resolution: measurement of the physical parameters of a large object or scene is accompanied by the loss of detailed information about small regions of the object. Indeed, instruments and techniques that perform coarse measurements differ from those that make fine measurements. This problem persists in the field of surface metrology, which deals with accurate measurement and detailed analysis of surfaces. For example, laser interferometry is used for fine measurement (at the nanometer scale), while measuring the form of an object, which lies in the field of coarse measurement, calls for a different technique such as the moiré technique. We introduce a new technique to combine measurements from instruments with finer resolution and smaller measurement range with those from instruments with coarser resolution and larger measurement range: we first measure the form of the object with coarse measurement techniques and then make fine measurements of features in regions of interest. The second problem is measurement conditions that lead to difficulties in measurement. These conditions include low-light conditions, a large range of intensity variation, hyperspectral measurement, etc. Under low-light conditions there is not enough light for the detector to sense the object, which results in poor measurements. A large range of intensity variation results in a measurement with some saturated regions on the camera as well as some dark regions. We use compressive-sampling-based imaging systems to address these problems. Single-pixel compressive imaging uses a single detector instead of an array of detectors and reconstructs a complete image after several measurements. In this research we examined compressive imaging for different applications including low-light imaging, high-dynamic-range imaging and hyperspectral imaging.
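
    The single-pixel acquisition loop is brief enough to sketch: the scene is modulated by a sequence of random binary patterns (e.g., on a digital micromirror device) and one photodetector records a single inner product per pattern; a sparse solver then reconstructs the image from far fewer measurements than pixels. Everything below (sizes, pattern ensemble) is an illustrative assumption, and the reconstruction step is left to any standard CS solver.

        # Single-pixel compressive measurement model (illustrative sketch).
        import numpy as np

        rng = np.random.default_rng(1)
        h = w = 32
        x = rng.random(h * w)                  # stand-in scene, vectorized
        m = 300                                # measurements << h*w pixels

        Phi = rng.integers(0, 2, size=(m, h * w)).astype(float)  # binary patterns
        y = Phi @ x                            # one detector reading per pattern
        # Reconstruction: solve y = Phi @ x for sparse x (e.g., l1 minimization).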