25 research outputs found

    Image Restoration

    This book presents a sample of recent contributions from researchers around the world in the field of image restoration. It consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics arising from the emergence of original imaging devices. These devices pose genuinely challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions, closely related to the world we interact with.

    Hardware-Conscious Wireless Communication System Design

    The work at hand is a selection of topics in efficient wireless communication system design, with the topics logically divided into two groups.

    One group can be described as hardware designs conscious of their own possibilities and limitations. In other words, it is about hardware that chooses its configuration and properties depending on the performance that needs to be delivered and on the influence of external factors, with the goal of keeping energy consumption as low as possible. Design parameters that trade off power against complexity are identified for analog, mixed-signal, and digital circuits, and the implications of these tradeoffs are analyzed in detail. An analog front end and an LDPC channel decoder that adapt their parameters to the environment (e.g. a power level fluctuating due to fading) are proposed, and it is analyzed how much power/energy these environment-adaptive structures save compared to non-adaptive designs made for the worst-case scenario. Additionally, the impact of ADC bit resolution on the energy efficiency of a massive MIMO system is examined in detail, with the goal of finding the bit resolutions that maximize energy efficiency under various system setups.

    In the other group of themes, one can recognize systems whose architect was conscious of fundamental limitations stemming from the hardware. Put another way, in these designs there is no attempt to tweak or tune the hardware; instead, the system is designed so as to work around an existing and unchangeable hardware limitation. As a workaround for the problematic centralized topology, a massive MIMO base station based on a daisy-chain topology is proposed, and a signal processing method tailored to the daisy-chain setup is designed. In another example, a large group of cooperating relays is split into several smaller groups, each performing cooperative relaying independently of the others. As cooperation consumes resources (such as bandwidth), splitting the system into smaller, independent cooperative parts helps save resources and is again an example of a workaround for an inherent limitation.

    From the analyses performed in this thesis, promising observations about hardware consciousness can be made. Adapting the structure of a hardware block to the environment can bring massive savings in energy, and simple workarounds prove to perform almost as well as the inherently limited designs, with the limitation successfully bypassed. As a general observation, it can be concluded that hardware consciousness pays off.
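The ADC-resolution-versus-energy-efficiency tradeoff described above can be illustrated with a toy model. Everything below is an assumption made for illustration, not the thesis's actual system model: the rate uses Shannon capacity with quantization treated as additive noise (distortion factor 2^(-2b)), and ADC power follows a Walden-style figure of merit, P_adc = FOM * 2^bits * f_s per converter.

```python
import math

def energy_efficiency(bits, snr_db=10.0, bw_hz=20e6, n_adc=100,
                      fom_j=65e-15, p_fixed_w=0.5):
    """Toy model: achievable rate per Watt as a function of ADC resolution.

    All constants (SNR, bandwidth, ADC count, figure of merit, fixed power)
    are illustrative placeholders, not values from the thesis.
    """
    snr = 10.0 ** (snr_db / 10.0)
    rho = 2.0 ** (-2 * bits)                      # quantization distortion factor
    snr_eff = (1.0 - rho) * snr / (1.0 + rho * snr)
    rate = bw_hz * math.log2(1.0 + snr_eff)       # bit/s
    p_adc = n_adc * fom_j * (2.0 ** bits) * (2.0 * bw_hz)
    return rate / (p_fixed_w + p_adc)             # bit/Joule

# Sweep bit resolutions to find the energy-efficiency-maximizing one.
best_bits = max(range(1, 13), key=energy_efficiency)
```

Even this crude model reproduces the qualitative finding: very low resolutions sacrifice too much rate, very high ones burn too much ADC power, and an intermediate resolution maximizes bits per Joule.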

    On implementation aspects of decode and forward and compress and forward relay protocols

    In this work, the common relay protocols Decode-and-Forward (DF) and Compress-and-Forward (CF) are investigated from a practical point of view. This involves, on the one hand, the impact of imperfections such as channel and carrier phase estimation errors and, on the other hand, the question of how to implement relay-protocol-specific signal processing, such as quantization for CF, which is modeled in information theory simply by additive quantizer noise. To evaluate the performance, achievable rates are determined either numerically with the help of the Max-Flow Min-Cut theorem or by link-level simulations.
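The additive-quantizer-noise model mentioned above can be checked with a minimal sketch: quantize uniformly distributed samples with a mid-rise uniform quantizer (all parameters hypothetical) and compare the empirical error power against the classic step^2/12 prediction of the additive-noise model.

```python
import math
import random

def uniform_quantize(x, bits, x_max=1.0):
    """Mid-rise uniform quantizer on [-x_max, x_max] (illustrative only)."""
    step = 2.0 * x_max / (2 ** bits)
    q = math.floor(x / step) * step + step / 2.0
    # clamp to the outermost reconstruction levels
    return max(-x_max + step / 2.0, min(x_max - step / 2.0, q))

random.seed(0)
samples = [random.uniform(-1.0, 1.0) for _ in range(20000)]
bits = 6
errors = [uniform_quantize(s, bits) - s for s in samples]
mse = sum(e * e for e in errors) / len(errors)
predicted = (2.0 / 2 ** bits) ** 2 / 12.0   # additive-noise model: step^2 / 12
```

For inputs that stay within the quantizer range, the measured error power lands very close to step^2/12, which is why information-theoretic treatments of CF can abstract the quantizer away as an additive noise source.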

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It presents the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book deals with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation effects such as transistor aging, process variation, temperature effects, and soft errors. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are pro-actively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    NASA Tech Briefs, September 2007

    Topics covered include: Rapid Fabrication of Carbide Matrix/Carbon Fiber Composites; Coating Thermoelectric Devices To Suppress Sublimation; Ultrahigh-Temperature Ceramics; Improved C/SiC Ceramic Composites Made Using PIP; Coating Carbon Fibers With Platinum; Two-Band, Low-Loss Microwave Window; MCM Polarimetric Radiometers for Planar Arrays; Aperture-Coupled Thin-Membrane L-Band Antenna; WGM-Based Photonic Local Oscillators and Modulators; Focal-Plane Arrays of Quantum-Dot Infrared Photodetectors; Laser Range and Bearing Finder With No Moving Parts; Microrectenna: A Terahertz Antenna and Rectifier on a Chip; Miniature L-Band Radar Transceiver; Robotic Vision-Based Localization in an Urban Environment; Programs for Testing an SSME-Monitoring System; Cathodoluminescent Source of Intense White Light; Displaying and Analyzing Antenna Radiation Patterns; Payload Operations Support Team Tools; Space-Shuttle Emulator Software; Soft Real-Time PID Control on a VME Computer; Analyzing Radio-Frequency Coverage for the ISS; Nanorod-Based Fast-Response Pressure-Sensitive Paints; Capacitors Would Help Protect Against Hypervelocity Impacts; Diaphragm Pump With Resonant Piezoelectric Drive; Improved Quick-Release Pin Mechanism; Designing Rolling-Element Bearings; Reverse-Tangent Injection in a Centrifugal Compressor; Inertial Measurements for Aero-assisted Navigation (IMAN); Analysis of Complex Valve and Feed Systems; Improved Path Planning Onboard the Mars Exploration Rovers; Robust, Flexible Motion Control for the Mars Explorer Rovers; Solar Sail Spaceflight Simulation; Fluorine-Based DRIE of Fused Silica; Mechanical Alloying for Making Thermoelectric Compounds; Process for High-Rate Fabrication of Alumina Nanotemplates; Electroform/Plasma-Spray Laminates for X-Ray Optics; An Automated Flying-Insect Detection System; Calligraphic Poling of Ferroelectric Material; Blackbody Cavity for Calibrations at 200 to 273 K; KML Super Overlay to WMS Translator; High-Performance Tiled WMS and KML Web Server; Modeling of Radiative Transfer in Protostellar Disks; Composite Pulse Tube; Photometric Calibration of Consumer Video Cameras; Criterion for Identifying Vortices in High-Pressure Flows; Amplified Thermionic Cooling Using Arrays of Nanowires; Delamination-Indicating Thermal Barrier Coatings; Preventing Raman Lasing in High-Q WGM Resonators; Procedures for Tuning a Multiresonator Photonic Filter; Robust Mapping of Incoherent Fiber-Optic Bundles; Extended-Range Ultrarefractive 1D Photonic Crystal Prisms; Rapid Analysis of Mass Distribution of Radiation Shielding; Modeling Magnetic Properties in EZTB; Deep Space Network Antenna Logic Controller; Modeling Carbon and Hydrocarbon Molecular Structures in EZTB; BigView Image Viewing on Tiled Displays; and Imaging Sensor Flight and Test Equipment Software.

    Rake, Peel, Sketch: The Signal Processing Pipeline Revisited

    The prototypical signal processing pipeline can be divided into four blocks: representation of the signal in a basis suitable for processing; enhancement of the meaningful part of the signal and noise reduction; estimation of important statistical properties of the signal; and adaptive processing to track and adapt to changes in the signal statistics. This thesis revisits each of these blocks and proposes new algorithms, borrowing ideas from information theory, theoretical computer science, and communications.

    First, we revisit the Walsh-Hadamard transform (WHT) for the case of a signal sparse in the transformed domain, namely one that has only K ≪ N non-zero coefficients. We show that an efficient algorithm exists that can compute these coefficients in O(K log2(K) log2(N/K)) operations using only O(K log2(N/K)) samples. This algorithm relies on a fast hashing procedure that computes small linear combinations of transformed-domain coefficients. A bipartite graph is formed with linear combinations on one side and non-zero coefficients on the other. A peeling decoder is then used to recover the non-zero coefficients one by one. A detailed analysis of the algorithm, based on error-correcting codes over the binary erasure channel, is given.

    The second chapter is about beamforming. Inspired by the rake receiver from wireless communications, we recognize that echoes in a room are an important source of extra signal diversity. We extend several classic beamforming algorithms to take advantage of echoes and also propose new optimal formulations, in both the time and frequency domains. We show theoretically and in numerical simulations that the signal-to-interference-and-noise ratio increases proportionally to the number of echoes used. Finally, beyond objective measures, we show that echoes also directly improve speech intelligibility as measured by the perceptual evaluation of speech quality (PESQ) metric.

    Next, we attack the problem of direction of arrival of acoustic sources, to which we apply a robust finite-rate-of-innovation reconstruction framework. FRIDA, the resulting algorithm, exploits wideband information coherently, works at very low signal-to-noise ratio, and can resolve very close sources. The algorithm can use either raw microphone signals or their cross-correlations. While the former lets us work with correlated sources, the latter creates a quadratic number of measurements that makes it possible to locate many sources with few microphones. Thorough experiments on simulated and recorded data show that FRIDA compares favorably with the state of the art.

    We continue by revisiting the classic recursive least squares (RLS) adaptive filter with ideas borrowed from recent results on sketching least-squares problems. The exact update of RLS is replaced by a few steps of conjugate gradient descent. We then propose two different preconditioners, obtained by sketching the data, to accelerate the convergence of the gradient descent. Experiments on artificial as well as natural signals show that the proposed algorithm performs very close to RLS at a lower computational burden.

    The fifth and final chapter is dedicated to the software and hardware tools developed for this thesis. We describe the pyroomacoustics Python package, which contains routines for the evaluation of audio processing algorithms and reference implementations of popular algorithms. We then give an overview of the microphone arrays developed.
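As context for the sparse-WHT contribution, the dense baseline it improves upon can be sketched. This is the textbook O(N log N) fast Walsh-Hadamard butterfly, not the thesis's O(K log2(K) log2(N/K)) sparse algorithm:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, natural/Hadamard order).

    Standard O(N log N) butterfly over a length-N power-of-two sequence.
    Applying it twice returns N times the input, since H * H = N * I.
    """
    x = list(x)
    n = len(x)
    assert n & (n - 1) == 0 and n > 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # 2x2 Hadamard butterfly
        h *= 2
    return x
```

When the output has only K non-zero entries, the thesis's hashing-and-peeling approach avoids touching all N coefficients, which is where the sublinear sample and time complexity comes from.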

    Performance evaluation of T-transform based OFDM in underwater acoustic channels

    PhD Thesis. Recently there has been an increasing trend towards the implementation of orthogonal frequency division multiplexing (OFDM) based multicarrier communication systems in underwater acoustic (UWA) communications. By dividing the available bandwidth into multiple sub-bands, OFDM systems enable reliable transmission over long-range dispersive channels. However, OFDM is prone to impairments such as severe frequency-selective fading, motion-induced Doppler shift, and a high peak-to-average power ratio (PAPR). In order to fully exploit the potential of OFDM in UWA channels, these issues have received a great deal of attention in recent research. With the aim of improving OFDM's performance in UWA channels, a T-transform based OFDM system is introduced, using a low-computational-complexity T-transform that combines the Walsh-Hadamard transform (WHT) and the discrete Fourier transform (DFT) into a single fast orthonormal unitary transform. Through real-world experiments, a performance comparison between the proposed T-OFDM system and a conventional OFDM system revealed that T-OFDM performs better than OFDM at high code rates in frequency-selective fading channels. Furthermore, an investigation of different equalizer techniques showed that the limitations of zero-forcing (ZF) equalizers affect T-OFDM more severely (one bad equalizer coefficient affects all symbols), so a modified ZF equalizer with outlier detection was developed, which provides a major performance gain without excessive computational load. Lastly, an investigation of PAPR reduction methods showed that T-OFDM has an inherently lower PAPR and is also far more tolerant of the distortion introduced by simple clipping. As a result, a lower PAPR can be achieved with minimal overhead, outperforming OFDM for a given power limit at the transmitter.
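The PAPR metric at the center of the last point can be made concrete with a short sketch: build a QPSK-loaded OFDM symbol via an IDFT and measure its peak-to-average power ratio. The subcarrier count and constellation are arbitrary choices for illustration; a T-OFDM chain as described above would additionally apply the WHT before the IDFT.

```python
import cmath
import math
import random

def papr_db(symbol):
    """Peak-to-average power ratio of a complex baseband symbol, in dB."""
    powers = [abs(v) ** 2 for v in symbol]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def ofdm_symbol(data):
    """Time-domain OFDM symbol: IDFT of the frequency-domain subcarrier data."""
    n = len(data)
    return [sum(d * cmath.exp(2j * cmath.pi * k * t / n)
                for k, d in enumerate(data)) / n
            for t in range(n)]

random.seed(0)
# 64 random QPSK subcarriers (illustrative parameters)
qpsk = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(64)]
papr = papr_db(ofdm_symbol(qpsk))
```

With 64 subcarriers the PAPR of a random symbol typically lands several dB above that of a constant-envelope signal, which is exactly the headroom that a power-limited underwater transmitter must budget for.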

    Heterogeneous Reconfigurable Fabrics for In-circuit Training and Evaluation of Neuromorphic Architectures

    A heterogeneous device-technology reconfigurable logic fabric is proposed which leverages the complementary advantages of magnetic random access memory (MRAM)-based look-up tables (LUTs) to realize sequential logic circuits, along with conventional SRAM-based LUTs to realize combinational logic paths. The resulting Hybrid Spin/Charge FPGA (HSC-FPGA), using magnetic tunnel junction (MTJ) devices within this topology, demonstrates commensurate reductions in area and power consumption over fabrics having LUTs constructed with either technology alone. Herein, a hierarchical top-down design approach is used to develop the HSC-FPGA, starting from the configurable logic block (CLB) and slice structures down to the LUT circuits and the corresponding device fabrication paradigms. This facilitates a novel architectural approach to reduce leakage energy, to minimize communication occurrence and energy cost by eliminating unnecessary data transfers, and to support auto-tuning for resilience. Furthermore, the HSC-FPGA enables new advantages of technology co-design, which trades off alternative mappings between emerging devices and transistors at runtime by allowing dynamic remapping to adaptively leverage the intrinsic computing features of each device technology. The HSC-FPGA offers a platform for fine-grained logic-in-memory architectures and runtime-adaptive hardware.

    An orthogonal dimension of fabric heterogeneity is non-determinism, enabled by either low-voltage CMOS or probabilistic emerging devices. It can be realized using probabilistic devices within a reconfigurable network to blend deterministic and probabilistic computational models. Herein, the probabilistic spin logic p-bit device is considered as a fabric element comprising a crossbar-structured weighted array. The programmability of the resistive network interconnecting the p-bit devices can be achieved by modifying the resistive states of the array's weighted connections. Thus, the programmable weighted array forms a CLB-scale macro co-processing element with bitstream programmability. This allows field programmability for a wide range of classification problems and recognition tasks, permitting fluid mappings between probabilistic and deterministic computing approaches. In particular, a Deep Belief Network (DBN) is implemented in the field using recurrent layers of co-processing elements to form an n x m1 x m2 x ... x mi weighted array as a configurable hardware circuit, with an n-input layer followed by i ≥ 1 hidden layers. As neuromorphic architectures using post-CMOS devices increase in capability and network size, the utility and benefits of reconfigurable fabrics of neuromorphic modules can be anticipated to continue to accelerate.
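The stochastic update at the heart of such a p-bit network can be sketched in software. This is the generic probabilistic-spin-logic update rule (each unit flips to +1 with a sigmoidal probability of its weighted input), not the specific HSC-FPGA crossbar implementation, and the weights, biases, and network size below are hypothetical:

```python
import math
import random

def pbit_step(m, W, h, beta=1.0, rng=random):
    """One asynchronous sweep over a network of binary p-bit units.

    Each unit m[i] in {-1, +1} is resampled so that
    P(m[i] = +1) = (1 + tanh(beta * I_i)) / 2, where I_i is the
    weighted input from the other units plus a bias term.
    """
    m = list(m)
    for i in range(len(m)):
        I_i = beta * (sum(W[i][j] * m[j] for j in range(len(m))) + h[i])
        m[i] = 1 if rng.uniform(-1.0, 1.0) < math.tanh(I_i) else -1
    return m

random.seed(1)
# hypothetical 3-unit network with nearest-neighbor couplings and biases
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
h = [0.5, 0.0, -0.5]
state = [1, -1, 1]
for _ in range(100):   # let the network fluctuate through its state space
    state = pbit_step(state, W, h)
```

In a crossbar realization as described above, the inner weighted sum would be computed in the analog domain by the resistive array, with the bitstream reprogramming W to map different classification tasks onto the same fabric.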