    Performance Analysis for Time-of-Arrival Estimation with Oversampled Low-Complexity 1-bit A/D Conversion

    Analog-to-digital (A/D) conversion plays a crucial role in the design of energy-efficient and fast signal processing systems. As its complexity grows exponentially with the number of output bits, significant savings are possible by resorting to the minimum resolution of a single bit. However, the nonlinearity introduced by the A/D converter then results in a pronounced performance loss, in particular when the receiver is operated outside the low signal-to-noise ratio (SNR) regime. By trading A/D resolution for a moderately faster sampling rate, we show that for time-of-arrival (TOA) estimation at any SNR level it is possible to obtain a low-complexity 1-bit receive system whose performance degradation is smaller than the classical low-SNR hard-limiting loss of 2/π (−1.96 dB). Key to this result is the use of a lower bound on the Fisher information matrix, which allows us to approximate the estimation performance of coarsely quantized receivers with correlated noise models in a pessimistic way.
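
    As a quick check on the quoted figure, the classical low-SNR hard-limiting loss of 2/π can be expressed in decibels with a one-line computation (a minimal Python sketch of the arithmetic only, not of the paper's estimator):

        import math

        # Low-SNR SNR loss factor of a 1-bit hard limiter (the classical 2/pi result).
        loss_factor = 2 / math.pi

        # In decibels: 10*log10(2/pi) ≈ -1.96 dB, matching the figure quoted above.
        loss_db = 10 * math.log10(loss_factor)
        print(f"hard-limiting loss: {loss_db:.2f} dB")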

    Error Correcting Codes for Distributed Control

    The problem of stabilizing an unstable plant over a noisy communication link is an increasingly important one arising in applications of networked control systems. Although the work of Schulman and Sahai over the past two decades, and their development of the notions of "tree codes" and "anytime capacity", provides the theoretical framework for studying such problems, there has been scant practical progress in this area because explicit constructions of tree codes with efficient encoding and decoding did not exist. To stabilize an unstable plant driven by bounded noise over a noisy channel, one needs real-time encoding, real-time decoding, and a reliability that increases exponentially with decoding delay, which is what tree codes guarantee. We prove that linear tree codes occur with high probability and, for erasure channels, give an explicit construction with an expected decoding complexity that is constant per time instant. We give novel sufficient conditions on the rate and reliability required of the tree codes to stabilize vector plants and argue that they are asymptotically tight. This work takes an important step towards controlling plants over noisy channels, and we demonstrate the efficacy of the method through several examples. (Comment: 39 pages)
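
    To make the "linear tree code" idea concrete, here is a toy causal linear encoder over GF(2): each output bit is a fixed random linear combination of all input bits seen so far, so encoding is real-time, and two input sequences that first differ at time t produce codewords that can only start to differ from time t onward. This is an illustrative sketch under choices of my own (rate, coefficient distribution), not the construction from the paper:

        import random

        random.seed(0)

        class ToyCausalLinearEncoder:
            """Causal linear map over GF(2): output bit t depends only on inputs 0..t."""

            def __init__(self, horizon):
                # Random lower-triangular coefficients; G[t][s] weights input bit s in output bit t.
                self.G = [[random.randint(0, 1) for _ in range(t + 1)] for t in range(horizon)]

            def encode(self, bits):
                # Parity (mod-2 sum) of the selected past inputs at every time step.
                return [sum(g * b for g, b in zip(self.G[t], bits[: t + 1])) % 2
                        for t in range(len(bits))]

        enc = ToyCausalLinearEncoder(horizon=8)
        a = [1, 0, 1, 1, 0, 0, 1, 0]
        b = [1, 0, 1, 0, 0, 0, 1, 0]  # first differs from a at time 3
        print(enc.encode(a))
        print(enc.encode(b))  # equal up to time 2; afterwards each bit differs with prob. 1/2

    A true tree code additionally requires the post-divergence Hamming distance to grow in proportion to the decoding delay, which is the property the paper shows random linear codes satisfy with high probability.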

    Operational experience, improvements, and performance of the CDF Run II silicon vertex detector

    The Collider Detector at Fermilab (CDF) pursues a broad physics program at Fermilab's Tevatron collider. Between Run II commissioning in early 2001 and the end of operations in September 2011, the Tevatron delivered 12 fb^-1 of integrated luminosity of p-pbar collisions at sqrt(s) = 1.96 TeV. Many physics analyses undertaken by CDF require heavy flavor tagging with large charged particle tracking acceptance. To realize these goals, in 2001 CDF installed eight layers of silicon microstrip detectors around its interaction region. These detectors were designed for 2-5 years of operation and radiation doses up to 2 Mrad (20 kGy), and were expected to be replaced in 2004. The sensors were not replaced, and the Tevatron run was extended several years beyond its design, exposing the sensors and electronics to much higher radiation doses than anticipated. In this paper we describe the operational challenges encountered over the past 10 years of running the CDF silicon detectors, the preventive measures undertaken, and the improvements made along the way to ensure their optimal performance for collecting high-quality physics data. In addition, we describe the quantities and methods used to monitor radiation damage in the sensors, and we summarize the detector performance quantities important to CDF's physics program, including vertex resolution, heavy flavor tagging, and silicon vertex trigger performance. (Comment: preprint accepted for publication in Nuclear Instruments and Methods A, 07/31/2013)
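
    As a unit check on the design dose quoted above (plain SI arithmetic; only the 2 Mrad figure comes from the paper):

        # 1 rad = 0.01 gray by definition, so 2 Mrad = 2e6 rad * 0.01 Gy/rad = 20 kGy.
        RAD_TO_GY = 0.01
        dose_gy = 2e6 * RAD_TO_GY
        print(f"{dose_gy:.0f} Gy = {dose_gy / 1e3:.0f} kGy")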

    One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks

    This paper formulates and studies a general distributed field reconstruction problem using a dense network of noisy one-bit randomized scalar quantizers in the presence of additive observation noise of unknown distribution. A constructive quantization, coding, and field reconstruction scheme is developed, and an upper bound on the associated mean squared error (MSE) at any point and any snapshot is derived in terms of the local spatio-temporal smoothness properties of the underlying field. It is shown that when the noise, sensor placement pattern, and sensor schedule satisfy certain weak technical requirements, it is possible to drive the MSE to zero with increasing sensor density at points of field continuity while ensuring that the per-sensor bitrate and sensing-related network overhead rate simultaneously go to zero. The proposed scheme achieves the order-optimal MSE versus sensor density scaling behavior for the class of spatially constant spatio-temporal fields. (Comment: fixed typos, otherwise same as V2. 27 pages in one-column review format, 4 figures. Submitted to IEEE Transactions on Signal Processing. Current version is updated for journal submission: revised author list, modified formulation and framework. Previous version appeared in Proceedings of the Allerton Conference on Communication, Control, and Computing 200)
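
    A minimal sketch of the flavor of scheme described above, under assumptions of my own (uniform dither, a known amplitude bound B with |field + noise| ≤ B, and plain averaging as the reconstruction): each sensor reports a single sign bit of its dithered noisy sample, yet the average of many such bits converges to the field value, so the pointwise MSE falls as sensor density grows. This illustrates randomized one-bit sensing generically, not the paper's exact quantization and coding scheme:

        import random

        random.seed(1)

        B = 2.0          # assumed amplitude bound: |field + noise| <= B
        NOISE_AMP = 0.5  # zero-mean bounded observation noise ~ Uniform(-NOISE_AMP, NOISE_AMP)

        def one_bit_sensor(x):
            """One dithered 1-bit measurement: sign(x + noise + dither), dither ~ Uniform(-B, B)."""
            noise = random.uniform(-NOISE_AMP, NOISE_AMP)
            dither = random.uniform(-B, B)
            return 1.0 if x + noise + dither >= 0 else -1.0

        def reconstruct(x, n_sensors):
            """Rescaled average of the sign bits. With |x + noise| <= B the uniform dither
            gives E[bit] = x / B, so the estimate is unbiased and its MSE decays like 1/n_sensors."""
            return B * sum(one_bit_sensor(x) for _ in range(n_sensors)) / n_sensors

        true_value = 0.7
        for n in (10, 1_000, 100_000):
            print(n, round(reconstruct(true_value, n), 3))  # estimate tightens toward 0.7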