
    Fast and Accurate Computation of the Round-Off Noise of LTI Systems

    Since its introduction in the last decade, affine arithmetic (AA) has proven useful for speeding up computation procedures in a wide variety of areas. Its use has recently been suggested by several authors for determining the optimum set of finite word-lengths of digital signal processing systems, but the existing procedures yield pessimistic results. The aim here is to present a novel approach to computing the round-off noise (RON) using AA that is both faster and more accurate than existing techniques, and to justify why this type of computation is restricted to linear time-invariant (LTI) systems. Through a novel definition of AA-based models, this is the first methodology to perform interval-based computation of the RON. Comparative results show that the proposed technique is faster than existing numerical ones, with observed speed-ups ranging from 1.6 to 20.48, and that the application of discrete noise models leads to results up to five times more accurate than traditional estimations.
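
    As a rough illustration of the kind of interval bookkeeping AA enables, the sketch below propagates per-step rounding errors through a first-order IIR recursion using affine forms. The class, function names, and the toy filter are hypothetical; this is not the estimation procedure of the paper, only a minimal example of affine-arithmetic error propagation.

        # Minimal sketch of affine-arithmetic propagation of quantization noise
        # through a first-order IIR filter y[n] = a*y[n-1] + x[n] + e[n].
        # Illustrative only; not the RON estimation method of the paper.

        class AffineForm:
            """x = x0 + sum_i xi * eps_i, with each eps_i in [-1, 1]."""
            def __init__(self, center, terms=None):
                self.center = center
                self.terms = dict(terms or {})   # noise-symbol id -> coefficient

            def scale(self, c):
                return AffineForm(c * self.center,
                                  {k: c * v for k, v in self.terms.items()})

            def add(self, other):
                terms = dict(self.terms)
                for k, v in other.terms.items():
                    terms[k] = terms.get(k, 0.0) + v
                return AffineForm(self.center + other.center, terms)

            def radius(self):
                return sum(abs(v) for v in self.terms.values())

        def ron_bound(a, q, steps):
            """Bound on |output error| after `steps` iterations when every
            multiplication injects a rounding error in [-q/2, q/2]."""
            y_err = AffineForm(0.0)
            for n in range(steps):
                e_n = AffineForm(0.0, {n: q / 2.0})   # fresh noise symbol per step
                y_err = y_err.scale(a).add(e_n)
            return y_err.radius()

        print(ron_bound(a=0.9, q=2.0 ** -15, steps=200))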

    Optimization of Planck/LFI on-board data handling

    To assess stability against 1/f noise, the Low Frequency Instrument (LFI) onboard the Planck mission will acquire data at a rate much higher than allowed by its telemetry bandwidth of 35.5 kbps. The data are processed by an onboard pipeline, followed on ground by a reversing step. This paper illustrates the LFI scientific onboard processing used to fit the allowed data rate. This is a lossy process tuned by a set of five parameters (Naver, r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level of distortion introduced by the onboard processing, EpsilonQ, as a function of these parameters, and describes the method used to optimize the onboard processing chain. The tuning procedure is based on an optimization algorithm applied to unprocessed and uncompressed raw data provided by simulations, prelaunch tests, or data taken from LFI operating in diagnostic mode. All the needed optimization steps are performed by an automated tool, OCA2, which ends with optimized parameters and produces a set of statistical indicators, among them the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr = 2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up the process, an analytical model is developed that extracts most of the relevant information on EpsilonQ and Cr as a function of the signal statistics and the processing parameters; this model will also be of interest for the instrument data analysis. The method was applied during ground tests when the instrument was operating in conditions representative of flight. Optimized parameters were obtained and the performance was verified: the required data rate of 35.5 kbps was achieved while keeping EpsilonQ at a level of 3.8% of the white-noise rms, well within the requirements.
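
    For readers unfamiliar with EpsilonQ-style figures of merit, the following much simplified sketch mimics the flavour of such a lossy onboard step: average Naver samples, quantize with step q and offset O, reverse the step on ground, and compare the added distortion to the white-noise rms. The mixing parameters r1 and r2 are omitted and all function names are invented; this is neither the OCA2 tool nor the flight pipeline.

        # Simplified, hypothetical sketch of an average-quantize-reverse chain
        # and of a distortion figure measured against the white-noise rms.

        import numpy as np

        def onboard_process(raw, naver, q, offset):
            averaged = raw[: len(raw) // naver * naver].reshape(-1, naver).mean(axis=1)
            quantized = np.round(averaged / q + offset)    # integers sent to ground
            return averaged, quantized

        def ground_reverse(quantized, q, offset):
            return (quantized - offset) * q

        rng = np.random.default_rng(0)
        white_rms = 1.0
        raw = rng.normal(0.0, white_rms, 10_000)           # white-noise-like signal

        averaged, tm = onboard_process(raw, naver=4, q=0.3, offset=0.0)
        recovered = ground_reverse(tm, q=0.3, offset=0.0)

        epsilon_q = np.std(recovered - averaged) / white_rms   # distortion vs. noise rms
        print(f"EpsilonQ ~ {epsilon_q:.1%} of the white-noise rms")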

    Finite wordlength effects in fixed-point implementations of linear systems

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 173-194). By Vinay Mohta. M.Eng.

    Pacti: Scaling Assume-Guarantee Reasoning for System Analysis and Design

    Contract-based design is a method to facilitate modular system design. While there has been substantial progress on the theory of contracts, there has been less progress on scalable algorithms for the algebraic operations of this theory. In this paper, we present 1) principles for implementing a contract-based design tool at scale and 2) Pacti, a tool that can efficiently compute these operations. We then illustrate the use of Pacti in a variety of case studies.

    Signal processing using short word-length

    Recently, short word-length (normally 1-bit or 2-bit) processing has become a promising technique. However, there are unresolved issues in sigma-delta modulation, which is the basis for 1b/2b systems, and these issues have hindered the full adoption of single-bit techniques in industry. Among these problems are the stability of high-order modulators and limit-cycle behaviour. More importantly, there is no adaptive LMS structure of any kind in the 1b/2b domain; the challenge is the harsh quantization, which prevents straightforward LMS application. In this thesis, the focus is on three axes: designing new single-bit DSP applications, proposing novel approaches for stability analysis, and tackling the unresolved problems of 1b/2b adaptive filtering. Two structures for 1b digital comb filtering are proposed. A ternary DC-blocker structure is also presented and its performance is tested, and a single-bit multiplierless DC-blocking structure is proposed. The stability of a single-bit high-order sigma-delta modulator is studied under dc inputs, and a new approach for stability analysis is proposed based on an analogy with PLL analysis. Finally, we succeeded in designing 1b/2b Wiener-like filtering and introduced, for the first time, three 1b/2b adaptive schemes.
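
    For context, the sketch below implements a generic first-order sigma-delta loop that turns an oversampled signal into a +/-1 bit-stream, the kind of 1b signal discussed above. It is a textbook loop under assumed parameters, not one of the structures proposed in the thesis.

        # Generic first-order sigma-delta modulator producing a +/-1 bit-stream.
        # Textbook loop for illustration; not a structure from the thesis.

        import numpy as np

        def sigma_delta_1bit(x):
            """Encode x (values roughly in [-1, 1]) as a +/-1 bit-stream."""
            integrator = 0.0
            bits = np.empty_like(x)
            for n, sample in enumerate(x):
                integrator += sample - (bits[n - 1] if n else 0.0)  # feedback of last bit
                bits[n] = 1.0 if integrator >= 0.0 else -1.0
            return bits

        t = np.arange(0, 1, 1 / 8000)
        x = 0.5 * np.sin(2 * np.pi * 50 * t)     # slow tone, heavily oversampled
        y = sigma_delta_1bit(x)

        # A low-pass filter (here a plain moving average) recovers an
        # approximation of x from the bit-stream.
        recovered = np.convolve(y, np.ones(64) / 64, mode="same")
        print(np.max(np.abs(recovered[200:-200] - x[200:-200])))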

    Digital pulse processing

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 71-74). This thesis develops an exact approach for processing pulse signals from an integrate-and-fire system directly in the time domain. Processing is deterministic and built from simple asynchronous finite-state machines that can perform general piecewise-linear operations. The pulses can then be converted back into an analog or fixed-point digital representation through a filter-based reconstruction. Integrate-and-fire is shown to be equivalent to the first-order sigma-delta modulation used in oversampled noise-shaping converters. The encoder circuits are well known and have simple construction using both current and next-generation technologies. Processing in the pulse domain provides many benefits, including lower area and power consumption, error tolerance, signal serialization, and simple conversion for mixed-signal applications. To study these systems, discrete-event simulation software and an FPGA hardware platform are developed. Many applications of pulse processing are explored, including filtering and signal processing, solving differential equations, optimization, the min-sum / Viterbi algorithm, and the decoding of low-density parity-check (LDPC) codes. These applications often match the performance of ideal continuous-time analog systems but require only simple digital hardware. Keywords: time-encoding, spike processing, neuromorphic engineering, bit-stream, delta-sigma, sigma-delta converters, binary-valued continuous-time, relaxation-oscillators. By Martin McCormick. S.M.
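
    As a rough companion to the abstract, the sketch below encodes a positive signal with a generic integrate-and-fire rule: fire a pulse and subtract the threshold whenever the running integral crosses it. It is a plain software model under assumed parameters, not the asynchronous finite-state-machine hardware developed in the thesis.

        # Generic integrate-and-fire time encoder: pulse density tracks the
        # input, analogous to first-order sigma-delta. Illustrative only.

        import numpy as np

        def integrate_and_fire(x, dt, threshold):
            """Return pulse times for a non-negative signal x sampled at step dt."""
            integral, pulses = 0.0, []
            for n, sample in enumerate(x):
                integral += sample * dt
                while integral >= threshold:      # may fire more than once per step
                    pulses.append(n * dt)
                    integral -= threshold
            return np.array(pulses)

        dt = 1e-4
        t = np.arange(0, 0.2, dt)
        x = 1.0 + 0.5 * np.sin(2 * np.pi * 20 * t)   # strictly positive input
        pulses = integrate_and_fire(x, dt, threshold=1e-3)

        # Counting pulses in short windows gives a coarse reconstruction.
        counts, _ = np.histogram(pulses, bins=np.linspace(0, 0.2, 21))
        print(counts)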

    Massive Multi-Antenna Communications with Low-Resolution Data Converters

    Massive multi-user (MU) multiple-input multiple-output (MIMO) will be a core technology in future cellular communication systems. In massive MU-MIMO systems, the number of antennas at the base station (BS) is scaled up by several orders of magnitude compared to traditional multi-antenna systems, with the goals of enabling large gains in capacity and energy efficiency. However, scaling up the number of active antenna elements at the BS will lead to significant increases in power consumption and system costs unless power-efficient and low-cost hardware components are used. In this thesis, we investigate the performance of massive MU-MIMO systems for the case when the BS is equipped with low-resolution data converters. First, we consider the massive MU-MIMO uplink for the case when the BS uses low-resolution analog-to-digital converters (ADCs) to convert the received signal into the digital domain. Our focus is on the case where neither the transmitter nor the receiver has any a priori channel state information (CSI), which implies that the channel realizations have to be learned through pilot transmission followed by BS-side channel estimation based on coarsely quantized observations. We derive a low-complexity channel estimator and present lower bounds and closed-form approximations for the information-theoretic rates achievable with the proposed channel estimator together with conventional linear detection algorithms. Second, we consider the massive MU-MIMO downlink for the case when the BS uses low-resolution digital-to-analog converters (DACs) to generate the transmit signal. We derive lower bounds and closed-form approximations for the achievable rates with linear precoding under the assumption that the BS has access to perfect CSI. We also propose novel nonlinear precoding algorithms that are shown to significantly outperform linear precoding for the extreme case of 1-bit DACs. Specifically, for the case of symbol-rate 1-bit DACs and frequency-flat channels, we develop a multitude of nonlinear precoders that trade between performance and complexity. We then extend the most promising nonlinear precoders to the case of oversampling 1-bit DACs and orthogonal frequency-division multiplexing for operation over frequency-selective channels. Third, we extend our analysis to take into account other hardware imperfections such as nonlinear amplifiers and local oscillators with phase noise. The results in this thesis suggest that the resolution of the ADCs and DACs in massive MU-MIMO systems can be reduced significantly compared to what is used in today's state-of-the-art communication systems, without significantly reducing the overall system performance.
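
    To make the 1-bit setting concrete, the sketch below simulates a toy uplink in which each receive antenna quantizes its in-phase and quadrature samples to one bit, after which a simple maximum-ratio-combining detector is applied to the quantized observations. The dimensions and detector are illustrative assumptions; none of this reproduces the estimators or bounds derived in the thesis.

        # Toy 1-bit-ADC uplink: r = sign(Re(Hx + n)) + j*sign(Im(Hx + n)),
        # followed by maximum-ratio combining. Illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)
        B, U = 128, 8                               # BS antennas, single-antenna users

        H = (rng.normal(size=(B, U)) + 1j * rng.normal(size=(B, U))) / np.sqrt(2)
        x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=U) / np.sqrt(2)  # QPSK
        n = (rng.normal(size=B) + 1j * rng.normal(size=B)) * np.sqrt(0.05)

        y = H @ x + n
        r = np.sign(y.real) + 1j * np.sign(y.imag)  # 1-bit ADC per I/Q branch

        x_hat = H.conj().T @ r                      # maximum-ratio combining
        detected = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
        print("symbol errors:", np.sum(detected != x))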

    The design of multiconfiguration axisymmetric optical systems
