    Embedded ISA Support for Enhanced Floating-Point to Fixed-Point ANSI C Compilation

    No full text
    Recently, tools for automating the translation of floating-point signal-processing applications written in ANSI C into fixed-point have been presented [34, 17, 8]. This paper introduces a novel fixed-point instruction-set operation, Fractional Multiplication with internal Left Shift (FMLS), and an associated translation algorithm, Intermediate-Result-Profiling based Shift Absorption (IRP-SA), that together enhance fixed-point rounding-noise and runtime performance. A significant feature of FMLS is that it is well suited to the latest generation of embedded processors, which maintain relatively homogeneous register architectures. FMLS may improve the rounding-noise performance of fractional multiplication operations in three ways, depending upon the specific fixed-point scaling properties an application exhibits. The IRP-SA algorithm enhances this by exploiting the modular nature of 2's-complement addition, which allows the discarding of most-significant bits that are redundant due to inter-operand correlations. Rounding-noise reductions equivalent to carrying as much as 2.0 additional bits of precision throughout the computation are presented. Furthermore, by encoding a very limited set of output shift values (two left, one left, none, and one right) into the FMLS operation, speedups of up to 13 percent are observed.
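
    As a rough illustration of the idea described above, the C sketch below shows what an FMLS-style fractional multiply might look like for 32-bit Q31 operands: the output shift is applied to the full 64-bit product before it is narrowed back to 32 bits, so the scaling shift costs no additional result bits. The function name fmls_q31 and the shift encoding are assumptions made here for exposition, not the paper's exact ISA definition, and saturation of the Q31 corner case is omitted for brevity.

        #include <stdint.h>

        /* Illustrative FMLS-style fractional multiply for Q31 operands.
         * The product of two Q31 values has 62 fractional bits; a plain
         * fractional multiply shifts it right by 31 to return to Q31,
         * while FMLS absorbs an output shift of -1, 0, +1, or +2 bits
         * into that same narrowing step.  Saturation of the
         * -1.0 * -1.0 corner case is omitted for brevity.             */
        static int32_t fmls_q31(int32_t a, int32_t b, int lshift)
        {
            int64_t prod = (int64_t)a * (int64_t)b;   /* Q62 intermediate */
            return (int32_t)(prod >> (31 - lshift));  /* shift + truncate */
        }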

    On the Generation of Precise Fixed-Point Expressions

    Get PDF
    Several problems in the implementation of control systems, signal-processing systems, and scientific computing systems reduce to compiling a polynomial expression over the reals into an imperative program using fixed-point arithmetic. Fixed-point arithmetic only approximates real values, and its operators do not have the fundamental properties of real arithmetic, such as associativity. Consequently, a naive compilation process can yield a program that significantly deviates from the real polynomial, whereas a different order of evaluation can result in a program that is close to the real value on all inputs in its domain. We present a compilation scheme from real-valued arithmetic expressions to fixed-point arithmetic programs. Given a real-valued polynomial expression t, we find an expression t' that is equivalent to t over the reals, but whose implementation as a series of fixed-point operations minimizes the error between the fixed-point value and the value of t over the space of all inputs. We show that the corresponding decision problem, checking whether there is an implementation t' of t whose error is less than a given constant, is NP-hard. We then propose a solution technique based on genetic programming. Our technique evaluates the fitness of each candidate program using a static analysis based on affine arithmetic. We show that our tool can significantly reduce the error in the fixed-point implementation on a set of linear control system benchmarks. For example, our tool found implementations whose errors are only one half of the errors in the original fixed-point expressions.
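
    To make the reordering effect concrete, the short C program below is a sketch under local assumptions, not the paper's tool (which searches rewritings with genetic programming and bounds error via affine arithmetic). It evaluates the real-equivalent expressions (a + b) * x and a*x + b*x in Q15 arithmetic and prints their differing deviations from the exact real value; the helpers to_q15, from_q15, and mul_q15 are hypothetical local conveniences, not an API from the paper.

        #include <stdint.h>
        #include <stdio.h>

        /* Q15 helpers: 1 sign bit, 15 fractional bits. */
        static int16_t to_q15(double x)    { return (int16_t)(x * 32768.0); }
        static double  from_q15(int16_t x) { return x / 32768.0; }
        static int16_t mul_q15(int16_t a, int16_t b)
        {
            return (int16_t)(((int32_t)a * b) >> 15);  /* truncating multiply */
        }

        int main(void)
        {
            double a = 0.0123, b = 0.4567, x = 0.789;
            int16_t qa = to_q15(a), qb = to_q15(b), qx = to_q15(x);

            /* (a + b) * x and a*x + b*x are equal over the reals but
             * accumulate rounding error differently in Q15.            */
            int16_t t1 = mul_q15((int16_t)(qa + qb), qx);
            int16_t t2 = (int16_t)(mul_q15(qa, qx) + mul_q15(qb, qx));

            double exact = (a + b) * x;
            printf("exact=%.6f  (a+b)*x=%.6f  a*x+b*x=%.6f\n",
                   exact, from_q15(t1), from_q15(t2));
            return 0;
        }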