6 research outputs found

    Synthesis of Minimal Error Control Software

    Software implementations of controllers for physical systems are at the core of many embedded systems. The design of controllers uses the theory of dynamical systems to construct a mathematical control law that ensures that the controlled system has certain properties, such as asymptotic convergence to an equilibrium point, while optimizing some performance criteria. However, owing to quantization errors arising from the use of fixed-point arithmetic, the implementation of this control law can only guarantee practical stability: under the actions of the implementation, the trajectories of the controlled system converge to a bounded set around the equilibrium point, and the size of the bounded set is proportional to the error in the implementation. The problem of verifying whether a controller implementation achieves practical stability for a given bounded set has been studied before. In this paper, we change the emphasis from verification to automatic synthesis. Using synthesis, the need for formal verification can be considerably reduced, thereby reducing both the design time and the design cost of embedded control software. We give a methodology and a tool to synthesize embedded control software that is Pareto optimal w.r.t. both performance criteria and practical stability regions. Our technique combines static analysis, to estimate quantization errors for specific controller implementations, with stochastic local search over the space of possible controllers using particle swarm optimization. The effectiveness of our technique is illustrated on various standard control systems: in most examples, we obtain controllers whose LQR-LQG performance is close to optimal but whose implementation errors, and hence regions of practical stability, are several times smaller. Comment: 18 pages, 2 figures
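    The abstract includes no code, but the search loop it describes can be sketched as a basic particle swarm optimization over controller parameters. The sketch below is a minimal illustration, not the authors' tool: the fitness function, its weighting, and the 8-fractional-bit rounding used as a stand-in for the paper's static quantization-error analysis are all assumptions.

    import numpy as np

    def pso(fitness, dim, n_particles=30, iters=100,
            w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        """Minimize `fitness` over R^dim with a basic particle swarm."""
        lo, hi = bounds
        pos = np.random.uniform(lo, hi, (n_particles, dim))
        vel = np.zeros((n_particles, dim))
        pbest = pos.copy()                      # each particle's best position so far
        pbest_val = np.array([fitness(p) for p in pos])
        g = pbest[np.argmin(pbest_val)].copy()  # swarm-wide best position

        for _ in range(iters):
            r1 = np.random.rand(n_particles, dim)
            r2 = np.random.rand(n_particles, dim)
            # velocity update: inertia plus pulls toward personal and global bests
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([fitness(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved] = pos[improved]
            pbest_val[improved] = vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Hypothetical fitness: a placeholder performance cost plus a crude stand-in
    # for the static quantization-error estimate (distance of the gains from the
    # nearest 8-fractional-bit fixed-point values).
    def fitness(gains):
        performance_cost = float(np.sum(gains ** 2))
        quantization_error = float(np.sum(np.abs(gains - np.round(gains * 256) / 256)))
        return performance_cost + 10.0 * quantization_error

    best_gains, best_val = pso(fitness, dim=4)
    print("best gains:", best_gains, "fitness:", best_val)

    In the paper the two objectives are explored as a Pareto front; the single weighted sum used here is only the simplest way to fold them into one fitness value.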

    Semantics-preserving cosynthesis of cyber-physical systems


    On the Generation of Precise Fixed-Point Expressions

    Several problems in the implementation of control systems, signal-processing systems, and scientific computing systems reduce to compiling a polynomial expression over the reals into an imperative program using fixed-point arithmetic. Fixed-point arithmetic only approximates real values, and its operators do not have the fundamental properties of real arithmetic, such as associativity. Consequently, a naive compilation process can yield a program that deviates significantly from the real polynomial, whereas a different order of evaluation can result in a program that is close to the real value on all inputs in its domain. We present a compilation scheme from real-valued arithmetic expressions to fixed-point arithmetic programs. Given a real-valued polynomial expression t, we find an expression t' that is equivalent to t over the reals, but whose implementation as a series of fixed-point operations minimizes the error between the fixed-point value and the value of t over the space of all inputs. We show that the corresponding decision problem, checking whether there is an implementation t' of t whose error is less than a given constant, is NP-hard. We then propose a solution technique based on genetic programming. Our technique evaluates the fitness of each candidate program using a static analysis based on affine arithmetic. We show that our tool can significantly reduce the error in the fixed-point implementation on a set of linear control system benchmarks. For example, our tool found implementations whose errors are only half those of the original fixed-point expressions.
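    To make the evaluation-order problem concrete, the following self-contained sketch (not the paper's tool) re-quantizes after every operation with an assumed 8 fractional bits and compares the worst-case error of two orderings that are equivalent over the reals; all function names and the bit-width are illustrative assumptions.

    FRAC_BITS = 8
    SCALE = 1 << FRAC_BITS

    def fx(x):
        """Round a real value to the nearest value with 8 fractional bits."""
        return round(x * SCALE) / SCALE

    def fx_add(a, b):
        return fx(a + b)   # every fixed-point operation re-quantizes its result

    def fx_mul(a, b):
        return fx(a * b)

    # Two expressions that are equivalent over the reals:
    #   distributed: a*x + a*y        factored: a*(x + y)
    def distributed(a, x, y):
        return fx_add(fx_mul(a, x), fx_mul(a, y))

    def factored(a, x, y):
        return fx_mul(a, fx_add(x, y))

    # Worst-case deviation from the real value over a grid of quantized inputs.
    grid = [fx(i / 10) for i in range(-10, 11)]
    err_d = max(abs(distributed(a, x, y) - (a * x + a * y))
                for a in grid for x in grid for y in grid)
    err_f = max(abs(factored(a, x, y) - a * (x + y))
                for a in grid for x in grid for y in grid)
    print(f"worst-case error, distributed form: {err_d:.6f}")
    print(f"worst-case error, factored form:    {err_f:.6f}")

    The factored form performs one multiplication instead of two and therefore rounds one fewer time, which is the kind of error-reducing rewrite the genetic search looks for.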

    Programming with Numerical Uncertainties

    Get PDF
    Numerical software, common in scientific computing and embedded systems, inevitably uses an approximation of the real arithmetic in which most algorithms are designed. In many domains, roundoff errors are not the only source of inaccuracy: measurement and truncation errors further increase the uncertainty of the computed results. Adequate tools are needed to help users select suitable approximations (data types and algorithms) which satisfy their accuracy requirements, especially for safety-critical applications. Determining that a computation produces accurate results is challenging. Roundoff errors and error propagation depend on the ranges of variables in complex and non-obvious ways; even determining these ranges accurately for nonlinear programs poses a significant challenge. In numerical loops, roundoff errors in general grow unboundedly. Finally, due to numerical errors, the control flow in the finite-precision implementation may diverge from the ideal real-valued one by taking a different branch, producing a result that is far from the expected one. In this thesis, we present techniques and tools for automated and sound analysis, verification and synthesis of numerical programs. We focus on numerical errors due to roundoff from floating-point and fixed-point arithmetic, external input uncertainties, and truncation errors. Our work uses interval and affine arithmetic together with Satisfiability Modulo Theories (SMT) technology, as well as analytical properties of the underlying mathematical problems. This combination of techniques enables us to compute sound and yet accurate error bounds for nonlinear computations, determine closed-form symbolic invariants for unbounded loops, and quantify the effects of discontinuities on numerical errors. We can furthermore certify the results of self-correcting iterative algorithms. Accuracy usually comes at the expense of resource efficiency: more precise data types need more time, space and energy. We propose a programming model in which the scientist writes his or her numerical program in a real-valued specification language with explicit error annotations; it is then the task of our verifying compiler to select a suitable floating-point or fixed-point data type which guarantees the needed accuracy. Sometimes accuracy can be gained by simply re-arranging the non-associative finite-precision computation. We present a scalable technique that searches for a better evaluation order and show that the gains can be substantial. We have implemented all our techniques and evaluated them on a number of benchmarks from scientific computing and embedded systems, with promising results.
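    As a rough illustration of the interval-style error analysis the abstract mentions, the sketch below propagates a real-valued range together with a worst-case absolute roundoff bound through addition and multiplication. The class name, the first-order error-propagation rules, and the double-precision unit roundoff are simplifying assumptions and do not reproduce the thesis's tools.

    # Unit roundoff for IEEE-754 double precision (an assumed target format).
    EPS = 2.0 ** -53

    class ErrInterval:
        """A real range [lo, hi] paired with a worst-case absolute roundoff bound."""

        def __init__(self, lo, hi, err=0.0):
            self.lo, self.hi, self.err = lo, hi, err

        def _max_abs(self):
            return max(abs(self.lo), abs(self.hi))

        def __add__(self, other):
            lo, hi = self.lo + other.lo, self.hi + other.hi
            # propagated input errors add up; the addition itself rounds once
            err = self.err + other.err + max(abs(lo), abs(hi)) * EPS
            return ErrInterval(lo, hi, err)

        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            lo, hi = min(products), max(products)
            # first-order propagation (second-order err*err term dropped),
            # plus one fresh rounding for the multiplication itself
            err = (self._max_abs() * other.err + other._max_abs() * self.err
                   + max(abs(lo), abs(hi)) * EPS)
            return ErrInterval(lo, hi, err)

    # x lies in [1, 2] with an external measurement uncertainty of 1e-6;
    # bound the range and worst-case error of x*x + x.
    x = ErrInterval(1.0, 2.0, err=1e-6)
    result = x * x + x
    print(f"range: [{result.lo}, {result.hi}]  worst-case error: {result.err:.3e}")

    Affine arithmetic, which the thesis also uses, tracks correlations between error terms and therefore gives tighter bounds than the plain intervals shown here.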