
    Accurate and efficient evaluation of the a posteriori error estimator in the reduced basis method

    The reduced basis method is a model reduction technique yielding substantial savings of computational time when a solution to a parametrized equation has to be computed for many values of the parameter. Certification of the approximation is possible by means of an a posteriori error bound. Under appropriate assumptions, this error bound is computed with an algorithm of complexity independent of the size of the full problem. In practice, the evaluation of the error bound can become very sensitive to round-off errors. We propose herein an explanation of this fact. A first remedy was proposed in [F. Casenave, Accurate a posteriori error evaluation in the reduced basis method. C. R. Math. Acad. Sci. Paris 350 (2012) 539-542]. Herein, we improve this remedy by proposing a new approximation of the error bound using the Empirical Interpolation Method (EIM). This method achieves higher levels of accuracy and potentially requires fewer precomputations than the usual formula. A version of the EIM stabilized with respect to round-off errors is also derived. The method is illustrated on a simple one-dimensional diffusion problem and on a three-dimensional acoustic scattering problem solved by a boundary element method. Comment: 26 pages, 10 figures. ESAIM: Mathematical Modelling and Numerical Analysis, 201
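
    The abstract points to round-off sensitivity in the error-bound evaluation. As a minimal illustration only (this is not the paper's estimator, just the cancellation phenomenon behind such expanded residual-norm formulas), the sketch below compares a direct evaluation of a squared norm with its algebraically expanded form; when the two vectors are close, the expanded form loses most of its significant digits.

        import numpy as np

        # Illustrative only: ||f - u||^2 computed directly vs. via the expanded
        # form ||f||^2 - 2 f.u + ||u||^2, the kind of expansion an offline/online
        # error-bound formula relies on.  When u is close to f, the expanded sum
        # subtracts nearly equal large terms and round-off dominates the result.
        rng = np.random.default_rng(0)
        f = rng.standard_normal(1000)
        u = f + 1e-8 * rng.standard_normal(1000)   # approximation close to f

        direct = np.linalg.norm(f - u) ** 2
        expanded = f @ f - 2.0 * (f @ u) + u @ u

        print(f"direct   = {direct:.3e}")
        print(f"expanded = {expanded:.3e}")   # inaccurate; can even come out negative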

    Transfer Function Synthesis without Quantifier Elimination

    Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons. Comment: 37 pages, extended version of ESOP 2011 paper
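
    As a toy illustration of why a block-level transfer function can beat instruction-by-instruction composition (the interval encoding and function names below are ours, not the paper's synthesis algorithm), consider the block y = x; z = x - y analysed with intervals:

        # Per-instruction interval transfer functions vs. one transfer function
        # for the whole block  y = x; z = x - y.

        def sub_intervals(a, b):
            """Interval subtraction [a] - [b]."""
            return (a[0] - b[1], a[1] - b[0])

        def per_instruction(x):
            # Instruction 1: y = x        (copy the interval)
            y = x
            # Instruction 2: z = x - y    (interval arithmetic forgets that y == x)
            return sub_intervals(x, y)

        def whole_block(x):
            # Block-level semantics: z = x - x = 0 for every concrete x.
            return (0, 0)

        x = (0, 10)
        print(per_instruction(x))   # (-10, 10): sound but imprecise
        print(whole_block(x))       # (0, 0):    the best transformer for the block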

    Improving Strategies via SMT Solving

    We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. To do so, we extend the strategy improvement algorithm of [Gawlitza and Seidl, 2007]. Applying their method directly would require solving an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, on contrived examples, performs exponentially many strategy improvement steps in the worst case; this is unsurprising, since we show that the associated abstract reachability problem is Π₂ᵖ-complete.
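
    The following is a much-simplified sketch in the spirit of the approach, not the paper's algorithm: it assumes the z3 Python bindings and uses an SMT optimization query (standing in for the linear-programming evaluation of a strategy) to improve a one-variable template bound until it is inductive.

        from z3 import Real, Optimize, sat

        # Toy program: x := 0; while x <= 99: x := x + 2.
        # Template: an upper bound  x <= b  at the loop head.
        x, xp = Real('x'), Real('xp')
        b = 0.0                                   # bound established by the initialisation

        while True:
            opt = Optimize()
            # one loop transition starting from a state allowed by the candidate bound
            opt.add(x <= b, x <= 99, xp == x + 2)
            h = opt.maximize(xp)
            assert opt.check() == sat
            best = float(opt.upper(h).as_fraction())
            if best <= b:                         # nothing escapes: the bound is inductive
                break
            b = best                              # improve the candidate bound

        print("least inductive upper bound on x:", b)   # 101.0 for this toy loop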

    Software for Exact Integration of Polynomials over Polyhedra

    We are interested in the fast computation of the exact value of integrals of polynomial functions over convex polyhedra. We present speed-ups and extensions of the algorithms presented in previous work, describe the new software implementation, and provide benchmark computations. The computation of integrals of polynomials over polyhedral regions has many applications; here we demonstrate our algorithmic tools by solving a challenge from combinatorial voting theory. Comment: Major update
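
    As a point of reference only (the software described above relies on dedicated polyhedral decomposition algorithms, not naive iterated integration), a computer algebra system can reproduce one such exact integral:

        # Exactly integrate the polynomial x^2*y over the triangle with vertices
        # (0,0), (1,0), (0,1) by iterated symbolic integration.
        from sympy import symbols, integrate

        x, y = symbols('x y')
        value = integrate(integrate(x**2 * y, (y, 0, 1 - x)), (x, 0, 1))
        print(value)   # 1/60, an exact rational result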

    Ray casting implicit fractal surfaces with reduced affine arithmetic

    A method is presented for ray casting implicit surfaces defined by fractal combinations of procedural noise functions. The method is robust and uses affine arithmetic to bound the variation of the implicit function along a ray. The method is also efficient due to a modification in the affine arithmetic representation that introduces a condensation step at the end of every non-affine operation. We show that our method is able to retain the tight estimation capabilities of affine arithmetic for ray casting implicit surfaces made from procedural noise functions, while being faster to compute and more efficient to store.
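
    A rough sketch of the underlying idea, with class names and an aggressive condensation policy chosen by us rather than taken from the paper: an affine form carries a centre and a list of partial deviations, and after every non-affine operation the accumulated deviations are condensed so the representation stays compact.

        class AffineForm:
            def __init__(self, centre, deviations):
                self.centre = centre
                self.deviations = list(deviations)   # coefficients of the noise symbols

            @classmethod
            def from_interval(cls, lo, hi):
                return cls((lo + hi) / 2.0, [(hi - lo) / 2.0])

            def interval(self):
                r = sum(abs(d) for d in self.deviations)
                return (self.centre - r, self.centre + r)

            def condense(self, keep=1):
                """Collapse all but the `keep` largest deviations into one error term."""
                devs = sorted(self.deviations, key=abs, reverse=True)
                kept, rest = devs[:keep], devs[keep:]
                self.deviations = kept + ([sum(abs(d) for d in rest)] if rest else [])
                return self

            def square(self):
                """Non-affine operation x*x with a conservative remainder term."""
                c, devs = self.centre, self.deviations
                rad = sum(abs(d) for d in devs)
                out = AffineForm(c * c, [2.0 * c * d for d in devs] + [rad * rad])
                return out.condense()

        x = AffineForm.from_interval(1.0, 2.0)
        print(x.square().interval())   # encloses the true range [1, 4]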

    Privacy-Preserving Shortest Path Computation

    Navigation is one of the most popular cloud computing services. But in virtually all cloud-based navigation systems, the client must reveal her location and destination to the cloud service provider in order to learn the fastest route. In this work, we present a cryptographic protocol for navigation on city streets that provides privacy for both the client's location and the service provider's routing data. Our key ingredient is a novel method for compressing the next-hop routing matrices in networks such as city street maps. Applying our compression method to the map of Los Angeles, for example, we achieve a more than tenfold reduction in representation size. In conjunction with other cryptographic techniques, this compressed representation results in an efficient protocol suitable for fully private real-time navigation on city streets. We demonstrate the practicality of our protocol by benchmarking it on real street map data for major cities such as San Francisco and Washington, D.C. Comment: Extended version of NDSS 2016 paper
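
    To make the "next-hop routing matrix" concrete (this sketch contains no cryptography and is not the paper's compression scheme; the graph and names are ours), here is the matrix a routing service could precompute with Floyd-Warshall, shown on a tiny graph whose rows are highly redundant and therefore easy to compress:

        import math

        def next_hop_matrix(n, edges):
            """edges: dict {(u, v): weight} for a directed graph on nodes 0..n-1."""
            dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
            nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
            for (u, v), w in edges.items():
                dist[u][v] = w
                nxt[u][v] = v
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if dist[i][k] + dist[k][j] < dist[i][j]:
                            dist[i][j] = dist[i][k] + dist[k][j]
                            nxt[i][j] = nxt[i][k]   # first hop on the shorter route
            return nxt

        # Path graph 0 - 1 - 2 - 3: from node 0 the next hop towards every other
        # node is the same (node 1), so the row compresses well.
        edges = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1, (2, 3): 1, (3, 2): 1}
        print(next_hop_matrix(4, edges)[0])   # [0, 1, 1, 1]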

    Certified Roundoff Error Bounds using Bernstein Expansions and Sparse Krivine-Stengle Representations

    Floating-point error is an inevitable drawback of embedded systems implementation. Computing rigorous upper bounds on roundoff errors is absolutely necessary for the validation of critical software. The problem is even more challenging for non-linear programs. In this paper, we propose and compare two new methods, based on Bernstein expansions and sparse Krivine-Stengle representations and adapted from the field of global optimization, to compute upper bounds on the roundoff errors of programs implementing polynomial functions. We release two related software packages, FPBern and FPKiSten, and compare them with state-of-the-art tools. We show that the two methods achieve competitive performance while computing upper bounds that are accurate in comparison with the other tools. Comment: 20 pages, 2 tables
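
    For intuition about the Bernstein-expansion side (a hand-rolled sketch, not the released tools), the Bernstein coefficients of a polynomial on [0, 1] enclose its range, which is the basic bounding property such methods build on:

        from math import comb

        def bernstein_coeffs(a):
            """Power-basis coefficients a[j] of p(x) = sum_j a[j] x^j  ->  Bernstein coefficients."""
            n = len(a) - 1
            return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
                    for i in range(n + 1)]

        # p(x) = x^3 - 2x^2 + x  (power basis: [0, 1, -2, 1])
        b = bernstein_coeffs([0.0, 1.0, -2.0, 1.0])
        lo, hi = min(b), max(b)
        samples = [(t / 1000) ** 3 - 2 * (t / 1000) ** 2 + (t / 1000) for t in range(1001)]
        print(lo <= min(samples), max(samples) <= hi)   # True True: coefficients bound the range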

    Certification of Real Inequalities -- Templates and Sums of Squares

    We consider the problem of certifying lower bounds for real-valued multivariate transcendental functions. The functions we are dealing with are nonlinear and involve semialgebraic operations as well as some transcendental functions like cos, arctan, exp, etc. Our general framework is to use different approximation methods to relax the original problem into polynomial optimization problems, which we solve by sparse sums of squares relaxations. In particular, we combine the ideas of maxplus estimators (originally introduced in optimal control) and of linear templates (originally introduced in static analysis by abstract interpretation). The nonlinear templates control the complexity of the semialgebraic relaxations at the price of coarsening the maxplus approximations. In that way, we arrive at a new template-based certified global optimization method, which exploits both the precision of sums of squares relaxations and the scalability of abstraction methods. We analyze the performance of the method on problems from the global optimization literature, as well as on medium-size inequalities arising from the Flyspeck project. Comment: 27 pages, 3 figures, 4 tables
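
    As a purely numerical illustration of a maxplus lower estimator (the constant C below is our assumed bound on |arctan''|, and this spot check is not a certificate in the paper's sense): wherever |f''| <= C, Taylor's theorem gives f(x) >= f(xi) + f'(xi)(x - xi) - (C/2)(x - xi)^2 for each sample point xi, so the pointwise maximum of a few such concave quadratics under-approximates f.

        import math

        C = 0.65   # |arctan''(x)| = |2x| / (1 + x^2)^2 <= 3*sqrt(3)/8 < 0.65 on all of R

        def maxplus_lower(x, sample_points):
            return max(
                math.atan(xi) + (x - xi) / (1 + xi * xi) - 0.5 * C * (x - xi) ** 2
                for xi in sample_points
            )

        pts = [-2.0, -1.0, 0.0, 1.0, 2.0]
        ok = all(maxplus_lower(x, pts) <= math.atan(x) + 1e-12
                 for x in (i / 100 - 3 for i in range(601)))
        print(ok)   # True: the maxplus estimator stays below arctan on [-3, 3]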