
    On the Impact of Numerical Accuracy Optimization on General Performances of Programs

    The floating-point numbers used in computer programs are a finite approximation of real numbers. In practice, this approximation may introduce round-off errors, and these can lead to catastrophic results. In previous work, we proposed intraprocedural and interprocedural program transformations for numerical accuracy optimization. All of these transformations have been implemented in our tool, Salsa. Experimental results on various programs, coming either from embedded systems or from numerical methods, show the efficiency of the transformations in terms of numerical accuracy improvement, but also in terms of other criteria such as execution time and code size. This article studies the impact of program transformations for numerical accuracy, especially in embedded systems, on other efficiency parameters such as execution time, code size, and the accuracy of the other variables (those which are not chosen for optimization).
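    As a hedged illustration of the round-off problem that motivates such transformations (this is not Salsa's rewriting engine, just a toy example), the following Python snippet shows how simply reassociating a floating-point expression changes the computed result in single precision.

```python
# Minimal illustration (not Salsa itself): reassociating a floating-point
# expression changes its round-off error in single precision.
import numpy as np

big = np.float32(1.0e8)
small = np.float32(1.0)

# Left-to-right evaluation absorbs the small term: big + small == big in float32.
left_assoc = (big + small) - big   # evaluates to 0.0
# Reassociating keeps the small term.
reassoc = (big - big) + small      # evaluates to 1.0

print(left_assoc, reassoc)         # 0.0 1.0 in single precision
```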

    Scalable First-Order Methods for Robust MDPs

    Robust Markov Decision Processes (MDPs) are a powerful framework for modeling sequential decision-making problems with model uncertainty. This paper proposes the first first-order framework for solving robust MDPs. Our algorithm interleaves primal-dual first-order updates with approximate Value Iteration updates. By carefully controlling the tradeoff between the accuracy and cost of Value Iteration updates, we achieve an ergodic convergence rate of $O\left(A^{2} S^{3}\log(S)\log(\epsilon^{-1})\,\epsilon^{-1}\right)$ for the best choice of parameters on ellipsoidal and Kullback-Leibler $s$-rectangular uncertainty sets, where $S$ and $A$ are the numbers of states and actions, respectively. Our dependence on the number of states and actions is significantly better (by a factor of $O(A^{1.5}S^{1.5})$) than that of pure Value Iteration algorithms. In numerical experiments on ellipsoidal uncertainty sets we show that our algorithm is significantly more scalable than state-of-the-art approaches. Our framework is also the first one to solve robust MDPs with $s$-rectangular KL uncertainty sets.
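    The worst-case Bellman update underlying robust Value Iteration can be sketched as follows. This is a hedged simplification: it replaces the paper's ellipsoidal/KL $s$-rectangular sets and primal-dual first-order updates with a finite family of candidate transition kernels per state-action pair, purely to illustrate the adversarial minimization inside the update.

```python
# Hedged sketch of robust value iteration with a simplified uncertainty set:
# a finite family of candidate transition kernels per (s, a), standing in for
# the ellipsoidal or KL s-rectangular sets treated in the paper.
import numpy as np

def robust_value_iteration(P_candidates, R, gamma=0.9, tol=1e-8, max_iter=1000):
    """P_candidates: candidate kernels of shape (K, S, A, S); R: rewards (S, A)."""
    K, S, A, _ = P_candidates.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q-values under each candidate kernel, then the adversarial worst case over K.
        Q_all = R[None, :, :] + gamma * np.einsum('ksat,t->ksa', P_candidates, V)
        Q_robust = Q_all.min(axis=0)   # worst case over the uncertainty set
        V_new = Q_robust.max(axis=1)   # greedy over actions
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V

# Toy usage: 2 candidate kernels, 3 states, 2 actions.
rng = np.random.default_rng(0)
P = rng.random((2, 3, 2, 3)); P /= P.sum(axis=-1, keepdims=True)
R = rng.random((3, 2))
print(robust_value_iteration(P, R))
```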

    DC Proximal Newton for Non-Convex Optimization Problems

    We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm that is able to deal with such a situation. The algorithm consists in obtaining a descent direction from an approximation of the loss function and then performing a line search to ensure sufficient descent. A theoretical analysis is provided showing that the iterates of the proposed algorithm admit as limit points stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art for a problem with a convex loss function and a non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem where both the loss function and the regularizer are non-convex.
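    The flavor of the method can be sketched in Python as below. This is a hedged, simplified variant, not the paper's algorithm: the non-convex capped-L1 penalty is decomposed as a difference of convex functions, the concave part is linearized, a proximal step with a scalar curvature estimate stands in for the full Newton metric, and a backtracking line search enforces descent on the true objective.

```python
# Hedged DC proximal sketch: minimize 0.5*||Ax-b||^2 + lam*sum(min(|x_i|, theta)),
# writing the capped-L1 penalty as lam*||x||_1 - lam*sum(max(|x_i|-theta, 0)).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dc_prox_step_method(A, b, lam=0.1, theta=1.0, n_iter=100):
    n = A.shape[1]
    x = np.zeros(n)
    H = np.linalg.norm(A, 2) ** 2   # scalar curvature estimate (simplification)

    def F(x):
        return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.minimum(np.abs(x), theta))

    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        v = lam * np.sign(x) * (np.abs(x) > theta)   # subgradient of the concave part
        # Proximal step on the convex surrogate obtained after linearizing -g2.
        x_cand = soft_threshold(x - (grad - v) / H, lam / H)
        d = x_cand - x
        # Backtracking line search for sufficient descent on the true objective.
        t = 1.0
        while F(x + t * d) > F(x) - 1e-4 * t * np.dot(d, d) and t > 1e-8:
            t *= 0.5
        x = x + t * d
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)); b = rng.standard_normal(40)
print(dc_prox_step_method(A, b)[:5])
```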

    Regularized Nonlinear Acceleration

    We describe a convergence acceleration technique for unconstrained optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the base algorithm, providing improved estimates of the solution on the fly, while the original optimization method is running. Numerical experiments are detailed on classical classification problems.
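    A minimal sketch of the extrapolation step described in the abstract is given below; the precise normalization, regularization schedule, and online update are assumptions here, so consult the paper for the exact scheme.

```python
# Hedged sketch: combine the stored iterates of a base method with weights
# obtained from a small regularized linear system, then extrapolate.
import numpy as np

def regularized_nonlinear_acceleration(iterates, lam=1e-8):
    """iterates: list of k+2 iterates x_0, ..., x_{k+1} (1-D arrays)."""
    X = np.stack(iterates, axis=1)        # shape (d, k+2)
    R = X[:, 1:] - X[:, :-1]              # residual matrix, shape (d, k+1)
    RtR = R.T @ R
    RtR /= np.linalg.norm(RtR, 2)         # scale the system before regularizing (assumed choice)
    k1 = RtR.shape[0]
    z = np.linalg.solve(RtR + lam * np.eye(k1), np.ones(k1))
    c = z / z.sum()                       # weights summing to one
    return X[:, :-1] @ c                  # nonlinear average of the iterates

# Toy usage: accelerate plain gradient descent on a strongly convex quadratic.
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20)); Q = Q.T @ Q + np.eye(20)
b = rng.standard_normal(20)
x = np.zeros(20); step = 1.0 / np.linalg.norm(Q, 2)
iterates = [x.copy()]
for _ in range(10):
    x = x - step * (Q @ x - b)
    iterates.append(x.copy())
x_star = np.linalg.solve(Q, b)
print(np.linalg.norm(iterates[-1] - x_star),
      np.linalg.norm(regularized_nonlinear_acceleration(iterates) - x_star))
```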