
    On the accuracy of the estimated policy function using the Bellman contraction method

    In this paper we show that the approximation error of the optimal policy function in the stochastic dynamic programming problem, using the policies defined by the Bellman contraction method, is bounded above by a constant (which depends on the modulus of strong concavity of the one-period return function) times the square root of the value function approximation error. Since the Bellman operator is a contraction, it follows that we can control the approximation error of the policy function. This method for estimating the approximation error is robust under small numerical errors in the computation of the value and policy functions.
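    The value-iteration scheme behind the Bellman contraction can be sketched on a toy finite stochastic dynamic program (the model, sizes, and tolerances below are illustrative assumptions of mine, not from the paper):

```python
import numpy as np

# Toy finite stochastic DP: states s, actions a, one-period return r[s, a],
# transition probabilities P[a, s, s'].  All data here is made up.
np.random.seed(0)
nS, nA, beta = 5, 3, 0.9
r = np.random.rand(nS, nA)
P = np.random.rand(nA, nS, nS)
P /= P.sum(axis=2, keepdims=True)        # make each row a probability vector

V = np.zeros(nS)
for _ in range(500):
    # Bellman operator: (TV)(s) = max_a [ r(s,a) + beta * E[V(s') | s, a] ]
    Q = r + beta * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-10:  # sup-norm stopping rule
        V = V_new
        break
    V = V_new

# Greedy policy extracted from the approximate value function; the paper's
# bound controls its error via the square root of the value-function error.
policy = Q.argmax(axis=1)
```

    Because the Bellman operator is a beta-contraction in the sup norm, a Bellman residual below the tolerance eps guarantees a value-function error of at most eps / (1 - beta).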

    A weakly convergent fully inexact Douglas-Rachford method with relative error tolerance

    The Douglas-Rachford method is a splitting algorithm for finding a zero of the sum of two maximal monotone operators. Each of its iterations requires the sequential solution of two proximal subproblems. The aim of this work is to present a fully inexact version of the Douglas-Rachford method in which both proximal subproblems are solved approximately within a relative error tolerance. We also present a semi-inexact variant in which the first subproblem is solved exactly and the second one inexactly. We prove that both methods generate sequences that converge weakly to a solution of the underlying inclusion problem, if one exists.
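    The exact Douglas-Rachford iteration that the paper relaxes can be sketched in one dimension, for A the subdifferential of f(x) = |x| and B the gradient of g(x) = (x - 3)^2 / 2 (a toy problem of my own; the paper's point is that these proximal steps may be solved only approximately):

```python
# Douglas-Rachford splitting for 0 in A(x) + B(x).  Both prox operators have
# closed forms here, so this is the *exact* method; the paper studies the
# variant where the two proximal subproblems are solved inexactly.
def prox_f(z, t):
    """Prox of t*|.| : soft-thresholding."""
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def prox_g(z, t):
    """Prox of t*(. - 3)^2 / 2, in closed form."""
    return (z + 3.0 * t) / (1.0 + t)

t, z = 1.0, 0.0
for _ in range(200):
    x = prox_f(z, t)              # first proximal subproblem
    y = prox_g(2.0 * x - z, t)    # second proximal subproblem
    z = z + y - x                 # governing-sequence update

# x converges to 2, the unique zero of sign(x) + (x - 3)
```

    The inexact variants replace the two prox evaluations with approximate solutions whose relative error is kept below a fixed tolerance.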

    A Truly Globally Convergent Newton-Type Method for the Monotone Nonlinear Complementarity Problem

    The Josephy-Newton method for solving a nonlinear complementarity problem consists of solving, possibly inexactly, a sequence of linear complementarity problems. Under appropriate regularity assumptions, this method is known to be locally (superlinearly) convergent. To enlarge the domain of convergence of the Newton method, some globalization strategy based on a chosen merit function is typically used. However, to ensure global convergence to a solution, some additional restrictive assumptions are needed. These assumptions imply boundedness of level sets of the merit function and often even (global) uniqueness of the solution. We present a new globalization strategy for monotone problems which is not based on any merit function. Our linesearch procedure utilizes the regularized Newton direction and the monotonicity structure of the problem to force global convergence by means of a (computationally explicit) projection step which reduces the distance to the solution set of the problem. The resulting algorithm is truly globally convergent in the sense that the subproblems are always solvable and the whole sequence of iterates converges to a solution of the problem without any regularity assumptions. In fact, the solution set can even be unbounded. Each iteration of the new method has the same order of computational cost as an iteration of the damped Newton method. Under natural assumptions, the local superlinear rate of convergence is also achieved.
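    For context, a monotone complementarity problem asks for x >= 0 with F(x) >= 0 and x . F(x) = 0. The sketch below solves a tiny affine instance with a plain projected fixed-point iteration; it is not the paper's Newton-type method, but it illustrates the problem class and the kind of projection onto the feasible set that merit-free globalization relies on (M, q, and the step size are my own choices):

```python
import numpy as np

# Monotone (here symmetric positive definite) linear complementarity problem:
# find x >= 0 with F(x) = M x + q >= 0 and x . F(x) = 0.  The iteration is
# plain projected gradient on the equivalent quadratic program, used only to
# illustrate the problem class; it is NOT the paper's Newton-type method.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q

x, step = np.zeros(2), 0.3
for _ in range(500):
    x = np.maximum(0.0, x - step * F(x))  # project onto nonnegative orthant

# Solution: x = (1, 0) with F(x) = (0, 2); complementarity holds
```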

    Steepest descent methods for multicriteria optimization


    Optimal auction with a general distribution: Virtual valuation without densities

    We characterize the optimal auction in an independent private values framework for a completely general distribution of valuations. We do this by introducing a new concept: the generalized virtual valuation. To show the wider applicability of this concept, we present two examples showing how to extend the classical models of Mussa and Rosen and of Baron and Myerson to arbitrary distributions.

    Keywords: Optimal auction; Independent private values; Virtual valuation; Ironing
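    The classical density-based virtual valuation that the paper generalizes can be written down directly; for the uniform distribution on [0, 1] it reduces to psi(v) = 2v - 1 (the function names below are mine):

```python
def virtual_valuation(v, F, f):
    """Myerson's virtual valuation psi(v) = v - (1 - F(v)) / f(v),
    defined when the valuation distribution has a density f."""
    return v - (1.0 - F(v)) / f(v)

# Uniform[0,1] example: F(v) = v, f(v) = 1, so psi(v) = 2v - 1.
F = lambda v: v
f = lambda v: 1.0

# The optimal reserve price r solves psi(r) = 0, giving r = 1/2 here.
reserve = 0.5
```

    The generalized virtual valuation of the paper extends this object to distributions without a density, where the ratio (1 - F(v)) / f(v) is undefined.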