7,540 research outputs found

    Lagrangean decomposition for large-scale two-stage stochastic mixed 0-1 problems

    In this paper we study solution methods for the dual problem arising from the Lagrangean Decomposition of two-stage stochastic mixed 0-1 models. We represent the two-stage stochastic mixed 0-1 problem by a splitting-variable representation of the deterministic equivalent model, where 0-1 and continuous variables appear at any stage. Lagrangean Decomposition is proposed for satisfying both the integrality constraints on the 0-1 variables and the non-anticipativity constraints. We compare the performance of four iterative algorithms based on dual Lagrangean Decomposition schemes, namely the Subgradient method, the Volume algorithm, the Progressive Hedging algorithm and the Dynamic Constrained Cutting Plane scheme. We test the conditions and properties of convergence on medium- and large-scale stochastic problems. Computational results are reported.
    Keywords: Progressive Hedging algorithm, Volume algorithm, Lagrangean decomposition, Subgradient method
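
    As a rough illustration of how such a dual scheme operates, the sketch below applies a plain projected-subgradient update to the Lagrangean dual obtained by relaxing the non-anticipativity constraints of a splitting-variable formulation. The scenario oracle `solve_scenario_subproblem`, its signature and the diminishing step-size rule are assumptions made for illustration, not the paper's implementation.

    ```python
    # Minimal sketch, assuming a per-scenario oracle: given scenario index s and its
    # multiplier vector mu[s], it solves the corresponding mixed 0-1 subproblem and
    # returns (x_s, value), where x_s is that scenario's copy of the 0-1 variables.
    import numpy as np

    def subgradient_dual(solve_scenario_subproblem, probs, n_vars,
                         max_iters=200, step0=1.0):
        """Projected-subgradient ascent on the Lagrangean dual of non-anticipativity."""
        n_scen = len(probs)
        mu = np.zeros((n_scen, n_vars))    # one multiplier vector per scenario copy
        best_bound = -np.inf

        for k in range(max_iters):
            # Dual function evaluation: one mixed 0-1 subproblem per scenario.
            results = [solve_scenario_subproblem(s, mu[s]) for s in range(n_scen)]
            xs = np.array([x for x, _ in results], dtype=float)
            dual_value = sum(p * v for p, (_, v) in zip(probs, results))
            best_bound = max(best_bound, dual_value)

            # Subgradient = non-anticipativity residual x_s - x_bar, where x_bar is
            # the probability-weighted average of the scenario copies; the weighted
            # mean of the residual is zero, so the multipliers stay dual-feasible.
            x_bar = np.average(xs, axis=0, weights=probs)
            g = xs - x_bar
            if np.linalg.norm(g) < 1e-8:
                break                      # non-anticipativity already satisfied

            mu += (step0 / (k + 1)) * g    # diminishing step size

        return mu, best_bound
    ```

    The Volume and Progressive Hedging schemes compared in the paper can be read as variants of this loop that average or damp the multiplier update differently; the per-scenario mixed 0-1 subproblem is the common building block.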

    Price decomposition in large-scale stochastic optimal control

    We are interested in optimally driving a dynamical system that can be influenced by exogenous noises. This is generally called a Stochastic Optimal Control (SOC) problem, and the Dynamic Programming (DP) principle is the natural way of solving it. Unfortunately, DP faces the so-called curse of dimensionality: the complexity of solving DP equations grows exponentially with the dimension of the information variable that is sufficient to take optimal decisions (the state variable). For a large class of SOC problems, which includes important practical problems, we propose an original way of obtaining strategies to drive the system. The algorithm we introduce is based on Lagrangian relaxation, whose application to decomposition is well known in the deterministic framework. However, its application to such closed-loop problems is not straightforward, and an additional statistical approximation concerning the dual process is needed. We give a convergence proof that derives directly from classical results on duality in optimization, and we shed light on the error introduced by our approximation. Numerical results on a large-scale SOC problem are also provided. This idea extends the original DADP algorithm that was presented by Barty, Carpentier and Girardeau (2010).
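
    To make the price-decomposition idea concrete, here is a minimal sketch of an Uzawa-type outer loop in the spirit of DADP, where the multiplier (price) process is approximated by a function of a small information variable. The oracles `solve_subsystem_by_dp` and `simulate`, the discretization of the information variable into bins and the fixed step `rho` are illustrative assumptions, not the authors' published algorithm.

    ```python
    # Minimal sketch, assuming two hypothetical oracles:
    #   solve_subsystem_by_dp(i, lam) -> policy for subsystem i given price table lam
    #   simulate(policies)            -> (residual, y_bins), the coupling-constraint
    #                                    residual and the information-variable bin of
    #                                    each scenario at each time step.
    import numpy as np

    def price_decomposition(solve_subsystem_by_dp, simulate,
                            n_subsystems, horizon, n_y_bins,
                            max_iters=50, rho=0.5):
        # lam[t, y] stands in for the multiplier process lambda_t, approximated as a
        # function of an information variable y_t discretized into n_y_bins values.
        lam = np.zeros((horizon, n_y_bins))
        policies = None

        for _ in range(max_iters):
            # Decomposition step: each subsystem solves its own small-state Dynamic
            # Programming problem against the current price signal lam.
            policies = [solve_subsystem_by_dp(i, lam) for i in range(n_subsystems)]

            # Coordination step: simulate the policies on noise scenarios, then do
            # gradient ascent on the dual using the conditional expectation of the
            # coupling-constraint residual given the information variable.
            residual, y_bins = simulate(policies)   # shapes: (n_scenarios, horizon)
            for t in range(horizon):
                for y in range(n_y_bins):
                    mask = (y_bins[:, t] == y)
                    if mask.any():
                        lam[t, y] += rho * residual[mask, t].mean()

        return lam, policies
    ```

    Conditioning the price update on the information-variable bin is the statistical approximation of the dual process mentioned in the abstract: the exact multipliers would depend on the full noise history, which is what makes the closed-loop case harder than the deterministic one.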