
    Non-convex nested Benders decomposition

    We propose a new decomposition method to solve multistage non-convex mixed-integer (stochastic) nonlinear programming problems (MINLPs). We call this algorithm non-convex nested Benders decomposition (NC-NBD). NC-NBD is based on solving dynamically improved mixed-integer linear (MILP) outer approximations of the MINLP, obtained by piecewise linear relaxations of nonlinear functions. These MILPs are solved to global optimality using an enhancement of nested Benders decomposition in which regularization, dynamically refined binary approximations of the state variables, and Lagrangian cut techniques are combined to generate Lipschitz continuous non-convex approximations of the value functions. These approximations are then used to decide whether the approximating MILP has to be dynamically refined and to compute feasible solutions for the original MINLP. We prove that NC-NBD converges to an ε-optimal solution in a finite number of steps. We provide promising computational results for some unit commitment problems of moderate size.
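
    The central device in this abstract, dynamically improved piecewise linear relaxations, can be illustrated with a minimal sketch of adaptive breakpoint refinement for a univariate nonlinear term. The function, tolerance, and refinement rule below are illustrative assumptions, not the paper's NC-NBD procedure:

```python
import numpy as np

def refine_pwl(f, lo, hi, tol=1e-2, max_pts=50):
    """Adaptively add breakpoints where the piecewise linear
    interpolant of f deviates most from f (error sampled on a grid)."""
    xs = np.array([lo, hi], dtype=float)
    while len(xs) < max_pts:
        grid = np.linspace(lo, hi, 1001)
        err = np.abs(f(grid) - np.interp(grid, xs, f(xs)))
        k = int(np.argmax(err))
        if err[k] <= tol:
            break
        xs = np.sort(np.append(xs, grid[k]))
    return xs

# Example: refine breakpoints for the nonconvex term x*sin(x) on [0, 10].
breakpoints = refine_pwl(lambda x: x * np.sin(x), 0.0, 10.0)
print(len(breakpoints), "breakpoints")
```

    In an actual MILP outer approximation, the resulting breakpoints would then be encoded in the model, e.g. via SOS2 or binary variables.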

    Time and nodal decomposition with implicit non-anticipativity constraints in dynamic portfolio optimization

    We propose a decomposition method for the solution of a dynamic portfolio optimization problem which fits the formulation of a multistage stochastic programming problem. The method yields a time and nodal decomposition of the problem in its arborescent formulation by applying a discrete version of the Pontryagin Maximum Principle. The solutions of the decomposed subproblems are coordinated through a fixed-point weighted iterative scheme. The introduction of an optimization step in the choice of the weights at each iteration allows the original problem to be solved very efficiently.
    Keywords: stochastic programming, discrete time optimal control problem, iterative scheme, portfolio optimization
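
    The coordination described here is, in essence, a weighted fixed-point iteration with an optimized relaxation weight. The following minimal sketch illustrates that idea with a generic map `T` and a crude search over candidate weights; it is not the paper's actual coordination of nodal subproblems:

```python
import numpy as np

def weighted_fixed_point(T, x0, weights=np.linspace(0.05, 1.0, 20),
                         tol=1e-8, max_iter=500):
    """Iterate x <- (1-w)*x + w*T(x), picking at each step the weight w
    that most reduces the residual ||T(x_new) - x_new||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        tx = T(x)
        if np.linalg.norm(tx - x) < tol:
            break
        # Optimization step over the weight: try candidates, keep the best.
        candidates = [(1 - w) * x + w * tx for w in weights]
        x = min(candidates, key=lambda y: np.linalg.norm(T(y) - y))
    return x

# Toy usage: the map T(x) = cos(x) has a unique fixed point near 0.739.
print(weighted_fixed_point(np.cos, x0=np.array([0.0])))
```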

    Combining stochastic programming and optimal control to solve multistage stochastic optimization problems

    In this contribution we propose an approach to solve a multistage stochastic programming problem which allows us to obtain a time and nodal decomposition of the original problem. This double decomposition is achieved by applying a discrete time optimal control formulation to the original stochastic programming problem in arborescent form. Combining the arborescent formulation of the problem with the point of view of optimal control theory naturally yields, as a first result, the time decomposability of the optimality conditions, which can be organized according to the terminology and structure of a discrete time optimal control problem: the systems of equations for the state and adjoint variable dynamics and the optimality conditions for the generalized Hamiltonian. Moreover, due to the arborescent formulation of the stochastic programming problem, these conditions further decompose with respect to the nodes in the event tree. The optimal solution is obtained by solving the small decomposed subproblems and combining them using a mean-valued fixed-point iterative scheme. To enhance convergence we suggest an optimization step in which the weights are chosen optimally at each iteration.
    Keywords: stochastic programming, discrete time control problem, decomposition methods, iterative scheme
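
    The forward state / backward adjoint / Hamiltonian structure mentioned above can be sketched for a single node with a toy scalar linear-quadratic problem. The dynamics, cost weights, and relaxation factor below are illustrative assumptions, not the paper's nodal subproblems:

```python
import numpy as np

# Toy discrete-time optimal control problem (one node of the tree):
#   minimize sum_t 0.5*(q*x_t**2 + r*u_t**2)  subject to  x_{t+1} = a*x_t + b*u_t
a, b, q, r = 0.9, 0.2, 1.0, 1.0
T, x0, w = 20, 5.0, 0.2           # horizon, initial state, relaxation weight

u = np.zeros(T)                    # initial control guess
for _ in range(300):               # forward-backward sweep iteration
    x = np.empty(T + 1); x[0] = x0
    for t in range(T):             # forward pass: state dynamics
        x[t + 1] = a * x[t] + b * u[t]
    lam = np.empty(T + 1); lam[T] = q * x[T]
    for t in reversed(range(T)):   # backward pass: adjoint dynamics
        lam[t] = q * x[t] + a * lam[t + 1]
    # Hamiltonian stationarity r*u_t + b*lam_{t+1} = 0, applied as a relaxed update.
    u = (1 - w) * u + w * (-b * lam[1:] / r)

print("first controls:", u[:3], " terminal state:", x[T])
```

    The relaxed update of the controls plays the role of the fixed-point step; in the paper's setting the weights are chosen optimally at each iteration rather than kept constant.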

    From Uncertainty Data to Robust Policies for Temporal Logic Planning

    We consider the problem of synthesizing robust disturbance feedback policies for systems performing complex tasks. We formulate the tasks as linear temporal logic specifications and encode them into an optimization framework via mixed-integer constraints. Both the system dynamics and the specifications are known but affected by uncertainty. The distribution of the uncertainty is unknown; however, realizations can be obtained. We introduce a data-driven approach in which the constraints are fulfilled for a set of realizations, and we provide probabilistic generalization guarantees as a function of the number of considered realizations. We use separate chance constraints for the satisfaction of the specification and of the operational constraints, which allows us to quantify their violation probabilities independently. We compute disturbance feedback policies as solutions of mixed-integer linear or quadratic optimization problems. By using feedback we can exploit information from past realizations and provide feasibility for a wider range of situations than static input sequences. We demonstrate the proposed method on two robust motion-planning case studies for autonomous driving.
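
    One standard way to relate the number of sampled realizations to a probabilistic generalization guarantee is the scenario-approach sample-size bound. The sketch below uses the classic sufficient bound for convex scenario programs, which may differ from the exact guarantee derived in this paper, and the parameter values are purely illustrative:

```python
import math

def scenario_sample_size(eps, beta, d):
    """Sufficient number of i.i.d. uncertainty realizations N such that, with
    confidence 1 - beta, the scenario solution of a convex program with d
    decision variables violates the chance constraint with probability <= eps
    (classic sufficient bound N >= (2/eps) * (ln(1/beta) + d))."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

# Example: a disturbance feedback policy with 30 free parameters,
# 5% allowed violation probability, confidence parameter beta = 1e-6.
print(scenario_sample_size(eps=0.05, beta=1e-6, d=30))
```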