
    Constant Depth Decision Rules for multistage optimization under uncertainty

    In this paper, we introduce a new class of decision rules, referred to as Constant Depth Decision Rules (CDDRs), for multistage optimization under linear constraints with uncertainty-affected right-hand sides. We consider two uncertainty classes: discrete uncertainties, which at each stage can take at most a fixed number d of different values, and polytopic uncertainties, which at each stage are elements of the convex hull of at most d points. Given the depth μ of the decision rule, the decision at stage t is expressed as the sum of t functions of μ consecutive values of the underlying uncertain parameters. These functions are arbitrary in the case of discrete uncertainties and poly-affine in the case of polytopic uncertainties. For these uncertainty classes, we show that when the uncertain right-hand sides of the constraints of the multistage problem have the same additive structure as the decision rules, these constraints can be reformulated as a system of linear inequality constraints in which the numbers of variables and constraints are O((n+m)d^μ N²), with n the maximal dimension of the control variables, m the maximal number of inequality constraints at each stage, and N the number of stages. As an illustration, we discuss an application of the proposed approach to a Multistage Stochastic Program arising in hydro-thermal production planning with interstage-dependent inflows. For problems with a small number of stages, we present the results of a numerical study in which optimal CDDRs show performance similar, in terms of optimization objective, to that of Stochastic Dual Dynamic Programming (SDDP) policies, often at much smaller computational cost.
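The additive structure described above can be sketched in a few lines. The following is an illustrative toy for the discrete-uncertainty case only, not the paper's implementation: the stage-t decision is a sum of t lookup-table functions, each reading the μ most recent uncertainty values. The `tables` structure and the left-padding convention before stage 1 are assumptions made for the demo.

```python
def evaluate_cddr(tables, xi, t, mu):
    """Return sum_{s=1}^{t} f_s(xi_{s-mu+1}, ..., xi_s) for scenario xi.

    tables[s] maps a length-mu tuple of uncertainty values (left-padded
    with None before stage 1) to that function's real-valued contribution.
    """
    decision = 0.0
    for s in range(1, t + 1):
        window = tuple(xi[max(0, s - mu):s])            # mu most recent values
        window = (None,) * (mu - len(window)) + window  # pad before stage 1
        decision += tables[s][window]                   # arbitrary function f_s
    return decision
```

With d discrete values per stage, each table has at most d^μ entries, which is where the d^μ factor in the reformulation size comes from.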

    Hydropower Aggregation by Spatial Decomposition – an SDDP Approach

    The balance between detailed technical description, representation of uncertainty, and computational complexity is central in long-term scheduling models applied to hydro-dominated power systems. The aggregation of complex hydropower systems into equivalent energy representations (EERs) is a commonly used technique to reduce dimensionality and computation time in scheduling models. This work presents a method for coordinating the EERs with their detailed hydropower system representation within a model based on stochastic dual dynamic programming (SDDP). SDDP is applied to an EER representation of the hydropower system, where feasibility cuts derived from optimization of the detailed hydropower system are used to constrain the flexibility of the EERs. These cuts can be computed either before or during the execution of the SDDP algorithm and allow system details to be captured within the SDDP strategies without compromising the convergence rate and state-space dimensionality. Results in terms of computational performance and system operation are reported for a test system comprising realistic hydropower watercourses.
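The role of a feasibility cut here can be illustrated with a deliberately simple toy, using assumed names rather than the paper's model: a cut that keeps the aggregate release E of an equivalent energy reservoir disaggregatable to individual plants whose releases are bounded by capacities.

```python
def eer_feasibility_cut(plant_caps):
    """The detailed problem  sum_i r_i = E,  0 <= r_i <= cap_i  is feasible
    iff 0 <= E <= sum(cap_i); return that upper bound as a cut on the EER."""
    return sum(plant_caps)

def disaggregate(E, plant_caps):
    """Greedily split a feasible aggregate release E onto individual plants."""
    releases = []
    for cap in plant_caps:
        r = min(cap, E)   # fill each plant up to its capacity
        releases.append(r)
        E -= r
    return releases
```

In the paper's setting the cuts come from duals of the detailed optimization problem rather than a closed-form bound, but the effect is the same: the aggregate model is constrained so its decisions remain implementable in the detailed system.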

    Risk neutral and risk averse stochastic optimization

    In this thesis, we focus on the modeling, computational methods, and applications of multistage and single-stage stochastic optimization, which entail risk aversion under certain circumstances. Chapters 2-4 concentrate on multistage stochastic programming, while Chapters 5-6 deal with a class of single-stage functional constrained stochastic optimization problems. First, we investigate deterministic upper bounds for a Multistage Stochastic Linear Program (MSLP). We present the Dual SDDP algorithm, which solves the Dynamic Programming equations for the dual and computes a sequence of nonincreasing deterministic upper bounds on the optimal value of the problem, even in the absence of the Relatively Complete Recourse (RCR) condition. We show that optimal dual solutions can be obtained using Primal SDDP when computing the duals of the subproblems in the backward pass. As a byproduct, we study the sensitivity of the optimal value as a function of the problem parameters; in particular, we provide formulas for the derivatives of the value function with respect to the parameters and illustrate their application on an inventory problem. Next, we extend these results to the infinite-horizon MSLP and show how to construct a deterministic upper (dual) bound via the proposed Periodical Dual SDDP. Finally, as a proof of concept of the developed tools, we present numerical results on (1) the sensitivity of the optimal value as a function of the demand process parameters and (2) Dual SDDP applied to the inventory and Brazilian hydro-thermal planning problems under both finite-horizon and infinite-horizon settings. Third, we discuss the sample complexity of solving stationary stochastic programs by the Sample Average Approximation (SAA) method, in a discrete-time Stochastic Optimal Control framework. In particular, we derive Central Limit Theorem-type asymptotics for the optimal values of the SAA problems.
The main conclusion is that the sample size required to attain a given relative error of the SAA solution is not sensitive to the discount factor, even when the discount factor is very close to one. We consider both risk neutral and risk averse settings, and the presented numerical experiments confirm the theoretical analysis. Fourth, we propose a novel projection-free method, referred to as the Level Conditional Gradient (LCG) method, for solving convex functional constrained optimization. Unlike the constraint-extrapolated conditional gradient type methods (CoexCG and CoexDurCG), LCG, as a primal method, does not assume the existence of an optimal dual solution, and thus improves the convergence rate of CoexCG/CoexDurCG by eliminating the dependence on the magnitude of the optimal dual solution. Similar to existing level-set methods, LCG uses an approximate Newton method to solve a root-finding problem. In each approximate Newton update, LCG calls a conditional gradient oracle (CGO) to solve a saddle point subproblem; the CGO developed herein employs easily computable lower and upper bounds on these saddle point problems. We establish the iteration complexity of the CGO for solving a general class of saddle point optimization problems. Using these results, we show that the overall iteration complexity of the proposed LCG method is $\mathcal{O}\left(\frac{1}{\epsilon^2}\log\frac{1}{\epsilon}\right)$ for finding an $\epsilon$-optimal and $\epsilon$-feasible solution of the considered problem. To the best of our knowledge, LCG is the first primal conditional gradient method for solving convex functional constrained optimization. For the nonconvex algorithms developed subsequently in this thesis, LCG can also serve as a subroutine or provide high-quality starting points that expedite the solution process.
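The projection-free primitive underlying conditional-gradient-type oracles can be sketched as a minimal Frank–Wolfe iteration over the probability simplex. The problem, step rule, and names below are illustrative only, not the thesis's exact subroutine:

```python
def frank_wolfe_simplex(grad, x0, iters=400):
    """Minimize a smooth convex function over the simplex without projections."""
    x = list(x0)
    n = len(x)
    for k in range(iters):
        g = grad(x)
        i_best = min(range(n), key=lambda i: g[i])  # linear minimization oracle:
                                                    # best simplex vertex
        gamma = 2.0 / (k + 2.0)                     # standard diminishing step
        x = [(1.0 - gamma) * xi for xi in x]        # convex combination keeps
        x[i_best] += gamma                          # x inside the simplex
    return x
```

The appeal of such methods, which LCG inherits, is that each iteration needs only a linear minimization over the feasible set (here: picking a vertex), never a projection.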
Last, to cope with nonconvex functional constrained optimization problems, we develop three approaches: the Level Exact Proximal Point (EPP-LCG) method, the Level Inexact Proximal Point (IPP-LCG) method, and the Direct Nonconvex Conditional Gradient (DNCG) method. The EPP-LCG and IPP-LCG methods utilize the proximal point framework and solve a series of convex subproblems; each subproblem is solved with the proposed LCG method, thus avoiding the effect of large Lagrange multipliers. We show that the iteration complexity of these algorithms is bounded by $\mathcal{O}\left(\frac{1}{\epsilon^3}\log\frac{1}{\epsilon}\right)$ for obtaining an (approximate) KKT point. However, proximal-point type methods have a triple-layer structure and may not be easily implementable. To alleviate this issue, we also propose the DNCG method, which is the first single-loop projection-free algorithm for solving nonconvex functional constrained problems in the literature. This algorithm provides a drastically simpler framework, as it contains only three updates per loop. We show that the iteration complexity for finding an $\epsilon$-Wolfe point is bounded by $\mathcal{O}(1/\epsilon^4)$. To the best of our knowledge, all these developments are new for projection-free methods in nonconvex optimization. We demonstrate the effectiveness of the proposed nonconvex projection-free methods on a portfolio selection problem and an intensity modulated radiation therapy treatment planning problem, and we compare the results with the LCG method proposed in Chapter \ref{chapter-noncvx}. The numerical study shows that all methods efficiently minimize risk while promoting sparsity, within short computational times on real-world, large-scale datasets.
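The proximal-point idea behind EPP-LCG/IPP-LCG can be shown on a 1-D toy, under the following assumptions (not the thesis's algorithm): each outer step solves the convex subproblem x_{k+1} = argmin_y f(y) + (ρ/2)(y − x_k)², here for the nonconvex f(y) = y⁴ − y² with ρ chosen large enough to convexify the subproblem. The real method would call LCG on each subproblem; here plain gradient descent stands in.

```python
def prox_step(x, rho=3.0, inner=200, lr=0.05):
    """Solve argmin_y y**4 - y**2 + (rho/2)*(y - x)**2 by gradient descent.
    With rho = 3 the subproblem Hessian 12*y**2 + 1 is positive, so it is convex."""
    y = x
    for _ in range(inner):
        g = 4.0 * y**3 - 2.0 * y + rho * (y - x)  # gradient of the subproblem
        y -= lr * g
    return y

def proximal_point(x0, outer=50):
    """Outer loop: a sequence of convex proximal subproblems."""
    x = x0
    for _ in range(outer):
        x = prox_step(x)
    return x
```

Starting from x0 = 0.3, the iterates converge to the local minimizer 1/sqrt(2) of f, illustrating how a nonconvex problem is reduced to a sequence of convex ones.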