    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Nonsmooth Optimization; Proceedings of an IIASA Workshop, March 28 - April 8, 1977

    Optimization, a central methodological tool of systems analysis, is used in many of IIASA's research areas, including the Energy Systems and Food and Agriculture Programs. IIASA's activity in the field of optimization is strongly connected with nonsmooth or nondifferentiable extremal problems, which consist of searching for constrained or unconstrained minima of functions that, due to their complicated internal structure, have no continuous derivatives. Particularly significant for these kinds of extremal problems in systems analysis is the strong link between nonsmooth or nondifferentiable optimization and the decomposition approach to large-scale programming. This volume contains the report of the IIASA workshop held from March 28 to April 8, 1977, entitled Nondifferentiable Optimization. However, the title was changed to Nonsmooth Optimization for publication of this volume, since we are concerned not only with optimization without derivatives, but also with problems having functions for which gradients exist almost everywhere but are not continuous, so that the usual gradient-based methods fail. Because of the small number of participants and the unusual length of the workshop, a substantial exchange of information was possible. As a result, details of the main developments in nonsmooth optimization are summarized in this volume, which might also serve as a guide for inexperienced users. Eight papers are presented: three on subgradient optimization, four on descent methods, and one on applicability. The report also includes a set of nonsmooth optimization test problems and a comprehensive bibliography.
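
    As a minimal illustration of the subgradient-optimization idea treated in these proceedings, the sketch below minimizes a nondifferentiable convex function, f(x) = ||Ax - b||_1, by stepping along any subgradient with diminishing step sizes and keeping the best iterate found. The code is a generic textbook sketch, not taken from the volume; the problem data, function names, and the 1/k step-size rule are illustrative assumptions.

        import numpy as np

        def subgradient_method(f, subgrad, x0, iters=500):
            # Diminishing-step subgradient iteration for a convex,
            # possibly nondifferentiable objective f (illustrative sketch).
            x = x0.copy()
            best_x, best_f = x.copy(), f(x)
            for k in range(1, iters + 1):
                g = subgrad(x)                # any subgradient of f at x
                x = x - (1.0 / k) * g         # diminishing step size 1/k
                if f(x) < best_f:             # subgradient steps need not decrease f,
                    best_x, best_f = x.copy(), f(x)   # so track the best iterate seen
            return best_x, best_f

        # Example: f(x) = ||Ax - b||_1 with subgradient A^T sign(Ax - b).
        A = np.array([[2.0, 1.0], [1.0, 3.0], [0.5, -1.0]])
        b = np.array([1.0, 2.0, 0.0])
        f = lambda x: np.abs(A @ x - b).sum()
        subgrad = lambda x: A.T @ np.sign(A @ x - b)
        x_best, f_best = subgradient_method(f, subgrad, np.zeros(2))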

    Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods

    Non-smooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is to use proximal algorithms, also known as splitting methods. These first-order optimization algorithms are often simple, well suited to solving large-scale problems, and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first-order methods, the convergence rate is heavily dependent on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high-accuracy solution is sought. In this thesis, a few methods for alleviating this slow convergence are studied; they can be divided into two main approaches. The first consists of heuristic methods that can be applied to a range of fixed-point algorithms. They are based on an understanding of the typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees of improved convergence rates. The other approach studies the theoretical rates of a class of projection methods that are used to solve convex feasibility problems. These are problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.
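
    To make the fixed-point viewpoint concrete, the sketch below implements the classical method of alternating projections for a small convex feasibility problem: finding a point in the intersection of a ball and a halfspace. It is a generic illustration with assumed problem data and function names, not code from the thesis.

        import numpy as np

        def project_ball(x, center, radius):
            # Euclidean projection onto the ball ||x - center|| <= radius.
            d = x - center
            n = np.linalg.norm(d)
            return x if n <= radius else center + radius * d / n

        def project_halfspace(x, a, beta):
            # Euclidean projection onto the halfspace a^T x <= beta.
            viol = a @ x - beta
            return x if viol <= 0 else x - viol * a / (a @ a)

        def alternating_projections(x0, center, radius, a, beta, steps=200):
            # Iterate the composed projection operator; its fixed points lie in
            # the intersection of the two sets (when the intersection is nonempty).
            x = x0.copy()
            for _ in range(steps):
                x = project_ball(project_halfspace(x, a, beta), center, radius)
            return x

        # Unit ball centered at the origin intersected with the halfspace x + y <= 0.5.
        x_feas = alternating_projections(np.array([3.0, 3.0]),
                                         np.array([0.0, 0.0]), 1.0,
                                         np.array([1.0, 1.0]), 0.5)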

    Efficient and Flexible First-Order Optimization Algorithms

    Optimization problems occur in many areas of science and engineering. When the optimization problem at hand is large-scale, the computational cost of the optimization algorithm is a main concern. First-order optimization algorithms—in which updates are performed using only the gradient or a subgradient of the objective function—have low per-iteration computational cost, which makes them suitable for tackling large-scale optimization problems. Even though the per-iteration computational cost of these methods is reasonably low, the number of iterations needed for finding a solution—especially if medium or high accuracy is needed—can in practice be very high; as a result, the overall computational cost of using these methods would still be high. This thesis focuses on one of the most widely used first-order optimization algorithms, namely the forward–backward splitting algorithm, and attempts to improve its performance. To that end, this thesis proposes novel first-order optimization algorithms which are all built upon the forward–backward method. An important feature of the proposed methods is their flexibility. Using the flexibility of the proposed algorithms along with the notion of safeguarding, this thesis provides a framework through which many new and efficient optimization algorithms can be developed. To improve the efficiency of the forward–backward algorithm, two main approaches are taken in this thesis. In the first one, a technique is proposed to adjust the point at which the forward–backward operator is evaluated. This is done by including additive terms—which are called deviations—in the input argument of the forward–backward operator. In order to obtain a convergent algorithm, the deviations have to satisfy a safeguard condition at each iteration. Incorporating deviations provides great flexibility to the algorithm and paves the way for designing new and improved forward–backward-based methods. A few instances of employing this flexibility to derive new algorithms are presented in the thesis. In the second proposed approach, a globally (and potentially slowly) convergent algorithm can be combined with a fast and locally convergent one to form an efficient optimization scheme. The role of the globally convergent method is to ensure convergence of the overall scheme. The fast local algorithm's role is to speed up convergence; this is done by switching from the globally convergent algorithm to the local one whenever it is safe, i.e., when a safeguard condition is satisfied. This approach, which allows for combining different global and local algorithms within its framework, can result in fast and globally convergent optimization schemes.
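
    For context, the sketch below shows the plain forward–backward (proximal-gradient) iteration that serves as the starting point here, applied to a lasso-type problem min 0.5||Ax - b||^2 + lam*||x||_1. The problem data and step-size choice are assumed for illustration, and the deviation and safeguarding machinery proposed in the thesis is not reproduced.

        import numpy as np

        def soft_threshold(v, t):
            # Proximal operator of t * ||.||_1 (soft thresholding).
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def forward_backward(A, b, lam, iters=300):
            # Forward (gradient) step on the smooth term 0.5*||Ax - b||^2,
            # followed by a backward (proximal) step on lam * ||x||_1.
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            step = 1.0 / L
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                grad = A.T @ (A @ x - b)                           # forward step
                x = soft_threshold(x - step * grad, step * lam)    # backward step
            return x

        A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [2.0, 0.0, 1.0]])
        b = np.array([1.0, 2.0, 3.0])
        x_hat = forward_backward(A, b, lam=0.1)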

    Multistage quadratic stochastic programming

    Multistage stochastic programming is an important tool in medium- to long-term planning where there are uncertainties in the data. In this thesis, we consider a special case of multistage stochastic programming in which each subprogram is a convex quadratic program. The results are also applicable if the quadratic objectives are replaced by convex piecewise quadratic functions. Convex piecewise quadratic functions have important applications in financial planning problems, as they can be used as very flexible risk measures. The resulting stochastic programming problems can be used as multi-period portfolio planning problems tailored to the needs of individual investors. Using techniques from convex analysis and sensitivity analysis, we show that each subproblem of a multistage quadratic stochastic program is a polyhedral piecewise quadratic program with a convex Lipschitz objective. The objective of any subproblem is differentiable with Lipschitz gradient if all its descendant problems have unique dual variables, which can be guaranteed if the linear independence constraint qualification is satisfied. Expressions for arbitrary elements of the subdifferential and the generalized Hessian at a point can be calculated from the quadratic pieces that are active at that point. Generalized Newton methods with linesearch are proposed for solving multistage quadratic stochastic programs. The algorithms converge globally. If the piecewise quadratic objective is differentiable and strictly convex at the solution, then convergence is also finite. A generalized Newton algorithm is implemented in Matlab. Numerical experiments have been carried out to demonstrate its effectiveness. The algorithm is tested on random data with 3, 4 and 5 stages with a maximum of 315 scenarios. The algorithm has also been successfully applied to two sets of test data from a capacity expansion problem and a portfolio management problem. Various strategies have been implemented to improve the efficiency of the proposed algorithm. We experimented with trust-region methods with different parameters, with using an advanced solution from a smaller version of the original problem, and with sorting the stochastic right-hand sides to encourage faster convergence. The numerical results show that the proposed generalized Newton method is a highly accurate and effective method for multistage quadratic stochastic programs. For problems with the same number of stages, solution times increase linearly with the number of scenarios.
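
    The flavor of a generalized Newton step with linesearch on a convex piecewise quadratic objective can be conveyed by a one-dimensional toy example: the gradient and a generalized Hessian element are read off a quadratic piece that is active at the current iterate. The objective, constants, and tolerances below are illustrative assumptions and are unrelated to the multistage programs treated in the thesis.

        # Toy convex piecewise quadratic: f(x) = max_i (0.5*a_i*x^2 + b_i*x + c_i).
        pieces = [(1.0, -2.0, 0.0), (3.0, 1.0, -1.0)]   # (a_i, b_i, c_i) with a_i > 0

        def f(x):
            return max(0.5 * a * x * x + b * x + c for a, b, c in pieces)

        def active_piece(x):
            # A quadratic piece attaining the maximum (i.e., active) at x.
            return max(pieces, key=lambda p: 0.5 * p[0] * x * x + p[1] * x + p[2])

        def generalized_newton(x, iters=20, beta=0.5, sigma=1e-4):
            # Newton direction from the active piece plus Armijo backtracking.
            for _ in range(iters):
                a, b, _ = active_piece(x)
                g, H = a * x + b, a     # subgradient element and generalized Hessian
                if abs(g) < 1e-12:
                    break
                d = -g / H
                t = 1.0
                while t > 1e-12 and f(x + t * d) > f(x) + sigma * t * g * d:
                    t *= beta           # backtracking linesearch
                x = x + t * d
            return x

        x_star = generalized_newton(5.0)   # approaches the kink near x = 0.30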

    Optimization and Applications

    Proceedings of a workshop devoted to optimization problems, their theory and solution, and above all their applications. The topics covered include the existence and stability of solutions; the design, analysis, development, and implementation of algorithms; and applications in mechanics, telecommunications, medicine, and operations research.

    Numerical Techniques for Stochastic Optimization

    This is a comprehensive and timely overview of the numerical techniques that have been developed to solve stochastic programming problems. After a brief introduction to the field, in which the emphasis is placed on modeling questions, the next few chapters lay out the challenges that must be met in this area. They also provide the background for the description of the computer implementations given in the third part of the book. Selected applications are described next. Some of these have directly motivated the development of the methods described in the earlier chapters. They include problems from facilities location, exploration investments, control of ecological systems, and energy distribution and generation. Test problems are collected in the last chapter. This is the first book devoted to this subject. It comprehensively covers all major advances in the field, both Western and Soviet. It is only because of recent developments in computer technology that we have now reached a point where our computing power matches the inherent size requirements faced in this area. The book demonstrates that a large class of stochastic programming problems is now within the range of our numerical capacities.