9 research outputs found

    An improved partial bundle method for linearly constrained minimax problems

    A derivative-free approximate gradient sampling algorithm for finite minimax problems

    Mathematical optimization is the process of minimizing (or maximizing) a function. An algorithm is used to optimize a function when the minimum cannot be found by hand, or when finding the minimum by hand is inefficient. The minimum of a function is a critical point and corresponds to a gradient (derivative) of 0. Thus, optimization algorithms commonly require gradient calculations. When gradient information of the objective function is unavailable, unreliable or ‘expensive’ in terms of computation time, a derivative-free optimization algorithm is ideal. As the name suggests, derivative-free optimization algorithms do not require gradient calculations. In this thesis, we present a derivative-free optimization algorithm for finite minimax problems. Structurally, a finite minimax problem minimizes the maximum taken over a finite set of functions. We focus on the finite minimax problem due to its frequent appearance in real-world applications. We present convergence results for a regular and a robust version of our algorithm, showing in both cases that either the function is unbounded below (the minimum is −∞) or we have found a critical point. Theoretical results are explored for stopping conditions. Additionally, theoretical and numerical results are presented for three examples of approximate gradients that can be used in our algorithm: the simplex gradient, the centered simplex gradient and the Gupal estimate of the gradient of the Steklov averaged function. A performance comparison is made between the regular and robust algorithm, the three approximate gradients, and the regular and robust stopping conditions. Finally, an application in seismic retrofitting is discussed.
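
    The approximate gradients named above are built purely from function values. As a concrete illustration (not the thesis's code; the toy objective, step size and sampling radius below are invented), here is a minimal Python sketch of a centered simplex gradient applied to a finite minimax objective f(x) = max_i f_i(x); with the coordinate-aligned sample set used here, the centered simplex gradient coincides with central finite differences.

        import numpy as np

        def finite_max(fs, x):
            """Finite minimax objective: f(x) = max_i f_i(x)."""
            return max(fi(x) for fi in fs)

        def centered_simplex_gradient(f, x, h=1e-4):
            """Centered simplex gradient of f at x over the sample set
            {x + h*e_i, x - h*e_i}; for this coordinate-aligned choice it
            reduces to central finite differences."""
            n = len(x)
            g = np.zeros(n)
            for i in range(n):
                e = np.zeros(n)
                e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
            return g

        # Toy finite minimax problem with two smooth component functions.
        fs = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
              lambda x: (x[0] + 1.0) ** 2 + 0.5 * x[1] ** 2]
        f = lambda x: finite_max(fs, x)

        # Crude descent loop driven only by the approximate gradient
        # (illustrative only; the thesis's algorithm and stopping tests differ).
        x = np.array([2.0, 2.0])
        for _ in range(200):
            g = centered_simplex_gradient(f, x)
            if np.linalg.norm(g) < 1e-6:
                break
            x = x - 0.1 * g
        print(x, f(x))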

    Quantum Speed-ups for Boolean Satisfiability and Derivative-Free Optimization

    In this thesis, we consider two important problems, Boolean satisfiability (SAT) and derivative-free optimization, in the context of large-scale quantum computers. In the first part, we survey well-known classical techniques for solving satisfiability. We compute the approximate time it would take to solve SAT instances using quantum techniques and compare it with the state-of-the-art classical heuristics employed annually in SAT competitions. In the second part of the thesis, we consider a few well-known classical algorithms for derivative-free optimization that are ubiquitously employed in engineering problems. We propose a quantum speed-up for these classical algorithms using techniques from the quantum minimum finding algorithm. In the third part of the thesis, we consider practical applications in the fields of bioinformatics, petroleum refining and civil engineering that involve solving either satisfiability or derivative-free optimization. We investigate whether using known quantum techniques to speed up these algorithms translates directly into a benefit for the industries that invest in technology to solve these problems. In the last section, we propose a few open problems that we feel are the immediate hurdles, from either an algorithmic or an architectural perspective, to obtaining a convincing speed-up for the practical problems considered.
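
    For scale on the speed-up proposed above (the thesis's concrete instance timings are not reproduced here), quantum minimum finding, commonly the Dürr–Høyer routine built on Grover search, locates the minimum of an unstructured list of N candidate values using

        T_{\mathrm{quantum}}(N) = O\!\left(\sqrt{N}\right) \quad \text{oracle queries, versus} \quad T_{\mathrm{classical}}(N) = \Theta(N)

    queries for any classical algorithm in the worst case; this quadratic query saving is the kind of speed-up the abstract refers to.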

    Randomized Algorithms for Nonconvex Nonsmooth Optimization

    Nonsmooth optimization problems arise in a variety of applications, including robust control, robust optimization, eigenvalue optimization, compressed sensing, and decomposition methods for large-scale or complex optimization problems. When convexity is present, such problems are relatively easy to solve. Optimization methods for convex nonsmooth optimization have been studied for decades; for example, bundle methods are a leading technique for convex nonsmooth minimization. However, these and other methods developed for solving convex problems are either inapplicable or can be inefficient when applied to nonconvex problems. The motivation of the work in this thesis is to design robust and efficient algorithms for solving nonsmooth optimization problems, particularly when nonconvexity is present.

    First, we propose an adaptive gradient sampling (AGS) algorithm, which is based on a recently developed technique known as the gradient sampling (GS) algorithm. Our AGS algorithm improves the computational efficiency of GS in critical ways. Then, we propose a BFGS gradient sampling (BFGS-GS) algorithm, which is a hybrid between a standard Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and the GS method. Our BFGS-GS algorithm is more efficient than our previously proposed AGS algorithm and also competitive with (and in some ways outperforms) other contemporary solvers for nonsmooth nonconvex optimization. Finally, we propose a few additional extensions of the GS framework: one in which we merge GS ideas with those from bundle methods, one in which we incorporate smoothing techniques in order to minimize potentially non-Lipschitz objective functions, and one in which we tailor GS methods to solving regularization problems. We describe all of the proposed algorithms in detail. In addition, for all of the algorithm variants, we prove global convergence guarantees under suitable assumptions. Moreover, we perform numerical experiments to illustrate the efficiency of our algorithms. The test problems considered in our experiments include academic test problems as well as practical problems that arise in applications of nonsmooth optimization.
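
    To make the gradient sampling (GS) idea referenced above concrete, the following is a small Python sketch of one basic GS-style step (not the thesis's AGS or BFGS-GS algorithms): sample gradients at randomly perturbed points near the iterate, take the minimum-norm element of their convex hull as a stabilized descent direction, and backtrack on the objective. The QP solver choice, sampling box, and test problem are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def min_norm_in_hull(G):
            """Minimum-norm element of the convex hull of the rows of G,
            via a small QP over the simplex (solved with SLSQP for brevity)."""
            m = G.shape[0]
            obj = lambda lam: 0.5 * np.dot(lam @ G, lam @ G)
            cons = ({'type': 'eq', 'fun': lambda lam: np.sum(lam) - 1.0},)
            res = minimize(obj, np.full(m, 1.0 / m), method='SLSQP',
                           bounds=[(0.0, 1.0)] * m, constraints=cons)
            return res.x @ G

        def gs_step(f, grad, x, eps=0.1, m=10, beta=1e-4):
            """One gradient-sampling-style step: gradients are sampled at
            points perturbed within an eps-box around x (the original method
            samples an eps-ball), and the negated minimum-norm hull element
            serves as the search direction."""
            n = len(x)
            pts = x + eps * (2.0 * np.random.rand(m, n) - 1.0)
            G = np.vstack([grad(x)] + [grad(p) for p in pts])
            g = min_norm_in_hull(G)
            if np.linalg.norm(g) < 1e-8:          # approximately stationary
                return x
            d = -g / np.linalg.norm(g)
            t = 1.0                               # Armijo-style backtracking
            while f(x + t * d) > f(x) - beta * t * np.linalg.norm(g) and t > 1e-12:
                t *= 0.5
            return x + t * d

        # Illustrative nonsmooth (here convex) objective: |x1| + 2|x2|.
        f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
        grad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

        x = np.array([3.0, -2.0])
        for _ in range(50):
            x = gs_step(f, grad, x)
        print(x, f(x))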

    Random Models in Nonlinear Optimization

    In recent years, there has been a tremendous increase in interest in applying techniques of deterministic optimization to stochastic settings, largely motivated by problems that come from machine learning domains. A natural question that arises in light of this interest is the extent to which iterative algorithms designed for deterministic (nonlinear, possibly non-convex) optimization must be modified in order to properly make use of inherently random information about a problem. This thesis is concerned with exactly this question, and adapts the model-based trust-region framework of derivative-free optimization (DFO) for use in situations where the objective function values, or the set of points selected by an algorithm for objective evaluation, are random.

    In the first part of this thesis, we consider an algorithmic framework called STORM (STochastic Optimization with Random Models), which, as an iterative method, is essentially identical to model-based trust-region methods for smooth DFO. However, by imposing fairly general probabilistic conditions, related to the concept of full linearity, on the objective function models and objective function estimates, we prove that iterates of algorithms in the STORM framework exhibit almost sure convergence to first-order stationary points for a broad class of unconstrained stochastic functions. We then show that algorithms in the STORM framework enjoy the canonical rate of convergence for unconstrained non-convex optimization. Throughout the thesis, examples are provided demonstrating how the mentioned probabilistic conditions might be satisfied through particular choices of model-building and function-value estimation.

    In the second part of the thesis, we consider a framework called manifold sampling, intended for unconstrained DFO problems where the objective is nonsmooth, but enough is known a priori about the structure of the nonsmoothness that one can classify a given queried point as belonging to a certain smooth manifold of the objective surface. We particularly examine the case of sums of absolute values of (non-convex) black-box functions. Although we assume in this work that the individual black-box functions can be deterministically evaluated, we consider a variant of manifold sampling wherein random queries are made in each iteration to enhance the algorithm's awareness of the diversity of manifolds in a neighborhood of the current iterate. We then combine the ideas of STORM and manifold sampling to yield a practical algorithm intended for non-convex ℓ1-regularized empirical risk minimization.
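
    Mechanically, the STORM framework described above is a trust-region loop in which both the model and the function-value estimates are random but sufficiently accurate with fixed probability. The sketch below is a schematic Python rendering of such a loop under simplifying assumptions (linear regression models, averaged noisy evaluations, a Cauchy-type step); it is not the thesis's algorithm, and the noise level, sample counts and test function are invented for illustration.

        import numpy as np

        def noisy_estimate(f, x, sigma=0.01, reps=5):
            """Averaged noisy evaluation, standing in for STORM's
            probabilistically accurate function-value estimates."""
            return np.mean([f(x) + sigma * np.random.randn() for _ in range(reps)])

        def regression_model(f, x, delta, n_samples=20, sigma=0.01):
            """Least-squares linear model c + g.(y - x) fitted to noisy samples
            inside the trust region; a stand-in for a (probabilistically)
            fully linear model."""
            n = len(x)
            Y = x + delta * (2.0 * np.random.rand(n_samples, n) - 1.0)
            F = np.array([f(y) + sigma * np.random.randn() for y in Y])
            A = np.hstack([np.ones((n_samples, 1)), Y - x])
            coef, *_ = np.linalg.lstsq(A, F, rcond=None)
            return coef[0], coef[1:]              # model value at x, model gradient

        def storm_like(f, x0, delta=1.0, iters=100, eta=0.1):
            x = np.array(x0, dtype=float)
            for _ in range(iters):
                c, g = regression_model(f, x, delta)
                if np.linalg.norm(g) < 1e-8:
                    break
                s = -delta * g / np.linalg.norm(g)        # Cauchy-type step
                predicted = delta * np.linalg.norm(g)     # model decrease
                actual = noisy_estimate(f, x) - noisy_estimate(f, x + s)
                if actual / predicted >= eta:             # successful step
                    x, delta = x + s, min(2.0 * delta, 10.0)
                else:                                     # shrink the region
                    delta *= 0.5
            return x

        # Smooth test function observed only through noisy evaluations.
        f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 2.0) ** 2
        print(storm_like(f, [5.0, 5.0]))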