
    EM algorithms without missing data

    Most problems in computational statistics involve optimization of an objective function such as a loglikelihood, a sum of squares, or a log posterior function. The EM algorithm is one of the most effective algorithms for maximization because it iteratively transfers maximization from a complex function to a simpler surrogate function. This theoretical perspective clarifies the operation of the EM algorithm and suggests novel generalizations. Besides simplifying maximization, optimization transfer usually leads to highly stable algorithms with well-understood local and global convergence properties. Although convergence can be excruciatingly slow, various devices exist for accelerating it. Beginning with the EM algorithm, we review in this paper several optimization transfer algorithms of substantial utility in medical statistics. Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/68889/2/10.1177_096228029700600104.pd
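    As a minimal sketch of the optimization-transfer idea (illustrative, not taken from the paper), the R code below minimizes the least-absolute-deviations objective f(theta) = sum_i |y_i - theta| by repeatedly minimizing a quadratic surrogate that majorizes it at the current iterate, using the bound |u| <= u^2/(2|u_k|) + |u_k|/2. Each surrogate is trivial to minimize (a weighted mean), which is exactly the transfer of work the paper describes.

        ## Optimization transfer (MM) sketch: minimize sum(abs(y - theta))
        set.seed(1)
        y <- rnorm(101, mean = 3)               # illustrative data
        theta <- mean(y)                        # starting value
        for (k in 1:100) {
          w <- 1 / pmax(abs(y - theta), 1e-8)   # surrogate weights; guard against 0
          theta_new <- sum(w * y) / sum(w)      # exact minimizer of the surrogate
          if (abs(theta_new - theta) < 1e-10) break
          theta <- theta_new
        }
        c(mm = theta, median = median(y))       # the two should agree closely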

    A survey on numerical methods for unconstrained optimization problems.

    by Chung Shun Shing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 158-170). Abstracts in English and Chinese. Contents:
    1. Introduction: background and historical development; practical problems from statistics, aerodynamics, the factory allocation problem, the parameter problem, chemical engineering, operational research, and economics; mathematical models for optimization problems; unconstrained optimization techniques (the direct differential-calculus method and iterative methods); main objectives of the thesis.
    2. Basic Concepts in Optimization of Smooth Functions: notation; different types of minimizer; necessary and sufficient conditions for optimality; quadratic functions; convex functions; existence, uniqueness, and stability of a minimum; types of convergence; minimization of functionals.
    3. Steepest Descent Method: line search and the Armijo rule; steplength control with quadratic and cubic polynomial models; safeguarding; a general line search strategy; advantages of the Armijo rule; convergence analysis.
    4. Iterative Methods Using Second Derivatives: Newton's method and its convergence, with and without steplength control; Greenstadt's method; the Marquardt-Levenberg method; the Fiacco-McCormick method; the Matthews-Davies method; a numerically stable modified Newton's method; the role of second-derivative methods.
    5. Multi-step Methods: the heavy ball method; conjugate gradient methods and their convergence; methods of variable metric and methods of conjugate directions; other approaches to constructing first-order methods.
    6. Quasi-Newton Methods: disadvantages of Newton's method; the general quasi-Newton framework and its convergence; properties of quasi-Newton methods; single-rank and double-rank algorithms; other applications; conclusion.
    7. Choice of Methods in Optimization Problems; conclusion.
    Bibliography.
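    As a concrete illustration of the Chapter 3 material, here is a minimal R sketch of steepest descent with Armijo backtracking on the standard Rosenbrock test function; it is illustrative, not code from the thesis.

        ## Steepest descent with the Armijo rule on the Rosenbrock function.
        f  <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
        gr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                             200 * (x[2] - x[1]^2))

        x <- c(-1.2, 1)                        # standard starting point
        for (k in 1:20000) {                   # steepest descent is slow here
          g <- gr(x)
          if (sqrt(sum(g^2)) < 1e-6) break     # gradient-norm stopping test
          fx <- f(x)
          t  <- 1
          ## Armijo condition: sufficient decrease along d = -g
          while (f(x - t * g) > fx - 1e-4 * t * sum(g^2)) t <- t / 2
          x <- x - t * g
        }
        x                                      # approaches the minimizer (1, 1)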

    Historical development of the BFGS secant method and its characterization properties

    The BFGS secant method is the preferred secant method for finite-dimensional unconstrained optimization. The first part of this research recounts the historical development of secant methods in general and of the BFGS secant method in particular. Many people believe that the secant method arose from Newton's method using finite difference approximations to the derivative. We compile historical evidence revealing that a special case of the secant method predated Newton's method by more than 3000 years. We trace the evolution of secant methods from 18th-century B.C. Babylonian clay tablets and the Egyptian Rhind Papyrus. Modifications to Newton's method yielding secant methods are discussed, and methods that we believe influenced and led to the construction of the BFGS secant method are explored. In the second part of our research, we examine the construction of several rank-two secant update classes that had not received much recognition in the literature. Our study of the underlying mathematical principles and characterizations inherent in these update classes led to theorems, with proofs, concerning secant updates. One class of symmetric rank-two updates that we investigate is the Dennis class. We demonstrate how it can be derived from the general rank-one update formula in a purely algebraic manner that does not use Powell's method of iterated projections, as Dennis's derivation did. The literature abounds with update classes; we show how some are related and establish containment where possible. We derive the general formula that can represent all symmetric rank-two secant updates, and from it present particular parameter choices yielding well-known updates and update classes. We include two derivations of the Davidon class and prove that it is a maximal class. We detail known characterization properties of the BFGS secant method and describe new characterizations of several secant update classes known to contain the BFGS update. Included is a formal proof of the conjecture made by Schnabel in his 1977 Ph.D. thesis that the BFGS update is, in some asymptotic sense, the average of the DFP update and the Greenstadt update.
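    For reference, the BFGS and DFP rank-two updates discussed above can be stated compactly in code. The R sketch below (illustrative, not from the dissertation) applies both updates to a Hessian approximation B and checks the secant equation B_new s = y, where s is the step and y the change in gradient.

        ## Rank-two secant updates of a Hessian approximation B.
        bfgs_update <- function(B, s, y) {
          Bs <- B %*% s
          B - (Bs %*% t(Bs)) / drop(t(s) %*% Bs) + (y %*% t(y)) / drop(t(y) %*% s)
        }
        dfp_update <- function(B, s, y) {   # DFP applied to B itself
          rho <- 1 / drop(t(y) %*% s)
          V   <- diag(length(s)) - rho * y %*% t(s)
          V %*% B %*% t(V) + rho * y %*% t(y)
        }

        ## Quick check of the secant equation on data with s'y > 0
        set.seed(2)
        B <- diag(3); s <- rnorm(3); y <- s + 0.1 * rnorm(3)
        max(abs(bfgs_update(B, s, y) %*% s - y))   # ~ 0 up to rounding
        max(abs(dfp_update(B, s, y) %*% s - y))    # ~ 0 up to rounding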

    Nonsmooth Optimization; Proceedings of an IIASA Workshop, March 28 - April 8, 1977

    Optimization, a central methodological tool of systems analysis, is used in many of IIASA's research areas, including the Energy Systems and Food and Agriculture Programs. IIASA's activity in the field of optimization is strongly connected with nonsmooth, or nondifferentiable, extremal problems, which consist of searching for conditional or unconditional minima of functions that, due to their complicated internal structure, have no continuous derivatives. Particularly significant for these kinds of extremal problems in systems analysis is the strong link between nonsmooth or nondifferentiable optimization and the decomposition approach to large-scale programming. This volume contains the report of the IIASA workshop held from March 28 to April 8, 1977, under the title Nondifferentiable Optimization. The title was changed to Nonsmooth Optimization for publication of this volume because we are concerned not only with optimization without derivatives, but also with problems whose functions have gradients that exist almost everywhere but are not continuous, so that the usual gradient-based methods fail. Because of the small number of participants and the unusual length of the workshop, a substantial exchange of information was possible. As a result, details of the main developments in nonsmooth optimization are summarized in this volume, which may also serve as a guide for inexperienced users. Eight papers are presented: three on subgradient optimization, four on descent methods, and one on applicability. The report also includes a set of nonsmooth optimization test problems and a comprehensive bibliography.
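    A minimal sketch of the subgradient approach treated in the volume (illustrative, with randomly generated problem data): the R code below minimizes a piecewise-linear convex function f(x) = max_i (a_i'x + b_i), whose gradient exists almost everywhere but is discontinuous, by taking subgradient steps with diminishing step sizes. Since the subgradient method is not a descent method, the best iterate seen so far is tracked.

        ## Subgradient method sketch for a nonsmooth convex function.
        set.seed(3)
        m <- 20; n <- 5
        A <- matrix(rnorm(m * n), m, n); b <- rnorm(m)    # illustrative data

        f <- function(x) max(A %*% x + b)
        subgrad <- function(x) A[which.max(A %*% x + b), ]  # row attaining the max

        x <- rep(0, n); fbest <- f(x); xbest <- x
        for (k in 1:5000) {
          x <- x - (1 / k) * subgrad(x)         # diminishing step size 1/k
          if (f(x) < fbest) { fbest <- f(x); xbest <- x }
        }
        fbest                                   # best objective value found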

    BB: An R Package for Solving a Large System of Nonlinear Equations and for Optimizing a High-Dimensional Nonlinear Objective Function

    We discuss the R package BB, in particular its capabilities for solving a nonlinear system of equations. The function BBsolve in BB can be used for this purpose. We demonstrate the utility of these functions for solving: (a) large systems of nonlinear equations, (b) smooth, nonlinear estimating equations in statistical modeling, and (c) non-smooth estimating equations arising in rank-based regression modeling of censored failure time data. The function BBoptim can be used to solve smooth, box-constrained optimization problems. A main strength of BB is that, due to its low memory and storage requirements, it is ideally suited for solving high-dimensional problems with thousands of variables.
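    A brief usage sketch, assuming the BB package is installed; the test problems are illustrative, not those of the paper.

        library(BB)

        ## (a) A small nonlinear system F(x) = 0 solved with BBsolve
        fsys <- function(x) {
          c(x[1]^2 + x[2]^2 - 2,
            exp(x[1] - 1) + x[2]^3 - 2)
        }
        BBsolve(par = c(2, 0.5), fn = fsys)$par   # should converge to (1, 1)

        ## (b) A smooth box-constrained problem solved with BBoptim
        rosext <- function(x) sum(100 * (x[-1] - x[-length(x)]^2)^2 +
                                  (1 - x[-length(x)])^2)
        BBoptim(par = rep(0, 50), fn = rosext,
                lower = rep(-10, 50), upper = rep(10, 50))$par[1:5]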

    Imposing Economic Constraints in Nonparametric Regression: Survey, Implementation and Extension

    Economic conditions such as convexity, homogeneity, homotheticity, and monotonicity are all important assumptions, or consequences of assumptions, about the economic functionals to be estimated. Recent research has seen a renewed interest in imposing such constraints in nonparametric regression. We survey the available methods in the literature, discuss the challenges that arise when implementing these methods empirically, and extend an existing method to handle general nonlinear constraints. A heuristic discussion of the empirical implementation of methods that use sequential quadratic programming is provided, and simulated and empirical evidence on the distinction between constrained and unconstrained nonparametric regression surfaces is presented.
    Keywords: identification, concavity, Hessian, constraint weighted bootstrapping, earnings function.
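    To make constraint imposition concrete, the R sketch below fits a monotonicity-constrained regression as a quadratic program using the quadprog package (assumed installed). This is only an illustration of solving a constrained fit by QP; it is not the constraint weighted bootstrapping estimator surveyed in the paper.

        ## Monotone fit: minimize ||y - m||^2 subject to m[i+1] - m[i] >= 0.
        ## solve.QP minimizes (1/2) b'Db - d'b subject to A'b >= b0.
        library(quadprog)

        set.seed(4)
        x <- sort(runif(50)); y <- x^2 + rnorm(50, sd = 0.1)  # illustrative data
        n <- length(y)

        D <- diag(n)                       # so the objective is (1/2)||m||^2 - y'm
        A <- matrix(0, n, n - 1)           # columns encode m[i+1] - m[i] >= 0
        for (i in 1:(n - 1)) { A[i, i] <- -1; A[i + 1, i] <- 1 }
        fit <- solve.QP(Dmat = D, dvec = y, Amat = A, bvec = rep(0, n - 1))

        all(diff(fit$solution) >= -1e-8)   # fitted values are monotone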

    Calculation of chemical and phase equilibria

    Bibliography: pages 167-169.
    The computation of chemical and phase equilibria is an essential aspect of chemical engineering design and development. Important applications range from flash calculations to distillation and pyrometallurgy. Despite the firm theoretical foundations on which the theory of chemical equilibrium rests, two major difficulties prevent the equilibrium state from being accurately determined. The first of these hindrances is the inaccuracy or total absence of pertinent thermodynamic data. The second is the complexity of the required calculation. It is the latter consideration that is the sole concern of this dissertation.
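    To indicate the kind of calculation meant, the toy R sketch below finds the equilibrium of the single ideal-gas reaction N2 + 3 H2 <=> 2 NH3 at 1 bar by minimizing the total Gibbs energy over the extent of reaction. The Gibbs energy of formation used is an illustrative round number, and this is a sketch of the general approach, not the dissertation's method or data.

        ## Toy equilibrium via Gibbs energy minimization over the extent xi.
        R_  <- 8.314           # gas constant, J/(mol K)
        Tk  <- 298.15          # temperature, K
        dGf <- -16400          # J/mol, illustrative Gibbs energy of formation of NH3

        G <- function(xi) {
          n   <- c(1 - xi, 3 - 3 * xi, 2 * xi)   # moles of N2, H2, NH3
          mu0 <- c(0, 0, dGf)                    # elements have mu0 = 0
          ## ideal-gas chemical potentials at P = 1 bar
          sum(n * (mu0 + R_ * Tk * log(n / sum(n))))
        }
        opt <- optimize(G, interval = c(1e-6, 1 - 1e-6))
        opt$minimum            # equilibrium extent of reaction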

    Interior point method for linear and convex optimizations.

    by Shiu-Tung Ng. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 100-103). Abstract also in Chinese. Contents:
    1. Preliminary: linear and convex optimization models; notation for linear optimization; definitions and properties of convexity; a useful theorem for unconstrained minimization.
    2. Linear Optimization: the self-dual linear optimization model; definitions and main theorems; self-dual embedding with a simple example; the Newton step; rescaling and the definition of the proximity measure δ(xs, w); an interior point method with full Newton steps and its iteration bound; background and a rounding procedure for interior-point solutions; solutions of some LP problems; remarks.
    3. Convex Optimization: the convex optimization problem and the idea of interior point methods; the logarithmic barrier method (basic concepts and properties, the k-self-concordance condition, a short-step algorithm, and an initialization algorithm); the center method (basic concepts and properties, a short-step algorithm, and an initialization algorithm); properties and examples of self-concordance; examples of convex optimization problems, including self-concordant logarithmic barrier and distance functions; remarks.
    Bibliography.
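    The flavor of the logarithmic barrier method in Chapter 3 can be sketched in a few lines of R (illustrative, not code from the thesis). The code follows the central path of a tiny linear program by minimizing the barrier function for an increasing sequence of parameters t; for brevity the centering steps use a general-purpose optimizer rather than the damped Newton steps a real implementation would take.

        ## Logarithmic barrier sketch for: minimize c'x subject to A x <= b.
        cvec <- c(-1, -2)                          # minimize -x1 - 2 x2
        A <- rbind(c(1, 1), c(-1, 0), c(0, -1))    # x1 + x2 <= 1, x >= 0
        b <- c(1, 0, 0)

        barrier <- function(x, t) {
          s <- b - A %*% x
          if (any(s <= 0)) return(Inf)             # outside the interior
          t * sum(cvec * x) - sum(log(s))
        }

        x <- c(0.25, 0.25)                         # strictly feasible start
        t <- 1
        for (outer in 1:30) {
          x <- optim(x, barrier, t = t, method = "Nelder-Mead")$par  # centering
          t <- 2 * t                               # duality gap shrinks like m/t
        }
        x                                          # approaches the LP solution (0, 1)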

    Dual methods and approximation concepts in structural synthesis

    Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that combining approximation concepts with dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
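    A toy instance of the dual approach, with made-up numbers rather than the paper's examples: for the separable problem of minimizing weight sum(rho_i * x_i) subject to a single flexibility constraint sum(c_i / x_i) <= d with x_i > 0, the inner minimization of the Lagrangian is explicit, x_i(lambda) = sqrt(lambda * c_i / rho_i), so the dual g(lambda) = 2 * sum(sqrt(lambda * rho_i * c_i)) - lambda * d is a concave function of one multiplier that is easily maximized, as the R sketch below shows.

        ## Explicit dual of a separable weight-minimization problem.
        rho <- c(1.0, 2.0); cc <- c(4.0, 1.0); d <- 3.0   # illustrative data

        g <- function(lam) 2 * sum(sqrt(lam * rho * cc)) - lam * d
        dual <- optimize(g, interval = c(1e-8, 100), maximum = TRUE)
        lam  <- dual$maximum
        x    <- sqrt(lam * cc / rho)                      # recover primal sizes
        c(x = x, constraint = sum(cc / x))                # constraint active at d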

    Mixed nonderivative algorithms for unconstrained optimization

    A general technique is developed for restarting nonderivative algorithms in unconstrained optimization. Application of the technique is shown to result in mixed algorithms that are considerably more robust than their component procedures. A general mixed algorithm is developed and its convergence is demonstrated. A uniform computational comparison is given for the new mixed algorithms and for a collection of procedures from the literature.
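    In the same spirit, though not the report's algorithm, a restart wrapper around a derivative-free optimizer can be written in a few lines of R: rerun the method from its own answer, which rebuilds a fresh simplex, and stop when a restart no longer improves the objective.

        ## Restarted Nelder-Mead sketch (illustrative).
        restarted_nm <- function(par, fn, tol = 1e-8, max_restarts = 20) {
          best <- optim(par, fn, method = "Nelder-Mead")
          for (r in 1:max_restarts) {
            trial <- optim(best$par, fn, method = "Nelder-Mead")  # fresh simplex
            if (best$value - trial$value < tol) break             # no real progress
            best <- trial
          }
          best
        }

        rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
        restarted_nm(c(-1.2, 1), rosen)$par   # close to (1, 1)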