4 research outputs found

    Convergence property of the Iri-Imai algorithm for some smooth convex programming problems

    In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The global linear convergence rate of this extended algorithm is proved under the condition that the objective and constraint functions satisfy a certain type of convexity, called harmonic convexity in this paper. A characterization of this convexity condition is given. The same convexity condition was used by Mehrotra and Sun to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming, whereas other known convergent interior-point algorithms for smooth convex programming are mainly based on the path-following approach.
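    To make the multiplicative-barrier idea behind the Iri-Imai method concrete, the sketch below minimizes the log of a potential of the form Φθ(x) = (f(x) − θ)^ν / ∏ g_i(x), over points where the constraints g_i(x) > 0 hold strictly, by damped Newton steps. This is only a minimal illustration under assumed ingredients (numerical derivatives, an Armijo backtracking line search, θ taken as the known optimal value, exponent ν = m + 1 by analogy with the LP case); it is not the paper's algorithm, and all function names are hypothetical.

        import numpy as np

        def log_potential(x, f, gs, theta, nu):
            # log of the multiplicative barrier Phi_theta(x) = (f(x) - theta)^nu / prod_i g_i(x);
            # the log form avoids overflow and turns the product into a sum
            gap = f(x) - theta
            slacks = np.array([g(x) for g in gs])
            if gap <= 0 or np.any(slacks <= 0):
                return np.inf                      # undefined outside the strict interior
            return nu * np.log(gap) - np.sum(np.log(slacks))

        def num_grad(phi, x, h=1e-6):
            g = np.zeros_like(x)
            for i in range(len(x)):
                e = np.zeros_like(x); e[i] = h
                g[i] = (phi(x + e) - phi(x - e)) / (2 * h)
            return g

        def num_hess(phi, x, h=1e-4):
            n = len(x); H = np.zeros((n, n))
            for i in range(n):
                e = np.zeros(n); e[i] = h
                H[:, i] = (num_grad(phi, x + e) - num_grad(phi, x - e)) / (2 * h)
            return 0.5 * (H + H.T)                 # symmetrize

        def iri_imai_sketch(f, gs, x0, theta, nu=None, iters=100, tol=1e-8):
            # damped Newton descent on the log-potential, started from a strictly feasible x0
            x = np.asarray(x0, dtype=float)
            if nu is None:
                nu = len(gs) + 1                   # exponent m + 1
            phi = lambda z: log_potential(z, f, gs, theta, nu)
            for _ in range(iters):
                g = num_grad(phi, x)
                H = num_hess(phi, x)
                d = np.linalg.solve(H + 1e-10 * np.eye(len(x)), -g)   # Newton direction
                t = 1.0
                while phi(x + t * d) > phi(x) + 1e-4 * t * (g @ d):   # Armijo backtracking
                    t *= 0.5
                    if t < 1e-14:
                        return x
                x = x + t * d
                if np.linalg.norm(t * d) < tol:
                    break
            return x

        # hypothetical example: minimize (x1-1)^2 + (x2-1)^2 over the simplex
        # {x >= 0, x1 + x2 <= 1}; the optimum is (0.5, 0.5) with value theta = 0.5
        f = lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2
        gs = [lambda x: x[0], lambda x: x[1], lambda x: 1 - x[0] - x[1]]
        print(iri_imai_sketch(f, gs, x0=[0.25, 0.25], theta=0.5))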

    On Polynomial-time Path-following Interior-point Methods with Local Superlinear Convergence

    Interior-point methods provide one of the most popular ways of solving convex optimization problems. Two advantages of modern interior-point methods over other approaches are: (1) robust global convergence, and (2) the ability to obtain high-accuracy solutions in theory (and in practice, provided the algorithms are properly implemented and numerical linear system solvers continue to deliver high accuracy) for well-posed problem instances. This second ability is typically demonstrated by asymptotic superlinear convergence properties. In this thesis, we study superlinear convergence properties of interior-point methods with proven polynomial iteration complexity. Our focus is on the special cases of linear programming and semidefinite programming. We provide a survey of interior-point methods with polynomial iteration complexity that also achieve asymptotic superlinear convergence. We analyze the elements of superlinear convergence proofs for a dual interior-point algorithm of Nesterov and Tunçel and a primal-dual interior-point algorithm of Mizuno, Todd and Ye. We present the results of our computational experiments, which observe and track superlinear convergence for a variant of Nesterov and Tunçel's algorithm.
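    As a point of reference for the primal-dual discussion, the following sketch implements a plain version of the Mizuno-Todd-Ye predictor-corrector pattern for a standard-form LP (min c^T x s.t. Ax = b, x >= 0), alternating an affine-scaling predictor step (sigma = 0) with a centering corrector step (sigma = 1). It assumes strictly feasible primal and dual starting points, solves the Newton system as one dense block matrix, and replaces the method's central-path neighborhood tests with a simple positivity safeguard, so it only illustrates the structure and is not the algorithm as analyzed in the thesis.

        import numpy as np

        def newton_step(A, x, s, sigma, mu):
            # Newton direction for the perturbed KKT system of the LP pair:
            #   A dx = 0,  A^T dy + ds = 0,  S dx + X ds = sigma*mu*e - XSe
            # (the first two residuals vanish because the iterates stay feasible)
            m, n = A.shape
            K = np.block([
                [A,                np.zeros((m, m)), np.zeros((m, n))],
                [np.zeros((n, n)), A.T,              np.eye(n)],
                [np.diag(s),       np.zeros((n, m)), np.diag(x)],
            ])
            rhs = np.concatenate([np.zeros(m), np.zeros(n), sigma * mu * np.ones(n) - x * s])
            d = np.linalg.solve(K, rhs)
            return d[:n], d[n:n + m], d[n + m:]

        def damp(x, s, dx, ds, shrink=0.9):
            # largest tried step length that keeps (x, s) strictly positive
            alpha = 1.0
            while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
                alpha *= shrink
            return shrink * alpha

        def mty_sketch(A, b, c, x, y, s, iters=30, tol=1e-10):
            # predictor-corrector loop; duality measure mu = x^T s / n drives termination
            n = len(x)
            for _ in range(iters):
                mu = x @ s / n
                if mu < tol:
                    break
                # predictor: affine-scaling direction (sigma = 0), damped to stay interior
                dx, dy, ds = newton_step(A, x, s, sigma=0.0, mu=mu)
                a = damp(x, s, dx, ds)
                x, y, s = x + a * dx, y + a * dy, s + a * ds
                # corrector: centering direction (sigma = 1); the genuine method takes a
                # full step inside its neighborhood, here we damp for safety instead
                mu = x @ s / n
                dx, dy, ds = newton_step(A, x, s, sigma=1.0, mu=mu)
                a = damp(x, s, dx, ds)
                x, y, s = x + a * dx, y + a * dy, s + a * ds
            return x, y, s

        # hypothetical example: min x1 + 2*x2 subject to x1 + x2 + x3 = 1, x >= 0
        A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0, 0.0])
        x0 = np.array([1/3, 1/3, 1/3]); y0 = np.array([-1.0]); s0 = c - A.T @ y0
        x_opt, _, _ = mty_sketch(A, b, c, x0, y0, s0)
        print(x_opt)            # approaches the optimal vertex (0, 0, 1)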

    A value estimation approach to Iri-Imai's method for constrained convex optimization.

    Lam Sze Wan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 93-95). Abstracts in English and Chinese.
    Contents:
    Chapter 1  Introduction --- p.1
    Chapter 2  Background --- p.4
    Chapter 3  Review of Iri-Imai Algorithm for Convex Programming Problems --- p.10
        3.1  Iri-Imai Algorithm for Convex Programming --- p.11
        3.2  Numerical Results --- p.14
            3.2.1  Linear Programming Problems --- p.15
            3.2.2  Convex Quadratic Programming Problems with Linear Inequality Constraints --- p.17
            3.2.3  Convex Quadratic Programming Problems with Convex Quadratic Inequality Constraints --- p.18
            3.2.4  Summary of Numerical Results --- p.21
        3.3  Chapter Summary --- p.22
    Chapter 4  Value Estimation Approach to Iri-Imai Method for Constrained Optimization --- p.23
        4.1  Value Estimation Function Method --- p.24
            4.1.1  Formulation and Properties --- p.24
            4.1.2  Value Estimation Approach to Iri-Imai Method --- p.33
        4.2  A New Smooth Multiplicative Barrier Function Φθ+,u --- p.35
            4.2.1  Formulation and Properties --- p.35
            4.2.2  Value Estimation Approach to Iri-Imai Method by Using Φθ+,u --- p.41
        4.3  Convergence Analysis --- p.43
        4.4  Numerical Results --- p.46
            4.4.1  Numerical Results Based on Algorithm 4.1 --- p.46
            4.4.2  Numerical Results Based on Algorithm 4.2 --- p.50
            4.4.3  Summary of Numerical Results --- p.59
        4.5  Chapter Summary --- p.60
    Chapter 5  Extension of Value Estimation Approach to Iri-Imai Method for More General Constrained Optimization --- p.61
        5.1  Extension of Iri-Imai Algorithm 3.1 for More General Constrained Optimization --- p.62
            5.1.1  Formulation and Properties --- p.62
            5.1.2  Extension of Iri-Imai Algorithm 3.1 --- p.63
        5.2  Extension of Value Estimation Approach to Iri-Imai Algorithm 4.1 for More General Constrained Optimization --- p.64
            5.2.1  Formulation and Properties --- p.64
            5.2.2  Value Estimation Approach to Iri-Imai Method --- p.67
        5.3  Extension of Value Estimation Approach to Iri-Imai Algorithm 4.2 for More General Constrained Optimization --- p.69
            5.3.1  Formulation and Properties --- p.69
            5.3.2  Value Estimation Approach to Iri-Imai Method --- p.71
        5.4  Numerical Results --- p.72
            5.4.1  Numerical Results Based on Algorithm 5.1 --- p.73
            5.4.2  Numerical Results Based on Algorithm 5.2 --- p.76
            5.4.3  Numerical Results Based on Algorithm 5.3 --- p.78
            5.4.4  Summary of Numerical Results --- p.86
        5.5  Chapter Summary --- p.87
    Chapter 6  Conclusion --- p.88
    Bibliography --- p.93
    Appendix A  Search Directions --- p.96
        A.1  Newton's Method --- p.97
            A.1.1  Golden Section Method --- p.99
        A.2  Gradients and Hessian Matrices --- p.100
            A.2.1  Gradient of Φθ(x) --- p.100
            A.2.2  Hessian Matrix of Φθ(x) --- p.101
            A.2.3  Gradient of φθ(x) --- p.101
            A.2.4  Hessian Matrix of φθ(x) --- p.102
            A.2.5  Gradient and Hessian Matrix of Φθ(x) in Terms of ∇xφθ(x) and ∇²xxφθ(x) --- p.102
            A.2.6  Gradient of φθ+,u(x) --- p.102
            A.2.7  Hessian Matrix of φθ+,u(x) --- p.103
            A.2.8  Gradient and Hessian Matrix of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇²xxφθ+,u(x) --- p.103
        A.3  Newton's Directions --- p.103
            A.3.1  Newton Direction of Φθ(x) in Terms of ∇xφθ(x) and ∇²xxφθ(x) --- p.104
            A.3.2  Newton Direction of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇²xxφθ+,u(x) --- p.104
        A.4  Feasible Descent Directions for the Minimization Problems (Pθ) and (Pθ+) --- p.105
            A.4.1  Feasible Descent Direction for the Minimization Problem (Pθ) --- p.105
            A.4.2  Feasible Descent Direction for the Minimization Problem (Pθ+) --- p.107
    Appendix B  Randomly Generated Test Problems for Positive Definite Quadratic Programming --- p.109
        B.1  Convex Quadratic Programming Problems with Linear Constraints --- p.110
            B.1.1  General Description of Test Problems --- p.110
            B.1.2  The Objective Function --- p.112
            B.1.3  The Linear Constraints --- p.113
        B.2  Convex Quadratic Programming Problems with Quadratic Inequality Constraints --- p.116
            B.2.1  The Quadratic Constraints --- p.11
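    Appendix A of the thesis pairs Newton's method with a golden-section line search (A.1.1). For orientation only, a generic golden-section step-length routine looks like the sketch below; the interval [0, 1], the unimodality assumption on the one-dimensional restriction, and all names are assumptions of this illustration rather than details taken from the thesis.

        from math import sqrt

        def golden_section(phi, a=0.0, b=1.0, tol=1e-6):
            # shrink [a, b] around a minimizer of a unimodal phi using the golden ratio
            r = (sqrt(5.0) - 1.0) / 2.0            # ~0.618
            c, d = b - r * (b - a), a + r * (b - a)
            while b - a > tol:
                if phi(c) < phi(d):
                    b, d = d, c                    # minimizer lies in [a, d]
                    c = b - r * (b - a)
                else:
                    a, c = c, d                    # minimizer lies in [c, b]
                    d = a + r * (b - a)
            return 0.5 * (a + b)

        # usage along a descent direction p from the current iterate x:
        # t = golden_section(lambda t: potential(x + t * p))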