790 research outputs found

    Convergence property of the Iri-Imai algorithm for some smooth convex programming problems

    In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The globally linear convergence rate of this extended algorithm is proved under the condition that the objective and constraint functions satisfy a certain type of convexity, called harmonic convexity in this paper. A characterization of this convexity condition is given. The same convexity condition was used by Mehrotra and Sun to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming. Other known convergent interior-point algorithms for smooth convex programming are mainly based on the path-following approach.
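    The multiplicative barrier at the heart of the Iri-Imai method can be sketched for a linear program. The following is a minimal illustration, not the paper's algorithm: it assumes the optimal value v* is known and takes damped gradient steps on log Φ(x), whereas Iri and Imai use Newton's method; the toy problem and all names are our own.

```python
import numpy as np

# Hedged sketch of the multiplicative barrier for the LP  min c^T x, Ax <= b,
# assuming the optimal value v_star is known (the Iri-Imai setting):
#     Phi(x) = (c^T x - v_star)^(m+1) / prod_i (b_i - a_i^T x).
# We take damped gradient steps on log Phi; the real method uses Newton steps.

def log_phi(x, c, A, b, v_star):
    gap = c @ x - v_star
    slack = b - A @ x
    if gap <= 0 or np.any(slack <= 0):
        return np.inf                      # outside the domain of the barrier
    m = len(b)
    return (m + 1) * np.log(gap) - np.sum(np.log(slack))

def grad_log_phi(x, c, A, b, v_star):
    gap = c @ x - v_star
    slack = b - A @ x
    return (len(b) + 1) * c / gap + A.T @ (1.0 / slack)

def iri_imai_descent(c, A, b, x, v_star, iters=200):
    for _ in range(iters):
        g = grad_log_phi(x, c, A, b, v_star)
        f0, t = log_phi(x, c, A, b, v_star), 1.0
        # backtracking line search keeps the iterate strictly feasible
        while log_phi(x - t * g, c, A, b, v_star) > f0 - 0.5 * t * (g @ g):
            t *= 0.5
            if t < 1e-16:
                return x
        x = x - t * g
    return x

# Toy problem: min x1 + x2 subject to x >= 0, with optimal value 0 at the origin.
c = np.array([1.0, 1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0]])   # -x <= 0  is  x >= 0
b = np.array([0.0, 0.0])
x = iri_imai_descent(c, A, b, np.array([1.0, 1.0]), v_star=0.0)
# the objective gap c @ x - v_star shrinks toward 0
```

    Driving Φ to its infimum drives the objective gap to zero while the denominator keeps the iterate strictly feasible, which is the mechanism the convergence proof quantifies.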

    Speeding up Simplification of Polygonal Curves using Nested Approximations

    We develop a multiresolution approach to the problem of polygonal curve approximation. We show theoretically and experimentally that, if the simplification algorithm A used between any two successive levels of resolution satisfies some conditions, the multiresolution algorithm MR will have a complexity lower than the complexity of A. In particular, we show that if A has O(N^2/K) complexity (the complexity of a reduced-search dynamic programming approach), where N and K are respectively the initial and the final number of segments, the complexity of MR is in O(N). We experimentally compare the outcomes of MR with those of the optimal "full search" dynamic programming solution and of classical merge and split approaches. The experimental evaluations confirm the theoretical derivations and show that the proposed approach, evaluated on 2D coastal maps, either shows a lower complexity or provides polygonal approximations closer to the initial curves.
    Comment: 12 pages + figure
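    To make the nested-approximation idea concrete, here is a hedged sketch in which the classical Douglas-Peucker simplifier stands in for the base algorithm A: each level simplifies a half-resolution copy of the curve, lifts the retained vertices back, and refines within each coarse segment. This illustrates the multiresolution structure only; it is not the paper's MR algorithm or its reduced-search dynamic programming step, and all names are ours.

```python
import numpy as np

def seg_dist(p, a, b):
    # distance from point p to segment [a, b]
    ab = b - a
    denom = float(ab @ ab)
    if denom == 0.0:
        return float(np.linalg.norm(p - a))
    t = np.clip(float((p - a) @ ab) / denom, 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def douglas_peucker(pts, eps):
    # classical simplifier; returns sorted indices of the kept vertices
    keep, stack = {0, len(pts) - 1}, [(0, len(pts) - 1)]
    while stack:
        i, j = stack.pop()
        if j - i < 2:
            continue
        d = [seg_dist(pts[k], pts[i], pts[j]) for k in range(i + 1, j)]
        k = int(np.argmax(d)) + i + 1
        if d[k - i - 1] > eps:
            keep.add(k)
            stack += [(i, k), (k, j)]
    return sorted(keep)

def multires_simplify(pts, eps, min_len=16):
    # nested approximation: simplify a half-resolution copy, lift the
    # breakpoints, then refine inside each coarse segment
    n = len(pts)
    if n <= min_len:
        return douglas_peucker(pts, eps)
    idx = [2 * k for k in multires_simplify(pts[::2], eps, min_len)]
    if idx[-1] != n - 1:
        idx.append(n - 1)
    out = [idx[0]]
    for a, b in zip(idx, idx[1:]):
        sub = douglas_peucker(pts[a:b + 1], eps)
        out.extend(a + k for k in sub[1:])
    return out

# a flat polyline with one spike: both routes recover the spike
pts = np.array([[float(i), 0.0] for i in range(129)])
pts[37, 1] = 5.0
```

    The refinement pass matters: a narrow feature dropped during decimation (here the spike sits at an odd index) is recovered when the full-resolution sub-chain is re-simplified.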

    Matching, interpolation, and approximation: a survey

    In this survey we consider geometric techniques which have been used to measure the similarity or distance between shapes, as well as to approximate shapes, or interpolate between shapes. Shape is a modality which plays a key role in many disciplines, ranging from computer vision to molecular biology. We focus on algorithmic techniques based on computational geometry that have been developed for shape matching, simplification, and morphing.
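    A staple similarity measure in this literature is the Hausdorff distance. A minimal brute-force sketch for finite point sets (O(|A||B|) pairwise distances; the survey's algorithms are far more refined):

```python
import numpy as np

def directed_hausdorff(A, B):
    # sup over a in A of the distance from a to the nearest point of B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    # symmetric Hausdorff distance between finite point sets
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shifted = square + [0.5, 0.0]
h = hausdorff(square, shifted)   # 0.5: every corner is half a unit from its match
```

    The directed distance is asymmetric, which is why the symmetric version takes the maximum of both directions.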

    A value estimation approach to Iri-Imai's method for constrained convex optimization.

    Lam Sze Wan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 93-95). Abstracts in English and Chinese.

    Contents:
    Chapter 1 Introduction --- p.1
    Chapter 2 Background --- p.4
    Chapter 3 Review of Iri-Imai Algorithm for Convex Programming Problems --- p.10
        3.1 Iri-Imai Algorithm for Convex Programming --- p.11
        3.2 Numerical Results --- p.14
            3.2.1 Linear Programming Problems --- p.15
            3.2.2 Convex Quadratic Programming Problems with Linear Inequality Constraints --- p.17
            3.2.3 Convex Quadratic Programming Problems with Convex Quadratic Inequality Constraints --- p.18
            3.2.4 Summary of Numerical Results --- p.21
        3.3 Chapter Summary --- p.22
    Chapter 4 Value Estimation Approach to Iri-Imai Method for Constrained Optimization --- p.23
        4.1 Value Estimation Function Method --- p.24
            4.1.1 Formulation and Properties --- p.24
            4.1.2 Value Estimation Approach to Iri-Imai Method --- p.33
        4.2 A New Smooth Multiplicative Barrier Function Φθ+,u --- p.35
            4.2.1 Formulation and Properties --- p.35
            4.2.2 Value Estimation Approach to Iri-Imai Method by Using Φθ+,u --- p.41
        4.3 Convergence Analysis --- p.43
        4.4 Numerical Results --- p.46
            4.4.1 Numerical Results Based on Algorithm 4.1 --- p.46
            4.4.2 Numerical Results Based on Algorithm 4.2 --- p.50
            4.4.3 Summary of Numerical Results --- p.59
        4.5 Chapter Summary --- p.60
    Chapter 5 Extension of Value Estimation Approach to Iri-Imai Method for More General Constrained Optimization --- p.61
        5.1 Extension of Iri-Imai Algorithm 3.1 for More General Constrained Optimization --- p.62
            5.1.1 Formulation and Properties --- p.62
            5.1.2 Extension of Iri-Imai Algorithm 3.1 --- p.63
        5.2 Extension of Value Estimation Approach to Iri-Imai Algorithm 4.1 for More General Constrained Optimization --- p.64
            5.2.1 Formulation and Properties --- p.64
            5.2.2 Value Estimation Approach to Iri-Imai Method --- p.67
        5.3 Extension of Value Estimation Approach to Iri-Imai Algorithm 4.2 for More General Constrained Optimization --- p.69
            5.3.1 Formulation and Properties --- p.69
            5.3.2 Value Estimation Approach to Iri-Imai Method --- p.71
        5.4 Numerical Results --- p.72
            5.4.1 Numerical Results Based on Algorithm 5.1 --- p.73
            5.4.2 Numerical Results Based on Algorithm 5.2 --- p.76
            5.4.3 Numerical Results Based on Algorithm 5.3 --- p.78
            5.4.4 Summary of Numerical Results --- p.86
        5.5 Chapter Summary --- p.87
    Chapter 6 Conclusion --- p.88
    Bibliography --- p.93
    Appendix A Search Directions --- p.96
        A.1 Newton's Method --- p.97
            A.1.1 Golden Section Method --- p.99
        A.2 Gradients and Hessian Matrices --- p.100
            A.2.1 Gradient of Φθ(x) --- p.100
            A.2.2 Hessian Matrix of Φθ(x) --- p.101
            A.2.3 Gradient of φθ(x) --- p.101
            A.2.4 Hessian Matrix of φθ(x) --- p.102
            A.2.5 Gradient and Hessian Matrix of Φθ(x) in Terms of ∇xφθ(x) and ∇²xxφθ(x) --- p.102
            A.2.6 Gradient of φθ+,u(x) --- p.102
            A.2.7 Hessian Matrix of φθ+,u(x) --- p.103
            A.2.8 Gradient and Hessian Matrix of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇²xxφθ+,u(x) --- p.103
        A.3 Newton's Directions --- p.103
            A.3.1 Newton Direction of Φθ(x) in Terms of ∇xφθ(x) and ∇²xxφθ(x) --- p.104
            A.3.2 Newton Direction of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇²xxφθ+,u(x) --- p.104
        A.4 Feasible Descent Directions for the Minimization Problems (Pθ) and (Pθ+) --- p.105
            A.4.1 Feasible Descent Direction for the Minimization Problem (Pθ) --- p.105
            A.4.2 Feasible Descent Direction for the Minimization Problem (Pθ+) --- p.107
    Appendix B Randomly Generated Test Problems for Positive Definite Quadratic Programming --- p.109
        B.1 Convex Quadratic Programming Problems with Linear Constraints --- p.110
            B.1.1 General Description of Test Problems --- p.110
            B.1.2 The Objective Function --- p.112
            B.1.3 The Linear Constraints --- p.113
        B.2 Convex Quadratic Programming Problems with Quadratic Inequality Constraints --- p.116
            B.2.1 The Quadratic Constraints --- p.11
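    Appendix A.1.1 of the thesis lists the Golden Section Method used as a line search inside Newton's method. A generic textbook version (not the thesis code) looks like:

```python
import math

def golden_section(f, a, b, tol=1e-8):
    # minimize a unimodal f on [a, b]; the bracket shrinks by ~0.618 per step
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

step = golden_section(lambda t: (t - 1.3) ** 2, 0.0, 4.0)   # ~1.3
```

    The golden-ratio placement lets each iteration reuse one previous function evaluation, so only one new evaluation is needed per shrink, which is why it suits expensive barrier-function line searches.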

    Sharp feature identification in a polygon

    This thesis presents an efficient algorithm for recognizing and extracting sharp features from polygonal shapes. As used here, a sharp feature is a distinct portion of a polygon that is long and skinny. The algorithm executes in O(n^2) time, where n is the number of vertices in the polygon. Experimental results from a Java implementation of the algorithm are also presented.
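    The abstract does not spell out the algorithm, so the following is a hypothetical O(n²) illustration of one way to flag "long and skinny" portions: a chain of consecutive vertices is a candidate sharp feature when its arc length greatly exceeds the short chord closing it off. The thresholds, names, and criterion are all invented for this sketch and are not taken from the thesis.

```python
import math

def sharp_features(poly, min_elong=4.0, max_base=1.0):
    # Flag vertex chains i..j that are "long and skinny": the perimeter of the
    # chain greatly exceeds the short chord (base) that closes it off.
    # Brute force over O(n^2) vertex pairs; wraparound chains are ignored.
    n = len(poly)
    arc = [0.0]                          # cumulative edge lengths around poly
    for k in range(n):
        x0, y0 = poly[k]
        x1, y1 = poly[(k + 1) % n]
        arc.append(arc[-1] + math.hypot(x1 - x0, y1 - y0))
    feats = []
    for i in range(n):
        for j in range(i + 2, n):
            base = math.dist(poly[i], poly[j])
            chain = arc[j] - arc[i]
            if base <= max_base and chain >= min_elong * max(base, 1e-12):
                feats.append((i, j))
    return feats

# a box with a tall, narrow spike on its top edge
box_with_spike = [(0, 0), (4, 0), (4, 4), (2.2, 4),
                  (2.2, 9), (1.8, 9), (1.8, 4), (0, 4)]
```

    On this example only the spike's chain (vertices 3 through 6) passes the elongation test; the box edges all close with long chords.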

    Learning from Experience, Simply

    There is substantial academic interest in modeling consumer experiential learning. However, (approximately) optimal solutions to forward-looking experiential learning problems are complex, limiting their behavioral plausibility and empirical feasibility. We propose that consumers use cognitively simple heuristic strategies. We explore one viable heuristic, index strategies, and demonstrate that they are intuitive, tractable, and plausible. Index strategies are much simpler for consumers to use but provide close-to-optimal utility. They also avoid exponential growth in computational complexity, enabling researchers to study learning models in more complex situations. Well-defined index strategies depend on a structural property called indexability. We prove the indexability of a canonical forward-looking experiential learning model in which consumers learn brand quality while facing random utility shocks. Following an index strategy, consumers develop an index for each brand separately and choose the brand with the highest index. Using synthetic data, we demonstrate that an index strategy achieves nearly optimal utility at substantially lower computational costs. Using IRI data for diapers, we find that an index strategy performs as well as an approximately optimal solution and better than myopic learning. We extend the analysis to incorporate risk aversion, other cognitively simple heuristics, heterogeneous foresight, and an alternative specification of brands.
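    The decision rule itself is simple to state in code: keep per-brand statistics, compute each brand's index from its own statistics alone, and choose the brand with the highest index. The sketch below uses a UCB-style "mean plus exploration bonus" index as a stand-in for the paper's Gittins-type index; the model, parameters, and names are illustrative only.

```python
import math
import random

def brand_index(mean, n, t, c=1.0):
    # index computed from this brand's own statistics alone
    return mean + c * math.sqrt(2.0 * math.log(t + 1) / n)

def simulate(qualities, T=2000, noise=1.0, seed=0):
    rng = random.Random(seed)
    k = len(qualities)
    counts, means, picks = [0] * k, [0.0] * k, [0] * k
    for t in range(T):
        if t < k:
            b = t                                 # try each brand once
        else:
            b = max(range(k), key=lambda i: brand_index(means[i], counts[i], t))
        u = qualities[b] + rng.gauss(0.0, noise)  # experienced utility shock
        counts[b] += 1
        means[b] += (u - means[b]) / counts[b]    # running-mean belief update
        picks[b] += 1
    return picks

picks = simulate([1.0, 0.3])   # brand 0 has the higher true quality
```

    Because each index depends only on that brand's own history, the computation scales linearly in the number of brands, which is the tractability advantage the abstract emphasizes over solving the joint dynamic program.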