349 research outputs found

    Solving Systems of Non-Linear Equations by Broyden's Method with Projected Updates

    We introduce a modification of Broyden's method for finding a zero of n nonlinear equations in n unknowns when analytic derivatives are not available. The method retains the local Q-superlinear convergence of Broyden's method and has the additional property that if any or all of the equations are linear, it locates a zero of these equations in n+1 or fewer iterations. Limited computational experience suggests that our modification often improves upon Broyden's method.
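
    For orientation, a minimal Python (NumPy) sketch of the baseline method being modified is given below; the projected-update variant from the paper is not reproduced, and the callable F is an assumption of the sketch.

    ```python
    import numpy as np

    def broyden(F, x0, tol=1e-10, max_iter=100):
        """Classic Broyden ('good') method for F(x) = 0, maintaining a
        secant approximation B to the Jacobian.  This is only the
        baseline the paper modifies; the projected-update variant
        itself is not reproduced here.
        """
        x = np.asarray(x0, dtype=float)
        B = np.eye(len(x))                   # initial Jacobian approximation
        Fx = F(x)
        for _ in range(max_iter):
            if np.linalg.norm(Fx) < tol:
                break
            s = np.linalg.solve(B, -Fx)      # quasi-Newton step: B s = -F(x)
            x = x + s
            Fx_new = F(x)
            y = Fx_new - Fx
            B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
            Fx = Fx_new
        return x
    ```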

    A Modified Levenberg-Marquardt Method for the Bidirectional Relay Channel

    This paper presents an optimization approach for a system consisting of multiple bidirectional links over a two-way amplify-and-forward relay. The goal is to improve the fairness of the system. All user pairs exchange information over one relay station with multiple antennas. Due to the joint transmission to all users, the users are subject to mutual interference. The interference can be mitigated by max-min fair precoding optimization in which the relay is subject to a sum power constraint. The resulting optimization problem is non-convex. This paper proposes a novel, low-complexity iterative approach based on a modified Levenberg-Marquardt method to find near-optimal solutions. The presented method finds solutions close to those of the standard convex-solver based relaxation approach.
    Comment: submitted to IEEE Transactions on Vehicular Technology. We corrected small mistakes in the proof of Lemma 2 and Proposition
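
    For readers unfamiliar with the underlying solver, here is a textbook Levenberg-Marquardt iteration for nonlinear least squares, sketched in Python; the paper's modified, precoding-specific variant is not reproduced, and the callables r and J are assumptions of the sketch.

    ```python
    import numpy as np

    def levenberg_marquardt(r, J, x0, lam=1e-3, tol=1e-10, max_iter=200):
        """Textbook Levenberg-Marquardt for min ||r(x)||^2.  Illustrative
        only: the paper's modified method for max-min fair precoding is
        not reproduced; r and J are assumed callables returning the
        residual vector and its Jacobian.
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            rx, Jx = r(x), J(x)
            g = Jx.T @ rx                         # gradient of 0.5 * ||r||^2
            if np.linalg.norm(g) < tol:
                break
            A = Jx.T @ Jx + lam * np.eye(len(x))  # damped normal equations
            step = np.linalg.solve(A, -g)
            if np.linalg.norm(r(x + step)) < np.linalg.norm(rx):
                x, lam = x + step, lam * 0.5      # success: reduce damping
            else:
                lam *= 2.0                        # failure: increase damping
        return x
    ```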

    A geometric Newton method for Oja's vector field

    Newton's method for solving the matrix equation $F(X) \equiv AX - XX^TAX = 0$ runs up against the fact that its zeros are not isolated. This is due to a symmetry of $F$ under the action of the orthogonal group. We show how differential-geometric techniques can be exploited to remove this symmetry and obtain a "geometric" Newton algorithm that finds the zeros of $F$. The geometric Newton method does not suffer from the degeneracy issue that stands in the way of the original Newton method.
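
    The symmetry in question is easy to check numerically: if F(X) = 0, then F(XQ) = F(X)Q = 0 for every orthogonal Q, so zeros form continuous families. A small NumPy illustration, assuming a symmetric A as in the usual Oja setting:

    ```python
    import numpy as np

    def oja_field(A, X):
        """Oja's vector field F(X) = A X - X X^T A X."""
        return A @ X - X @ (X.T @ A @ X)

    # If F(X) = 0, then F(X Q) = F(X) Q = 0 for any orthogonal Q,
    # so the zeros of F come in continuous families and are not isolated.
    rng = np.random.default_rng(0)
    n, p = 5, 2
    M = rng.standard_normal((n, n))
    A = M + M.T                                       # symmetric A (assumed)
    _, V = np.linalg.eigh(A)
    X = V[:, :p]                                      # eigenvector basis: a zero of F
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))  # random orthogonal Q
    print(np.linalg.norm(oja_field(A, X)))            # ~ 0
    print(np.linalg.norm(oja_field(A, X @ Q)))        # ~ 0 as well
    ```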

    The Lack of Positive Definiteness in the Hessian in Constrained Optimization

    The use of the DFP or the BFGS secant updates requires the Hessian at the solution to be positive definite. The second-order sufficiency conditions ensure positive definiteness only on a subspace of R^n. Conditions are given under which either update can be applied safely. A new class of algorithms is proposed which generates a sequence {x_k} converging 2-step q-superlinearly. We also propose two specific algorithms: the first converges q-superlinearly if the Hessian is positive definite on R^n and converges 2-step q-superlinearly if the Hessian is positive definite only on a subspace; the second generates a sequence converging 1-step q-superlinearly. While the former costs one extra gradient evaluation, the latter costs one extra gradient evaluation and one extra function evaluation on the constraints.
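
    For context, the standard BFGS update is sketched below (a generic sketch, not the paper's algorithm); it preserves positive definiteness of the approximation only under the curvature condition s^T y > 0, which is exactly what can fail when the true Hessian is positive definite only on a subspace:

    ```python
    import numpy as np

    def bfgs_update(B, s, y):
        """Standard BFGS update of a Hessian approximation B.  Positive
        definiteness of B is preserved only if the curvature condition
        s^T y > 0 holds: the condition motivating 'safe updating' rules
        when the true Hessian is positive definite only on a subspace.
        """
        sy = s @ y
        if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            return B                  # skip the update when it is unsafe
        Bs = B @ s
        return (B
                - np.outer(Bs, Bs) / (s @ Bs)   # remove old curvature along s
                + np.outer(y, y) / sy)          # insert measured curvature
    ```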

    The convergence of quasi-Gauss-Newton methods for nonlinear problems

    Quasi-Gauss-Newton methods for nonlinear equations are investigated, and a quasi-Gauss-Newton method is proposed. In this method, the Jacobian is modified by a convex combination of Broyden's update and a weighted update. The convergence of the method described by Wang and Tewarson in [1] and of the proposed method is proved. Computational evidence is given in support of the relative efficiency of the proposed method.
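
    The structure of such an update can be sketched generically in Python; the particular weighted update of Wang and Tewarson [1] is not reproduced here, so the weight matrix W and the mixing parameter theta are illustrative assumptions:

    ```python
    import numpy as np

    def combined_secant_update(B, s, y, theta=0.5, W=None):
        """Illustrative Jacobian update with the structure described in
        the abstract: a convex combination of Broyden's update and a
        weighted secant update.  The particular weighting of Wang and
        Tewarson [1] is not reproduced; the weight matrix W and mixing
        parameter theta in [0, 1] are assumptions of this sketch.
        """
        resid = y - B @ s                   # secant residual
        broyden = B + np.outer(resid, s) / (s @ s)
        w = s if W is None else W @ s       # weighted projection direction
        weighted = B + np.outer(resid, w) / (w @ s)
        # Both terms satisfy the secant equation B_new s = y, and so does
        # any convex combination of them.
        return theta * broyden + (1.0 - theta) * weighted
    ```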

    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

    Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

    Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
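
    For reference, the exact ℓ1-merit function being minimized has the standard form below (equality constraints are assumed here for brevity):

    ```latex
    % Exact l1-merit function for  min f(x)  s.t.  c(x) = 0,
    % with penalty parameter sigma > 0 (equality constraints are
    % assumed here for brevity):
    \phi(x;\sigma) \;=\; f(x) + \sigma\,\lVert c(x)\rVert_1
                   \;=\; f(x) + \sigma \sum_i \lvert c_i(x)\rvert .
    ```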

    On the Local and Global Convergence of a Reduced Quasi-Newton Method

    For optimization in R^n with m nonlinear equality constraints, we study the local convergence of reduced quasi-Newton methods, in which the updated matrix is of order n-m. In particular, we give necessary and sufficient conditions for (one-step) q-superlinear convergence. We introduce a device to globalize the local algorithm, which consists in determining a step along an arc so as to decrease an exact penalty function. We give conditions under which the step is asymptotically equal to one.
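
    For orientation, the order-(n-m) matrix being updated typically approximates the reduced Hessian of the Lagrangian on the null space of the constraint Jacobian; a standard formulation is sketched below, though the paper's precise construction may differ:

    ```latex
    % With A(x) the m-by-n Jacobian of the constraints and Z(x) an
    % n-by-(n-m) matrix whose columns span the null space of A(x)
    % (so A(x) Z(x) = 0), the updated matrix B_k of order n-m
    % typically approximates the reduced Hessian of the Lagrangian:
    B_k \;\approx\; Z(x_k)^{T}\,\nabla^2_{xx} L(x_k,\lambda_k)\,Z(x_k)
    \;\in\; \mathbb{R}^{(n-m)\times(n-m)}.
    ```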

    Local convergence of the Levenberg-Marquardt method under Hölder metric subregularity

    We describe and analyse Levenberg-Marquardt methods for solving systems of nonlinear equations. More specifically, we propose an adaptive formula for the Levenberg-Marquardt parameter and analyse the local convergence of the method under Hölder metric subregularity of the function defining the equation and Hölder continuity of its gradient mapping. Further, we analyse the local convergence of the method under the additional assumption that the Łojasiewicz gradient inequality holds. We finally report encouraging numerical results confirming the theoretical findings for the problem of computing moiety-conserved steady states in biochemical reaction networks. This problem can be cast as finding a solution of a system of nonlinear equations, where the associated mapping satisfies the Łojasiewicz gradient inequality assumption.
    Comment: 30 pages, 10 figures
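
    A common way to make the Levenberg-Marquardt parameter adaptive in local-convergence analyses is to tie it to the residual norm. The sketch below uses lambda_k = mu * ||F(x_k)||^eta, which is an assumption and not necessarily the paper's formula:

    ```python
    import numpy as np

    def lm_local(F, J, x0, mu=1.0, eta=1.0, tol=1e-12, max_iter=100):
        """Local Levenberg-Marquardt iteration with a residual-based
        damping parameter lambda_k = mu * ||F(x_k)||**eta, a common
        adaptive choice in local-convergence analyses.  Whether this
        matches the paper's adaptive formula is an assumption; F and J
        are callables returning the residual vector and its Jacobian.
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            res = np.linalg.norm(Fx)
            if res < tol:
                break
            Jx = J(x)
            lam = mu * res**eta                       # adaptive damping
            A = Jx.T @ Jx + lam * np.eye(len(x))      # damped normal equations
            x = x + np.linalg.solve(A, -(Jx.T @ Fx))  # LM step
        return x
    ```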