
    A Semismooth Newton Method for Tensor Eigenvalue Complementarity Problem

    In this paper, we consider the tensor eigenvalue complementarity problem, which is closely related to the optimality conditions for polynomial optimization as well as a class of differential inclusions with nonconvex processes. By introducing an NCP-function, we reformulate the tensor eigenvalue complementarity problem as a system of nonlinear equations. We show that this function is strongly semismooth but not differentiable, in which case the classical smoothing methods cannot be applied. Furthermore, we propose a damped semismooth Newton method for the tensor eigenvalue complementarity problem. A new procedure to evaluate an element of the generalized Jacobian is given, which turns out to be an element of the B-subdifferential under mild assumptions. As a result, the convergence of the damped semismooth Newton method is guaranteed by existing results. Numerical experiments also show that our method is efficient and promising.
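    As a rough illustration of the approach described above, the following is a minimal sketch in which the Fischer-Burmeister function plays the role of the NCP-function and a small linear complementarity problem (the matrix M and vector q are illustrative assumptions, not the tensor problem from the paper) stands in for the reformulated system; damping is plain Armijo backtracking on the merit function 0.5*||Phi(x)||^2.

```python
# Damped semismooth Newton sketch for a complementarity problem
# reformulated with the Fischer-Burmeister NCP-function.
import numpy as np

M = np.array([[4.0, 1.0], [1.0, 3.0]])  # illustrative data, not from the paper
q = np.array([-1.0, -2.0])

def fb(a, b):
    # Fischer-Burmeister NCP-function: phi(a, b) = sqrt(a^2 + b^2) - a - b,
    # which is zero iff a >= 0, b >= 0 and a*b = 0.
    return np.sqrt(a**2 + b**2) - a - b

def F(x):
    # Toy affine map defining the system x >= 0, F(x) >= 0, <x, F(x)> = 0.
    return M @ x + q

def Phi(x):
    # Equation reformulation: Phi(x) = 0 iff x solves the complementarity problem.
    return fb(x, F(x))

def b_subdifferential_element(x):
    # One element of the B-subdifferential of Phi at x. Where Phi is
    # differentiable this is the ordinary Jacobian; at kinks with
    # (x_i, F_i(x)) = (0, 0) a particular subgradient is selected.
    Fx = F(x)
    Da = np.empty(x.size)
    Db = np.empty(x.size)
    for i in range(x.size):
        r = np.hypot(x[i], Fx[i])
        if r > 1e-12:
            Da[i] = x[i] / r - 1.0
            Db[i] = Fx[i] / r - 1.0
        else:
            Da[i], Db[i] = 0.0, -1.0  # subgradient choice at the kink
    return np.diag(Da) + np.diag(Db) @ M  # M is the Jacobian of F here

def damped_semismooth_newton(x0, sigma=1e-4, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = Phi(x)
        merit = 0.5 * r @ r
        if np.sqrt(2.0 * merit) < tol:
            break
        d = np.linalg.solve(b_subdifferential_element(x), -r)
        t = 1.0  # Armijo backtracking on the merit function
        while t > 1e-10:
            r_new = Phi(x + t * d)
            if 0.5 * r_new @ r_new <= (1.0 - 2.0 * sigma * t) * merit:
                break
            t *= 0.5
        x = x + t * d
    return x

print(damped_semismooth_newton([0.0, 0.0]))  # approx (1/11, 7/11)
```

    Since the Fischer-Burmeister reformulation is strongly semismooth and the routine above returns an element of the B-subdifferential, a toy problem of this form falls under the existing convergence results the abstract refers to.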

    Generalized Newton's Method based on Graphical Derivatives

    This paper concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness assumption, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and B-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.
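    The paper's algorithm works with graphical derivatives in full generality; the sketch below only illustrates the shape of the Newton-type subproblem on a toy piecewise-smooth system (an assumption, not an example from the paper), where the graphical derivative at a point reduces to the directional-derivative map h -> F'(z; h) and each step asks for h with -F(z) in DF(z)(h).

```python
# A minimal sketch of a Newton-type step driven by directional derivatives.
import numpy as np

def F(z):
    # Toy piecewise-linear system with solution (1/3, 1/3).
    x, y = z
    return np.array([2.0 * x + abs(y) - 1.0,
                     2.0 * y + abs(x) - 1.0])

def derivative_of_active_piece(z):
    # Matrix of the derivative of the smooth selection of F active at z.
    # At the kinks x = 0 or y = 0 one selection is picked arbitrarily;
    # a full graphical-derivative treatment would examine the whole
    # derivative set instead of a single element.
    x, y = z
    sx = 1.0 if x >= 0.0 else -1.0
    sy = 1.0 if y >= 0.0 else -1.0
    return np.array([[2.0, sy],
                     [sx, 2.0]])

def newton_graphical(z0, tol=1e-12, max_iter=25):
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        r = F(z)
        if np.linalg.norm(r) < tol:
            break
        # Newton subproblem: find h with -F(z) in DF(z)(h). On the active
        # piece DF(z) is the linear map h -> A h, so this is a linear solve.
        h = np.linalg.solve(derivative_of_active_piece(z), -r)
        z = z + h
    return z

print(newton_graphical([0.9, -0.2]))  # converges to (1/3, 1/3)
```

    Because this toy F is piecewise linear, each subproblem is a single linear solve on the active piece and the iteration terminates finitely once the right piece is identified.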

    Standard error estimation for EM applications related to Latent class models

    The EM algorithm is a popular method for computing maximum likelihood estimates. It tends to be numerically stable, reduces execution time compared to other estimation procedures, and is easy to implement in latent class models. However, the EM algorithm fails to provide a consistent estimator of the standard errors of maximum likelihood estimates in incomplete-data applications. Correct standard errors can be obtained by numerical differentiation. The technique requires computation of a complete-data gradient vector and Hessian matrix, but not those associated with the incomplete-data likelihood. Obtaining first and second derivatives numerically is computationally very intensive, and execution time may become prohibitive when fitting latent class models with a Newton-type algorithm. When the execution time is too high, one is motivated to use the EM solution to initialize the Newton-Raphson algorithm. We also investigate the effect on execution time when a final Newton-Raphson step follows the EM algorithm after convergence. In this paper we compare the standard errors provided by the EM and Newton-Raphson algorithms for two models and analyze how the bias of the EM-based standard errors is affected by the number of parameters in the fitted model.
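    For concreteness, here is a minimal sketch of the numerical-differentiation route to standard errors. Note the simplification: the paper's technique differentiates complete-data quantities, whereas this sketch directly finite-differences an observed-data log-likelihood (a toy Bernoulli model, an illustrative assumption, not a latent class model); loglik and numerical_hessian are hypothetical helpers.

```python
# Standard errors via a finite-difference Hessian of the log-likelihood,
# evaluated at the maximum likelihood estimate (in practice, the EM solution).
import numpy as np

def loglik(theta, data):
    # Toy Bernoulli log-likelihood with success probability theta[0].
    p = theta[0]
    k, n = data
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

def numerical_hessian(f, theta, h=1e-5):
    # Central-difference approximation of the Hessian of f at theta.
    theta = np.asarray(theta, dtype=float)
    d = len(theta)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            tpp = theta.copy(); tpp[i] += h; tpp[j] += h
            tpm = theta.copy(); tpm[i] += h; tpm[j] -= h
            tmp = theta.copy(); tmp[i] -= h; tmp[j] += h
            tmm = theta.copy(); tmm[i] -= h; tmm[j] -= h
            H[i, j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4.0 * h * h)
    return H

data = (37, 100)                      # 37 successes in 100 trials
theta_hat = np.array([37.0 / 100.0])  # closed-form MLE for the toy model
H = numerical_hessian(lambda t: loglik(t, data), theta_hat)
cov = np.linalg.inv(-H)               # inverse observed information
se = np.sqrt(np.diag(cov))
print(se)                             # ~0.0483
```

    The resulting standard error matches the closed-form sqrt(p*(1-p)/n) for the Bernoulli model, which is a convenient sanity check on the step size h.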