162 research outputs found

    Numerical Algorithms for Polynomial Optimisation Problems with Applications

    In this thesis, we study tensor eigenvalue problems and polynomial optimization problems. In particular, we present a fast algorithm for computing the spectral radii of symmetric nonnegative tensors that does not require partitioning the tensors. We also propose polynomial-time approximation algorithms, with new approximation bounds, for nonnegative polynomial optimization problems over unit spheres. Furthermore, we develop an efficient and effective algorithm for the maximum clique problem.
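
    For reference, the tensor eigenvalue problem underlying these results is the standard one for an order-m, dimension-n symmetric tensor; the notation below is a summary of that standard definition, not quoted from the thesis.

```latex
% Standard (H-)eigenvalue problem for an order-m, dimension-n tensor A:
% a pair (lambda, x), x nonzero, satisfying componentwise
\[
  \bigl(\mathcal{A}x^{m-1}\bigr)_i
    \;=\; \sum_{i_2,\dots,i_m=1}^{n} a_{i\,i_2\cdots i_m}\, x_{i_2}\cdots x_{i_m}
    \;=\; \lambda\, x_i^{\,m-1}, \qquad i = 1,\dots,n .
\]
% The spectral radius studied in the thesis is then
\[
  \rho(\mathcal{A}) \;=\; \max\bigl\{ |\lambda| : \lambda \text{ is an eigenvalue of } \mathcal{A} \bigr\}.
\]
```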

    Finding the spectral radius of a nonnegative irreducible symmetric tensor via DC programming

    The Perron-Frobenius theorem says that the spectral radius of an irreducible nonnegative tensor is the unique positive eigenvalue corresponding to a positive eigenvector. With this in mind, the purpose of this paper is to find the spectral radius and its corresponding positive eigenvector of an irreducible nonnegative symmetric tensor. By transforming the eigenvalue problem into an equivalent problem of minimizing a concave function over a closed convex set, which is a typical DC (difference of convex functions) program, we derive a simpler and cheaper iterative method. The proposed method is well-defined. Furthermore, we show that the sequences of eigenvalue estimates and eigenvector iterates generated by the method converge Q-linearly to the spectral radius and its corresponding eigenvector, respectively. To accelerate the method, we introduce a line search technique. The improved method retains the same convergence property as the original version. Preliminary numerical results show that the improved method performs quite well.
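
    The DC-programming iteration itself is not reproduced here. As a point of reference for what such methods compute, below is a minimal sketch of the classical power-type (Ng-Qi-Zhou style) iteration for the spectral radius of an irreducible nonnegative tensor; it is explicitly not this paper's method, the function names are illustrative, and in practice a small diagonal shift is often added to guarantee convergence.

```python
import numpy as np

def apply_tensor(A, x):
    """Return the vector A x^{m-1} for a dense order-m tensor A:
    (A x^{m-1})_i = sum_{i2..im} A[i, i2, ..., im] * x[i2] * ... * x[im]."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x          # contract the current last axis with x
    return y

def power_iteration_spectral_radius(A, tol=1e-10, max_iter=1000):
    """Power-type (Ng-Qi-Zhou style) sketch for the spectral radius of an
    irreducible nonnegative tensor.  NOT the DC-programming method of the
    paper above; shown only as the classical baseline such methods improve on."""
    m, n = A.ndim, A.shape[0]
    x = np.full(n, 1.0 / n)                 # strictly positive starting vector
    lam_lo = lam_hi = 0.0
    for _ in range(max_iter):
        y = apply_tensor(A, x)
        ratios = y / x ** (m - 1)
        lam_lo, lam_hi = ratios.min(), ratios.max()  # bracket the spectral radius
        if lam_hi - lam_lo <= tol * max(lam_hi, 1.0):
            break
        x = y ** (1.0 / (m - 1))
        x /= x.sum()                        # renormalise, keep x positive
    return 0.5 * (lam_lo + lam_hi), x
```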

    Computing the extremal nonnegative solutions of the M-tensor equation with a nonnegative right side vector

    We consider the tensor equation whose coefficient tensor is a nonsingular M-tensor and whose right side vector is nonnegative. Such a tensor equation may have a large number of nonnegative solutions. It is already known that the tensor equation has a maximal nonnegative solution and a minimal nonnegative solution (collectively called the extremal solutions). However, the existing proofs do not show how the extremal solutions can be computed. The existing numerical methods can find one of the nonnegative solutions, without knowing whether the computed solution is an extremal solution. In this paper, we present new proofs of the existence of the extremal solutions. Our proofs are much shorter than the existing ones and, more importantly, they yield numerical methods that can compute the extremal solutions. Linear convergence of these numerical methods is also proved under mild assumptions. Some of our discussions also allow the coefficient tensor to be a Z-tensor or allow the right side vector to have some negative elements.
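
    One concrete way to see how a minimal nonnegative solution can be computed is the monotone fixed-point iteration sketched below, based on the standard M-tensor splitting A = s I - B with B nonnegative elementwise and s > rho(B): starting from the zero vector the iterates increase monotonically toward the minimal nonnegative solution. This is a generic sketch under those assumptions, not necessarily the algorithm analysed in the paper, and the function name is illustrative.

```python
import numpy as np

def minimal_nonnegative_solution(B, s, b, tol=1e-12, max_iter=10000):
    """Monotone fixed-point sketch for the M-tensor equation A x^{m-1} = b,
    with A = s*I - B, B >= 0 elementwise, s > rho(B), and b >= 0, where I is
    the identity tensor satisfying (I x^{m-1})_i = x_i^{m-1}.
    Rewriting the equation as x = ((B x^{m-1} + b) / s)^{[1/(m-1)]} and
    iterating from x = 0 gives a nondecreasing sequence converging to the
    minimal nonnegative solution.  Generic illustration only, not claimed
    to be the paper's algorithm."""
    m, n = B.ndim, B.shape[0]
    x = np.zeros(n)
    for _ in range(max_iter):
        y = B
        for _ in range(m - 1):              # y := B x^{m-1}
            y = y @ x
        x_new = ((y + b) / s) ** (1.0 / (m - 1))
        if np.max(np.abs(x_new - x)) <= tol:
            return x_new
        x = x_new
    return x
```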

    Examples of Riemannian Manifolds with non-negative sectional curvature

    An updated version with a few corrections. Comment: 32 pages.

    Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs

    Laplacian mixture models identify overlapping regions of influence in unlabeled graph and network data in a scalable and computationally efficient way, yielding useful low-dimensional representations. By combining Laplacian eigenspace and finite mixture modeling methods, they provide probabilistic or fuzzy dimensionality reductions or domain decompositions for a variety of input data types, including mixture distributions, feature vectors, and graphs or networks. Provably optimal recovery by the algorithm is shown analytically for a nontrivial class of cluster graphs. Heuristic approximations for scalable high-performance implementations are described and empirically tested. Connections to PageRank and community detection in network analysis demonstrate the wide applicability of this approach. The origins of fuzzy spectral methods, beginning with generalized heat or diffusion equations in physics, are reviewed and summarized. Comparisons to other dimensionality reduction and clustering methods for challenging unsupervised machine learning problems are also discussed. Comment: 13 figures, 35 references.
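
    As a rough illustration of the Laplacian-eigenspace-plus-mixture idea described above, the sketch below embeds a graph using the low eigenvectors of its symmetric normalized Laplacian and fits a Gaussian mixture in that eigenspace to obtain soft cluster memberships. It is a generic spectral-embedding pipeline under assumed naming (soft_graph_clusters), not the paper's Laplacian mixture model, and it assumes SciPy and scikit-learn are available.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.mixture import GaussianMixture

def soft_graph_clusters(adjacency, k):
    """Fuzzy graph clustering sketch: spectral embedding via the normalized
    Laplacian, followed by a finite (Gaussian) mixture fit in that eigenspace.
    Generic pipeline for illustration, not the paper's exact model."""
    A = np.asarray(adjacency, dtype=float)
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # The k smallest-eigenvalue eigenvectors give the spectral embedding.
    _, vecs = eigh(L, subset_by_index=[0, k - 1])
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0)
    gmm.fit(vecs)
    return gmm.predict_proba(vecs)   # rows: nodes, columns: soft memberships
```
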
    • …