22 research outputs found

    Maximum block improvement and polynomial optimization


    A Tensor Analogy of Yuan's Theorem of the Alternative and Polynomial Optimization with Sign Structure

    Yuan's theorem of the alternative is an important theoretical tool in optimization, which provides a checkable certificate for the infeasibility of a strict inequality system involving two homogeneous quadratic functions. In this paper, we provide a tractable extension of Yuan's theorem of the alternative to the symmetric tensor setting. As an application, we establish that the optimal value of a class of nonconvex polynomial optimization problems with suitable sign structure (more explicitly, with essentially non-positive coefficients) can be computed by a related convex conic programming problem, and that the optimal solution of these nonconvex polynomial optimization problems can be recovered from the corresponding solution of the convex conic program. Moreover, we show that this class of nonconvex polynomial optimization problems enjoys an exact sum-of-squares relaxation, and so can be solved via a single semidefinite programming problem. Comment: accepted by the Journal of Optimization Theory and Applications, UNSW preprint, 22 pages
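The closing claim, that this problem class admits an exact sum-of-squares relaxation solvable as a single semidefinite program, follows the standard SOS-to-SDP reduction. Below is a minimal sketch of that reduction for a toy univariate quartic using cvxpy; the example polynomial and the solver choice are assumptions, and the paper's specific conic program is not reproduced here.

```python
# Minimal SOS-to-SDP sketch (assumptions: toy polynomial, cvxpy as solver).
# We compute the largest gamma with p(x) - gamma a sum of squares, where
# p(x) = x^4 - 3x^2 + 1.  Writing p(x) - gamma = z^T Q z for z = (1, x, x^2)
# and a PSD Gram matrix Q turns the bound into a single SDP.
import cvxpy as cp

Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()

constraints = [
    Q >> 0,                       # Gram matrix is positive semidefinite
    Q[0, 0] == 1 - gamma,         # constant coefficient of p(x) - gamma
    2 * Q[0, 1] == 0,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -3,  # coefficient of x^2
    2 * Q[1, 2] == 0,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
cp.Problem(cp.Maximize(gamma), constraints).solve()
print("SOS lower bound:", gamma.value)   # about -1.25, the true minimum here
```

For a univariate polynomial, nonnegativity and SOS coincide, so this toy bound is exact; the abstract's point is that exactness also holds for the multivariate class with essentially non-positive coefficients.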

    On convergence of the maximum block improvement method

    The MBI (maximum block improvement) method is a greedy approach to solving optimization problems whose decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates, at each iteration, the block of variables yielding the maximal improvement, which is arguably the most natural and simple way to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
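As a concrete illustration of the iteration just described, here is a minimal numpy sketch of MBI for the rank-one tensor model max T(x, y, z) over unit spheres; the third-order setting, random initialization, and stopping rule are assumptions rather than the paper's exact setup. Fixing all but one block makes the objective linear in that block, so the exact block update is a normalized contraction, and MBI applies only the update with the largest resulting value.

```python
# MBI sketch for max T(x, y, z) over unit spheres (assumptions: 3rd-order
# tensor, random start).  Each candidate block update is the contraction of T
# against the other two blocks, normalized; its objective value is the norm
# of that contraction, so MBI keeps the block with the largest norm.
import numpy as np

def mbi_rank1(T, iters=200, tol=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0]); x /= np.linalg.norm(x)
    y = rng.standard_normal(T.shape[1]); y /= np.linalg.norm(y)
    z = rng.standard_normal(T.shape[2]); z /= np.linalg.norm(z)
    f = np.einsum('ijk,i,j,k->', T, x, y, z)
    for _ in range(iters):
        g = [np.einsum('ijk,j,k->i', T, y, z),   # best x given (y, z)
             np.einsum('ijk,i,k->j', T, x, z),   # best y given (x, z)
             np.einsum('ijk,i,j->k', T, x, y)]   # best z given (x, y)
        vals = [np.linalg.norm(gi) for gi in g]
        b = int(np.argmax(vals))                 # maximally improving block
        if vals[b] <= tol or vals[b] - f < tol:  # no sufficient improvement
            break
        if b == 0:   x = g[0] / vals[0]
        elif b == 1: y = g[1] / vals[1]
        else:        z = g[2] / vals[2]
        f = vals[b]
    return f, (x, y, z)

T = np.random.default_rng(2).standard_normal((3, 4, 5))
print(mbi_rank1(T)[0])   # approximate spectral norm of T
```

In the order-2 symmetric case the exact block update reduces to a power-method step, matching the special case the abstract discusses.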

    On orthogonal tensors and best rank-one approximation ratio

    As is well known, the smallest possible ratio between the spectral norm and the Frobenius norm of an $m \times n$ matrix with $m \le n$ is $1/\sqrt{m}$ and is (up to scalar scaling) attained only by matrices having pairwise orthonormal rows. In the present paper, the smallest possible ratio between spectral and Frobenius norms of $n_1 \times \dots \times n_d$ tensors of order $d$, also called the best rank-one approximation ratio in the literature, is investigated. The exact value is not known for most configurations of $n_1 \le \dots \le n_d$. Using a natural definition of orthogonal tensors over the real field (resp., unitary tensors over the complex field), it is shown that the obvious lower bound $1/\sqrt{n_1 \cdots n_{d-1}}$ is attained if and only if a tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal or unitary tensors exist depends on the dimensions $n_1,\dots,n_d$ and the field. A connection between the (non)existence of real orthogonal tensors of order three and the classical Hurwitz problem on composition algebras can be established: existence of orthogonal tensors of size $\ell \times m \times n$ is equivalent to the admissibility of the triple $[\ell,m,n]$ to the Hurwitz problem. Some implications for higher-order tensors are then given. For instance, real orthogonal $n \times \dots \times n$ tensors of order $d \ge 3$ do exist, but only when $n = 1,2,4,8$. In the complex case, the situation is more drastic: unitary tensors of size $\ell \times m \times n$ with $\ell \le m \le n$ exist only when $\ell m \le n$. Finally, some numerical illustrations for spectral norm computation are presented
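The matrix fact in the opening sentence is easy to verify numerically. Here is a small check in numpy (an assumed tool; the paper's own numerical illustrations concern spectral norm computation for tensors and are not reproduced here):

```python
# Check that an m x n matrix with orthonormal rows attains the minimal ratio
# ||A||_2 / ||A||_F = 1/sqrt(m).  All its singular values equal 1, so the
# spectral norm is 1 while the Frobenius norm is sqrt(m).
import numpy as np

m, n = 4, 7
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal columns
A = Q.T                                           # (m, n), orthonormal rows
ratio = np.linalg.norm(A, 2) / np.linalg.norm(A, 'fro')
print(ratio, 1 / np.sqrt(m))                      # both 0.5
```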

    On the tensor spectral p-norm and its dual norm via partitions


    Approximation Algorithms for Sparse Best Rank-1 Approximation to Higher-Order Tensors

    Sparse tensor best rank-1 approximation (BR1Approx), a sparsity generalization of the dense tensor BR1Approx and a higher-order extension of the sparse matrix BR1Approx, is one of the most important problems in sparse tensor decomposition and in related problems arising from statistics and machine learning. By exploiting the multilinearity as well as the sparsity structure of the problem, four approximation algorithms are proposed; they are easily implemented, of low computational complexity, and can serve as initial procedures for iterative algorithms. In addition, theoretically guaranteed worst-case approximation lower bounds are proved for all of the algorithms. We provide numerical experiments on synthetic and real data to illustrate the effectiveness of the proposed algorithms
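The four algorithms themselves are not spelled out in the abstract, so the following numpy sketch is only a generic hard-thresholding heuristic for the third-order case, not the paper's method; the alternating scheme and the per-mode sparsity levels `ks` are assumptions. It alternates the exact block contractions of rank-one power iteration but keeps only the k largest-magnitude entries of each factor.

```python
# Generic sparse rank-1 heuristic (an assumption, not the paper's algorithms):
# alternate exact block contractions, then hard-threshold each factor to its
# k largest-magnitude entries and renormalize.
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest, normalize."""
    w = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    w[idx] = v[idx]
    nrm = np.linalg.norm(w)
    return w / nrm if nrm > 0 else w

def sparse_rank1(T, ks, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = [hard_threshold(rng.standard_normal(n), k) for n, k in zip(T.shape, ks)]
    for _ in range(iters):
        u[0] = hard_threshold(np.einsum('ijk,j,k->i', T, u[1], u[2]), ks[0])
        u[1] = hard_threshold(np.einsum('ijk,i,k->j', T, u[0], u[2]), ks[1])
        u[2] = hard_threshold(np.einsum('ijk,i,j->k', T, u[0], u[1]), ks[2])
    return np.einsum('ijk,i,j,k->', T, *u), u

T = np.random.default_rng(3).standard_normal((6, 6, 6))
value, factors = sparse_rank1(T, ks=(2, 2, 2))
print(value)   # objective value of the sparse rank-1 candidate
```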