A Tensor Analogy of Yuan's Theorem of the Alternative and Polynomial Optimization with Sign Structure
Yuan's theorem of the alternative is an important theoretical tool in
optimization, which provides a checkable certificate for the infeasibility of a
strict inequality system involving two homogeneous quadratic functions. In this
paper, we provide a tractable extension of Yuan's theorem of the alternative to
the symmetric tensor setting. As an application, we establish that the optimal
value of a class of nonconvex polynomial optimization problems with suitable
sign structure (or more explicitly, with essentially non-positive coefficients)
can be computed by a related convex conic programming problem, and the optimal
solution of these nonconvex polynomial optimization problems can be recovered
from the corresponding solution of the convex conic programming problem.
Moreover, we obtain that this class of nonconvex polynomial optimization
problems enjoys an exact sum-of-squares relaxation, and so can be solved via a
single semidefinite programming problem.
Comment: accepted by the Journal of Optimization Theory and Applications; UNSW preprint, 22 pages.
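The abstract does not spell out the conic program itself; as a minimal, self-contained illustration of the sum-of-squares idea behind an exact SDP relaxation, the sketch below verifies an SOS (Gram-matrix) certificate for the toy polynomial p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2. The polynomial and the matrix Q are illustrative choices, not objects taken from the paper.

```python
import numpy as np

# Toy polynomial p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2.
# An SOS certificate is a positive semidefinite Gram matrix Q with
# p(x) = m(x)^T Q m(x) for the monomial vector m(x) = [1, x, x^2].
Q = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 0.0,  0.0],
              [-1.0, 0.0,  1.0]])

def p(x):
    return x**4 - 2 * x**2 + 1

def gram_value(x):
    m = np.array([1.0, x, x**2])
    return m @ Q @ m

# 1) Q reproduces p on sample points (a degree-4 identity needs 5 points).
xs = np.linspace(-2.0, 2.0, 5)
assert np.allclose([p(x) for x in xs], [gram_value(x) for x in xs])

# 2) Q is positive semidefinite, so p is a sum of squares and hence
#    globally nonnegative; an SDP solver searches for such a Q in general.
print(np.linalg.eigvalsh(Q).min() >= -1e-12)  # True
```

In general, finding a PSD Gram matrix matching the coefficients of a given polynomial is exactly the semidefinite feasibility problem that an SOS relaxation solves.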
On convergence of the maximum block improvement method
The MBI (maximum block improvement) method is a greedy approach to solving optimization problems whose decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates the maximally improving block at each iteration, which is arguably a most natural and simple process for tackling block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Lojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
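As a concrete instance of the scheme described above, the hedged sketch below applies MBI to the two-block bilinear problem max{x^T A y : ||x|| = ||y|| = 1}, whose block subproblems have closed-form maximizers; updating only the maximally improving block yields a power-method-like iteration converging to the largest singular value. The problem instance is an illustrative choice, not an experiment from the paper.

```python
import numpy as np

def mbi_bilinear(A, iters=100):
    """Maximum block improvement for max x^T A y over unit vectors x, y.

    Each block subproblem has a closed-form maximizer: with y fixed, the
    best x is A y / ||A y|| (value ||A y||), and symmetrically for y.
    MBI updates only the block whose best response improves the objective most.
    """
    m, n = A.shape
    x = np.ones(m) / np.sqrt(m)
    y = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        vx = A @ y    # best-response value for the x-block is ||vx||
        vy = A.T @ x  # best-response value for the y-block is ||vy||
        if np.linalg.norm(vx) >= np.linalg.norm(vy):
            x = vx / np.linalg.norm(vx)
        else:
            y = vy / np.linalg.norm(vy)
    return x @ A @ y

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
val = mbi_bilinear(A)
print(abs(val - 3.0) < 1e-8)  # True: converges to the largest singular value
```

For a symmetric matrix this recovers the classical power method mentioned in the abstract, which is the d = 2 case of the spherically constrained tensor model.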
On orthogonal tensors and best rank-one approximation ratio
As is well known, the smallest possible ratio between the spectral norm and
the Frobenius norm of an m × n matrix with m ≤ n is 1/√m, and it is (up to
scalar scaling) attained only by matrices having pairwise orthonormal rows. In
the present paper, the smallest possible ratio between the spectral and
Frobenius norms of tensors of order d and size n_1 × ⋯ × n_d, also called the
best rank-one approximation ratio in the literature, is investigated. The exact
value is not known for most configurations of n_1 ≤ ⋯ ≤ n_d. Using a natural
definition of orthogonal tensors over the real field (resp., unitary tensors
over the complex field), it is shown that the obvious lower bound
1/√(n_1 ⋯ n_{d-1}) is attained if and only if a tensor is orthogonal (resp.,
unitary) up to scaling. Whether or not orthogonal or unitary tensors exist
depends on the dimensions n_1, …, n_d and the field. A connection between the
(non)existence of real orthogonal tensors of order three and the classical
Hurwitz problem on composition algebras can be established: existence of
orthogonal tensors of size ℓ × m × n is equivalent to the admissibility of the
triple [ℓ, m, n] to the Hurwitz problem. Some implications for higher-order
tensors are then given. For instance, real orthogonal n × ⋯ × n tensors of
order d ≥ 3 do exist, but only when n ∈ {1, 2, 4, 8}. In the complex case, the
situation is more drastic: unitary tensors of size ℓ × m × n with ℓ ≤ m ≤ n
exist only when ℓm ≤ n. Finally, some numerical illustrations for spectral
norm computation are presented.
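To make the ratio concrete, the sketch below evaluates it for the 2 × 2 × 2 multiplication tensor of the complex numbers, a standard example of a real orthogonal tensor. The spectral norm is estimated with a plain higher-order power iteration, an assumed stand-in for whatever method the paper's numerical illustrations actually use.

```python
import numpy as np

# Multiplication tensor of C over R: (x*y)_k = sum_{i,j} T[k, i, j] x_i y_j
# for complex numbers x = x_0 + i*x_1, y = y_0 + i*y_1.
T = np.zeros((2, 2, 2))
T[0, 0, 0], T[0, 1, 1] = 1.0, -1.0  # real part: x0*y0 - x1*y1
T[1, 0, 1], T[1, 1, 0] = 1.0, 1.0   # imag part: x0*y1 + x1*y0

def spectral_norm(T, iters=50):
    """Estimate max T(x, y, z) over unit vectors via higher-order power iteration."""
    x = np.array([1.0, 0.0])
    y = np.array([1.0, 0.0])
    z = np.array([1.0, 0.0])
    for _ in range(iters):
        x = np.einsum('kij,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('kij,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('kij,i,j->k', T, x, y); z /= np.linalg.norm(z)
    return np.einsum('kij,i,j,k->', T, x, y, z)

sigma = spectral_norm(T)  # = 1, since |x*y| = |x||y| for complex numbers
fro = np.linalg.norm(T)   # = 2
print(abs(sigma / fro - 0.5) < 1e-8)  # True: the bound 1/sqrt(n1*n2) is attained
```

The slices T[0] and T[1] are orthogonal 2 × 2 matrices, which is why this tensor attains the lower bound 1/√(n_1 n_2) = 1/2 exactly, consistent with n = 2 lying in {1, 2, 4, 8}.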
Approximation Algorithms for Sparse Best Rank-1 Approximation to Higher-Order Tensors
Sparse tensor best rank-1 approximation (BR1Approx), a sparsity
generalization of the dense tensor BR1Approx and a higher-order extension of
the sparse matrix BR1Approx, is one of the most important problems in sparse
tensor decomposition and in related problems arising from statistics and
machine learning. By exploiting the multilinearity as well as the sparsity
structure of the problem, four approximation algorithms are proposed; they are
easily implemented, of low computational complexity, and can serve as
initialization procedures for iterative algorithms. In addition, theoretically
guaranteed worst-case approximation lower bounds are proved for all of the
algorithms. We provide numerical experiments on synthetic and real data to
illustrate the effectiveness of the proposed algorithms.
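The abstract does not specify the four algorithms; as a hedged illustration of the problem itself, the sketch below runs a plain truncation heuristic (power iteration with hard thresholding of each factor to its k largest-magnitude entries). This is one simple baseline of the kind such approximation algorithms could serve to initialize, not the authors' method.

```python
import numpy as np

def truncate(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest, renormalize."""
    w = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    w[idx] = v[idx]
    return w / np.linalg.norm(w)

def sparse_rank1(T, ks, iters=50):
    """Sparse rank-1 heuristic: power iteration with hard thresholding.

    ks = (k1, k2, k3) are the sparsity levels of the three factor vectors;
    the iteration starts from the largest-magnitude entry of T.
    Returns the attained objective value T(x, y, z).
    """
    i0, j0, k0 = np.unravel_index(np.argmax(np.abs(T)), T.shape)
    x, y, z = (np.zeros(n) for n in T.shape)
    x[i0] = y[j0] = z[k0] = 1.0
    for _ in range(iters):
        x = truncate(np.einsum('ijk,j,k->i', T, y, z), ks[0])
        y = truncate(np.einsum('ijk,i,k->j', T, x, z), ks[1])
        z = truncate(np.einsum('ijk,i,j->k', T, x, y), ks[2])
    return np.einsum('ijk,i,j,k->', T, x, y, z)

# Sanity check on a tensor that is exactly rank one with sparse factors:
a = np.array([1.0, 0.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 0.0])
c = np.array([1.0, 2.0])
T = np.einsum('i,j,k->ijk', a, b, c)
best = np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)  # = 15
print(abs(sparse_rank1(T, ks=(2, 1, 2)) - best) < 1e-8)  # True
```

On this easy rank-one instance the heuristic recovers the sparse factors exactly; the point of the paper's worst-case bounds is to certify performance when no such structure is guaranteed.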