On convergence of the maximum block improvement method
Abstract. The MBI (maximum block improvement) method is a greedy approach to solving optimization problems in which the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates, at each iteration, the block of variables that yields the maximal improvement, arguably one of the most natural and simple ways to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the classical power method for computing the largest eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
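To make the update rule described in the abstract concrete, here is a minimal Python sketch of a generic MBI loop. The names `mbi`, `block_solve`, and `f` are hypothetical, and the per-block solver is assumed to be supplied by the caller; this is an illustration of the greedy block-selection idea under those assumptions, not the paper's implementation.

import numpy as np

def mbi(blocks, block_solve, f, max_iter=100, tol=1e-8):
    """Maximum block improvement: at each iteration, tentatively optimize
    every block with the others held fixed, then commit only the single
    block update that yields the largest gain in the objective f."""
    x = [b.copy() for b in blocks]
    val = f(x)
    for _ in range(max_iter):
        best_gain, best_i, best_b = 0.0, None, None
        for i in range(len(x)):
            candidate = block_solve(i, x)          # argmax over block i, others fixed
            trial = x[:i] + [candidate] + x[i+1:]
            gain = f(trial) - val
            if gain > best_gain:
                best_gain, best_i, best_b = gain, i, candidate
        if best_i is None or best_gain < tol:      # no (sufficient) improvement left
            break
        x[best_i] = best_b                         # commit only the best block
        val += best_gain
    return x, val

For the spherical rank-one tensor model, maximizing <T, x⊗y⊗z> over unit vectors, the per-block solve contracts the tensor against the two fixed vectors and normalizes the result; with two blocks and a matrix in place of T, the iteration reduces to the power method mentioned in the abstract.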
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. With standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches, such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster when the application of the linear operator is expensive.
Comment: 24 pages, 8 figures