Computing the Best Approximation Over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone
This paper introduces an efficient algorithm for computing the best
approximation of a given matrix onto the intersection of linear equalities,
inequalities and the doubly nonnegative cone (the cone of all positive
semidefinite matrices whose elements are nonnegative). In contrast to directly
applying the block coordinate descent type methods, we propose an inexact
accelerated (two-)block coordinate descent algorithm to tackle the four-block
unconstrained nonsmooth dual program. The proposed algorithm hinges on the
efficient semismooth Newton method to solve the subproblems, which have no
closed form solutions since the original four blocks are merged into two larger
blocks. The iteration complexity of the proposed algorithm is
established. Extensive numerical results over various large scale semidefinite
programming instances from relaxations of combinatorial problems demonstrate
the effectiveness of the proposed algorithm.
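The paper's method is an inexact accelerated two-block coordinate descent on the dual with semismooth Newton subproblem solves. As a much simpler baseline illustrating the underlying best-approximation problem, the sketch below uses Dykstra's alternating-projection algorithm to project a symmetric matrix onto the doubly nonnegative cone (PSD matrices with nonnegative entries); this is an assumption-laden stand-in for illustration, not the paper's algorithm, and it ignores the additional linear equality/inequality constraints.

```python
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone via an eigendecomposition."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

def proj_dnn_dykstra(G, iters=500):
    """Best approximation of G in the doubly nonnegative cone
    (PSD with nonnegative entries) via Dykstra's algorithm.
    Simple baseline only, not the paper's accelerated dual method."""
    X = G.copy()
    P = np.zeros_like(G)  # correction term for the PSD projection
    Q = np.zeros_like(G)  # correction term for the nonnegativity projection
    for _ in range(iters):
        Y = proj_psd(X + P)
        P = X + P - Y
        X = np.maximum(Y + Q, 0.0)
        Q = Y + Q - X
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
G = (A + A.T) / 2
X = proj_dnn_dykstra(G)
print("min entry:", X.min(), "min eigenvalue:", np.linalg.eigvalsh(X).min())
```

Unlike plain alternating projections, Dykstra's correction terms make the iterates converge to the *nearest* point of the intersection, which is exactly the best-approximation property the paper targets at much larger scale.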
An overview of block Gram-Schmidt methods and their stability properties
Block Gram-Schmidt algorithms serve as essential kernels in many scientific
computing applications, but for many commonly used variants, a rigorous
treatment of their stability properties remains open. This survey provides a
comprehensive categorization of block Gram-Schmidt algorithms, particularly
those used in Krylov subspace methods to build orthonormal bases one block
vector at a time. All known stability results are assembled, and new results
are summarized or conjectured for important communication-reducing variants.
Additionally, new block versions of low-synchronization variants are derived,
and their efficacy and stability are demonstrated for a wide range of
challenging examples. Low-synchronization variants appear remarkably stable for
s-step-like matrices built with Newton polynomials, pointing towards a new
stable and efficient backbone for Krylov subspace methods. Numerical examples
are computed with a versatile MATLAB package hosted at
https://github.com/katlund/BlockStab, and scripts for reproducing all results
in the paper are provided. Block Gram-Schmidt implementations in popular
software packages are discussed, along with a number of open problems. An
appendix containing all algorithms typeset in a uniform fashion is provided.
Comment: 42 pages, 5 tables, 17 figures, 20 algorithms
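To make the object of study concrete, here is a minimal sketch of the textbook block classical Gram-Schmidt (BCGS) loop, which orthonormalizes a matrix one block of columns at a time: each new block is projected against all previously computed basis vectors, then orthogonalized internally with a tall-skinny QR. This is only the naive variant; the survey's point is precisely that stabler and communication-reducing alternatives exist, and this sketch is not taken from the BlockStab package.

```python
import numpy as np

def bcgs(X, block_size):
    """Block classical Gram-Schmidt: orthonormalize the columns of X
    one block of `block_size` columns at a time.
    Naive textbook variant; loss of orthogonality grows with cond(X)."""
    m, n = X.shape
    Q = np.zeros((m, n))
    for k in range(0, n, block_size):
        Xk = X[:, k:k + block_size].copy()
        # Single block projection against all previous basis vectors
        # (one large matrix-matrix product -- the communication kernel).
        Xk -= Q[:, :k] @ (Q[:, :k].T @ Xk)
        # Intra-block orthogonalization via a tall-skinny QR.
        Q[:, k:k + block_size], _ = np.linalg.qr(Xk)
    return Q

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 12))
Q = bcgs(X, block_size=4)
loss = np.linalg.norm(Q.T @ Q - np.eye(12))
print("loss of orthogonality:", loss)
```

Grouping the projections into block matrix products is what makes these kernels attractive on modern hardware: the same arithmetic is done with far fewer synchronizations than column-by-column Gram-Schmidt.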
Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Block coordinate descent (BCD) methods are widely-used for large-scale
numerical optimization because of their cheap iteration costs, low memory
requirements, amenability to parallelization, and ability to exploit problem
structure. Three main algorithmic choices influence the performance of BCD
methods: the block partitioning strategy, the block selection rule, and the
block update rule. In this paper we explore all three of these building blocks
and propose variations for each that can lead to significantly faster BCD
methods. We (i) propose new greedy block-selection strategies that guarantee
more progress per iteration than the Gauss-Southwell rule; (ii) explore
practical issues like how to implement the new rules when using "variable"
blocks; (iii) explore the use of message-passing to compute matrix or Newton
updates efficiently on huge blocks for problems with a sparse dependency
between variables; and (iv) consider optimal active manifold identification,
which yields bounds on the "active set complexity" of BCD methods and leads
to superlinear convergence for certain problems with sparse solutions (and in
some cases finite termination at an optimal solution). We support all of our
findings with numerical results for the classic machine learning problems of
least squares, logistic regression, multi-class logistic regression, label
propagation, and L1-regularization.
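As a concrete baseline for the block selection and update rules discussed above, the sketch below runs block coordinate descent on least squares with the classical Gauss-Southwell rule (pick the block with the largest gradient norm) and an exact matrix update per block. This is an illustrative reference implementation under fixed rectangular blocks, not the paper's refined greedy rules or message-passing updates.

```python
import numpy as np

def bcd_gauss_southwell(A, b, blocks, iters=200):
    """Block coordinate descent for min 0.5*||Ax - b||^2.
    Block selection: Gauss-Southwell (largest block gradient norm).
    Block update: exact minimization via the block's normal equations.
    Baseline sketch with fixed blocks, for illustration only."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)  # full gradient (cheap here; see paper for large-scale tricks)
        j = max(range(len(blocks)),
                key=lambda i: np.linalg.norm(g[blocks[i]]))
        B = blocks[j]
        Aj = A[:, B]
        # Exact "matrix update": Newton step restricted to block B.
        x[B] -= np.linalg.solve(Aj.T @ Aj, g[B])
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 12))
b = rng.standard_normal(50)
blocks = [np.arange(k, k + 4) for k in range(0, 12, 4)]
x = bcd_gauss_southwell(A, b, blocks)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to least-squares solution:", np.linalg.norm(x - x_star))
```

Because each block update is an exact minimization over that block, the only remaining design freedom is the selection rule, which is exactly the lever the paper's new greedy strategies improve on.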
A nonlinear vehicle-structure interaction methodology with wheel-rail detachment and reattachment
A vehicle-structure interaction methodology with a nonlinear contact formulation
based on contact and target elements has been developed. To solve the dynamic equations of
motion, an incremental formulation has been used due to the nonlinear nature of the contact
mechanics, while a procedure based on the Lagrange multiplier method imposes the contact
constraint equations when contact occurs. The system of nonlinear equations is solved by an
efficient block factorization solver that reorders the system matrix and isolates the nonlinear
terms that belong to the contact elements or to other nonlinear elements that may be incorporated
in the model. Such a procedure avoids multiple unnecessary factorizations of the linear
terms during each Newton iteration, making the formulation efficient and computationally
attractive. A numerical example has been carried out to validate the accuracy and efficiency
of the present methodology. The obtained results have shown a good agreement with the results
obtained with the commercial finite element software ANSYS.
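The key efficiency idea above, isolating the nonlinear contact terms so the large linear block is factorized only once per time step, can be sketched as a block elimination (Schur complement) solve. The partitioning below is a hypothetical illustration: `K_ll` is the large linear block (factorized once; here, for brevity, its inverse is precomputed, whereas a real code would keep an LU or LDL^T factorization), and only the small contact block `K_nn` would change between Newton iterations.

```python
import numpy as np

def schur_solve(K_ll_inv, K_ln, K_nl, K_nn, f_l, f_n):
    """Solve the blocked system [[K_ll, K_ln], [K_nl, K_nn]] [u_l; u_n] = [f_l; f_n]
    reusing the precomputed (expensive) work on the large linear block K_ll;
    only the small nonlinear/contact block contributes new work per iteration."""
    # Small Schur complement: all new factorization work is of size n_con.
    S = K_nn - K_nl @ (K_ll_inv @ K_ln)
    u_n = np.linalg.solve(S, f_n - K_nl @ (K_ll_inv @ f_l))
    u_l = K_ll_inv @ (f_l - K_ln @ u_n)
    return u_l, u_n

rng = np.random.default_rng(3)
n_lin, n_con = 40, 3  # many linear DOFs, few contact DOFs
M = rng.standard_normal((n_lin + n_con, n_lin + n_con))
K = M @ M.T + (n_lin + n_con) * np.eye(n_lin + n_con)  # SPD test matrix
K_ll, K_ln = K[:n_lin, :n_lin], K[:n_lin, n_lin:]
K_nl, K_nn = K[n_lin:, :n_lin], K[n_lin:, n_lin:]
K_ll_inv = np.linalg.inv(K_ll)  # done once, reused across Newton iterations
f = rng.standard_normal(n_lin + n_con)
u_l, u_n = schur_solve(K_ll_inv, K_ln, K_nl, K_nn, f[:n_lin], f[n_lin:])
u_ref = np.linalg.solve(K, f)
print("error vs. direct solve:", np.linalg.norm(np.concatenate([u_l, u_n]) - u_ref))
```

Since the Schur complement `S` has the dimension of the contact block only, each Newton iteration costs a small dense solve plus back-substitutions, which is the computational saving the abstract describes.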
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward--backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
± rank-r symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-r modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piecewise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against
the state of the art. The algorithm has extensive applications, including signal
processing, sparse recovery, machine learning, and classification, to name a few.
Comment: arXiv admin note: text overlap with arXiv:1206.115
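In the simplest (purely diagonal) case of the metrics above, forward-backward splitting stays easy to state: the proximal mapping of the l1 norm in a diagonal metric is still separable, reducing to componentwise soft-thresholding with per-coordinate thresholds. The sketch below applies this to l1-regularized least squares with a constant diagonal metric; it is a minimal illustration of the splitting scheme only, and does not implement the paper's rank-r metric corrections or its dual proximal calculus.

```python
import numpy as np

def fbs_diag_metric(A, b, lam, d, iters=2000):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1
    under a diagonal metric D = diag(d):  x+ = prox_D(x - D^{-1} grad f(x)).
    In a diagonal metric the l1 prox is separable: componentwise
    soft-thresholding with thresholds lam / d_i. Sketch only; the paper's
    methods additionally handle rank-r modifications of the metric."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        z = x - g / d                                          # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)  # backward step
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
# Constant diagonal metric >= Lipschitz constant of the gradient.
d = np.full(10, np.linalg.norm(A, 2) ** 2)
x = fbs_diag_metric(A, b, lam=0.5, d=d)
print("nonzeros in solution:", np.count_nonzero(x))
```

With `d` constant this reduces to classical ISTA; the point of a genuinely variable (quasi-Newton) metric is to take larger, curvature-aware steps while keeping the prox evaluation this cheap.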