A Bramble-Pasciak-like method with applications in optimization
Saddle-point systems arise in many application areas, indeed in any situation where an extremum principle with constraints arises. The Stokes problem describing slow viscous flow of an incompressible fluid is a classic example from partial differential equations, and in Optimization such problems are ubiquitous.
In this manuscript we show how new approaches for the solution of saddle-point systems arising in Optimization can be derived from the Bramble-Pasciak Conjugate Gradient approach widely used in PDEs and from more recent generalizations thereof. In particular, we derive a class of new solution methods based on the use of Preconditioned Conjugate Gradients in non-standard inner products and demonstrate how these can be understood through more standard machinery. We show connections to Constraint Preconditioning and give the results of numerical computations on a number of standard Optimization test examples.
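As an illustration of the central idea, the sketch below runs conjugate gradients in a non-standard inner product: for symmetric positive definite A and P, the operator P^{-1}A is self-adjoint in the P-inner product, and CG in that inner product reproduces classical preconditioned CG. This is a minimal sketch of the generic machinery, not the Bramble-Pasciak method itself; the test matrices and the Jacobi preconditioner are illustrative choices.

```python
import numpy as np

def cg_in_inner_product(apply_B, inner, b, x0, tol=1e-10, maxiter=500):
    """CG for B x = b, where B is self-adjoint and positive definite
    with respect to the (possibly non-standard) inner product `inner`."""
    x = x0.copy()
    r = b - apply_B(x)
    p = r.copy()
    rr = inner(r, r)
    for _ in range(maxiter):
        Bp = apply_B(p)
        alpha = rr / inner(p, Bp)
        x = x + alpha * p
        r = r - alpha * Bp
        rr_new = inner(r, r)
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# For SPD A and P, B = P^{-1} A is self-adjoint in <u, v>_P = u' P v,
# and CG in that inner product on B x = P^{-1} b reproduces classical
# preconditioned CG on A x = b.
rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)        # SPD system matrix (illustrative)
P = np.diag(np.diag(A))            # SPD Jacobi preconditioner
b = rng.standard_normal(n)

x = cg_in_inner_product(lambda v: np.linalg.solve(P, A @ v),
                        lambda u, v: u @ (P @ v),
                        np.linalg.solve(P, b),
                        np.zeros(n))
print(np.linalg.norm(A @ x - b))   # small residual
```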
A weakly stable algorithm for general Toeplitz systems
We show that a fast algorithm for the QR factorization of a Toeplitz or
Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A.
Thus, when the algorithm is used to solve the semi-normal equations R^T R x =
A^T b, we obtain a weakly stable method for the solution of a nonsingular
Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the
solution of the full-rank Toeplitz or Hankel least squares problem.
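A minimal sketch of the semi-normal-equations step, assuming an R factor is already available; a dense QR stands in here for the fast Toeplitz QR factorization, and the test matrix is illustrative:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_triangular

rng = np.random.default_rng(1)
n = 200
A = toeplitz(rng.standard_normal(n), rng.standard_normal(n))  # test Toeplitz matrix
b = rng.standard_normal(n)

# A dense QR stands in for the fast Toeplitz QR; only R is needed.
R = np.linalg.qr(A, mode='r')

# Semi-normal equations R^T R x = A^T b: two triangular solves, no Q.
y = solve_triangular(R, A.T @ b, trans='T')
x = solve_triangular(R, y)
print(np.linalg.norm(A @ x - b))   # residual norm
```

Note that Q is never needed: once R is known, the solve costs only one matrix-vector product with A^T and two triangular solves.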
Fast model-fitting of Bayesian variable selection regression using the iterative complex factorization algorithm
Bayesian variable selection regression (BVSR) can jointly analyze genome-wide
genetic datasets, but slow computation via Markov chain Monte Carlo (MCMC)
has hampered its widespread use. Here we present a novel iterative
method to solve a special class of linear systems, which can increase the speed
of the BVSR model-fitting tenfold. The iterative method hinges on the complex
factorization of the sum of two matrices and the solution path resides in the
complex domain (instead of the real domain). Compared to the Gauss-Seidel
method, the complex factorization converges almost instantaneously, and its
error is several orders of magnitude smaller than that of the Gauss-Seidel
method. More importantly, its error always stays within the pre-specified
precision, whereas the Gauss-Seidel method's does not. For large problems with thousands of covariates,
the complex factorization is 10 -- 100 times faster than either the
Gauss-Seidel method or the direct method via the Cholesky decomposition. In
BVSR, one needs to repeatedly solve large penalized regression systems whose
design matrices change only slightly between adjacent MCMC steps. This slight
change in the design matrix enables the adaptation of the iterative complex
factorization method. The computational innovation will facilitate the
widespread use of BVSR in reanalyzing genome-wide association datasets.
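The iterative complex factorization itself is not reproduced here, but the Gauss-Seidel baseline it is compared against is easy to sketch on a penalized regression system of the kind that arises at each MCMC step; the system (X'X + D) beta = X'y and all names below are illustrative assumptions.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-8, maxiter=10_000):
    """Plain Gauss-Seidel iteration for A x = b (diagonal of A nonzero).
    Converges whenever A is symmetric positive definite."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxiter):
        x_old = x.copy()
        for i in range(n):
            # Sum over all off-diagonal terms, using updated entries x[:i].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) <= tol * (np.linalg.norm(x) + 1e-30):
            break
    return x

# Penalized regression system of the kind solved at each MCMC step:
# (X'X + D) beta = X'y, with a diagonal penalty D that changes only
# slightly between steps (all names illustrative).
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 50))
y = rng.standard_normal(500)
D = np.diag(rng.uniform(1.0, 2.0, size=50))
A = X.T @ X + D
beta = gauss_seidel(A, X.T @ y)
print(np.linalg.norm(A @ beta - X.T @ y))   # residual norm
```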
Prior-preconditioned conjugate gradient method for accelerated Gibbs sampling in "large n & large p" sparse Bayesian regression
In a modern observational study based on healthcare databases, the number of
observations and of predictors typically range in the order of 10^5 -- 10^6
and of 10^4 -- 10^5, respectively. Despite the large sample size, data rarely provide
sufficient information to reliably estimate such a large number of parameters.
Sparse regression techniques provide potential solutions, one notable approach
being the Bayesian methods based on shrinkage priors. In the "large n & large
p" setting, however, posterior computation encounters a major bottleneck at
repeated sampling from a high-dimensional Gaussian distribution, whose
precision matrix Φ is expensive to compute and factorize. In this article,
we present a novel algorithm to speed up this bottleneck based on the following
observation: we can cheaply generate a random vector b such that the solution
β of the linear system Φβ = b has the desired Gaussian distribution. We
can then solve this linear system by the conjugate gradient (CG) algorithm
through matrix-vector multiplications by Φ, without ever explicitly
inverting Φ. Rapid convergence of CG in this specific context is achieved
by the theory of prior-preconditioning we develop. We apply our algorithm to a
clinically relevant large-scale observational study with n = 72,489 patients
and p = 22,175 clinical covariates, designed to assess the relative risk of
adverse events from two alternative blood anti-coagulants. Our algorithm
demonstrates an order-of-magnitude speed-up in the posterior computation.
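A minimal matrix-free sketch of the sampling identity and the prior-term preconditioning described above, assuming for illustration a precision matrix of the form Φ = X'X + diag(d) with diagonal prior precision d; all names are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n, p = 1000, 200
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
d = rng.uniform(0.5, 5.0, size=p)          # diagonal prior precision (illustrative)

# Target: beta ~ N(Phi^{-1} X'y, Phi^{-1}) with Phi = X'X + diag(d).
# If b = X'(y + z1) + sqrt(d) * z2 with z1 ~ N(0, I_n), z2 ~ N(0, I_p),
# then beta = Phi^{-1} b has exactly this distribution, because
# Cov(X'z1 + sqrt(d) * z2) = X'X + diag(d) = Phi.
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(p)
b = X.T @ (y + z1) + np.sqrt(d) * z2

# Matrix-free CG: apply Phi without ever forming or factorizing it,
# preconditioned by the trivially invertible prior term diag(d).
Phi = LinearOperator((p, p), matvec=lambda v: X.T @ (X @ v) + d * v,
                     dtype=np.float64)
M = LinearOperator((p, p), matvec=lambda v: v / d, dtype=np.float64)
beta, info = cg(Phi, b, M=M)
print(info)                                 # 0 means CG converged
```

Each CG step costs two matrix-vector products with X, so the p x p precision matrix is never formed, which is the point of the matrix-free approach in the "large n & large p" regime.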
Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections
This work focuses on the iterative solution of sequences of KKT linear
systems arising in interior point methods applied to large convex quadratic
programming problems. This task is the computational core of the interior point
procedure, and an effective preconditioning strategy is crucial for the
efficiency of the overall method. Constraint preconditioners are very effective
in this context; nevertheless, their computation may be very expensive for
large-scale problems, so resorting to approximations of them may be
convenient. Here we propose a procedure for building inexact constraint
preconditioners by updating a "seed" constraint preconditioner computed for a
KKT matrix at a previous interior point iteration. These updates are obtained
through low-rank corrections of the Schur complement of the (1,1) block of the
seed preconditioner. The updated preconditioners are analyzed both
theoretically and computationally. The results obtained show that our updating
procedure, coupled with an adaptive strategy for determining whether to
reinitialize or update the preconditioner, can enhance the performance of
interior point methods on large problems.
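The low-rank updating scheme itself is not reproduced here, but the constraint-preconditioner setup it builds on can be sketched: keep the constraint blocks of the KKT matrix exactly and replace the (1,1) block by a cheap approximation such as its diagonal. A minimal sketch, with the random test problem and all names illustrative:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, gmres, LinearOperator

rng = np.random.default_rng(4)
n, m = 300, 100
Q = sp.random(n, n, density=0.02, random_state=4)
Q = (Q @ Q.T + 10 * sp.eye(n)).tocsc()                 # SPD (1,1) block
A = sp.hstack([sp.eye(m),                              # identity block guarantees
               sp.random(m, n - m, density=0.05,       # full row rank
                         random_state=5)]).tocsc()
K = sp.bmat([[Q, A.T], [A, None]], format='csc')       # KKT matrix
rhs = rng.standard_normal(n + m)

# Constraint preconditioner: constraint blocks kept exactly,
# (1,1) block replaced by its diagonal G = diag(Q).
P = sp.bmat([[sp.diags(Q.diagonal()), A.T], [A, None]], format='csc')
P_lu = splu(P)                                         # sparse factorization of P
M = LinearOperator(K.shape, matvec=P_lu.solve, dtype=np.float64)

x, info = gmres(K, rhs, M=M, restart=n + m)            # full GMRES, preconditioned by P
print(info, np.linalg.norm(K @ x - rhs))
```

Because the constraint blocks are reproduced exactly, the preconditioned matrix has a large cluster of eigenvalues at 1, which is what makes these preconditioners effective; the paper's contribution is to update the factorization of such a P cheaply across interior point iterations rather than recompute it.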