The Matrix Ridge Approximation: Algorithms and Applications
We are concerned with an approximation problem for a symmetric positive semidefinite matrix, motivated by a class of nonlinear machine learning methods. We discuss an approximation approach that we call the matrix ridge approximation. In particular, we define the matrix ridge approximation as an incomplete matrix factorization plus a ridge term. Moreover, we present probabilistic interpretations of this approximation approach using a normal latent variable model and a Wishart model. The idea behind the latent variable model in turn leads us to an efficient EM iterative method for handling the matrix ridge approximation problem. Finally, we illustrate applications of the approximation approach in multivariate data analysis. Empirical studies in spectral clustering and Gaussian process regression show that the matrix ridge approximation with the EM iteration is potentially useful.
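The abstract does not reproduce the EM updates, so the following is only a minimal sketch of how such an iteration could look for fitting $A \approx ZZ^T + \delta I_n$ to an SPD matrix $A$, using the classical covariance-form EM for probabilistic PCA (Tipping and Bishop) as a stand-in; the function name, initialization, and stopping rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def matrix_ridge_em(A, q, n_iter=200, tol=1e-8, seed=0):
    """Fit A ~ Z Z^T + delta * I_n for an SPD matrix A (n x n).

    PPCA-style EM sketch: A plays the role of a sample covariance,
    Z is n x q with q << n, and delta > 0 is the ridge term.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, q))
    delta = np.trace(A) / n  # crude initial ridge

    for _ in range(n_iter):
        M_inv = np.linalg.inv(Z.T @ Z + delta * np.eye(q))  # q x q
        AZ = A @ Z                                          # O(n^2 q) step
        Z_new = AZ @ np.linalg.inv(delta * np.eye(q) + M_inv @ Z.T @ AZ)
        delta_new = (np.trace(A) - np.sum((AZ @ M_inv) * Z_new)) / n
        done = np.linalg.norm(Z_new - Z) < tol
        Z, delta = Z_new, delta_new
        if done:
            break
    return Z, delta
```

Each iteration costs O(n^2 q), so for q much smaller than n this is far cheaper than a full eigendecomposition; the fitted approximation is `Z @ Z.T + delta * np.eye(n)`.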
Fast iterative solvers for convection-diffusion control problems
In this manuscript, we describe effective solvers for the optimal control of stabilized convection-diffusion problems. We employ the local projection stabilization, which we show to give the same matrix system whether the discretize-then-optimize or optimize-then-discretize approach for this problem is used. We then derive two effective preconditioners for this problem, the first to be used with MINRES and the second to be used with the Bramble-Pasciak Conjugate Gradient method. The key components of both preconditioners are an accurate mass matrix approximation, a good approximation of the Schur complement, and an appropriate multigrid process to enact this latter approximation. We present numerical results to demonstrate that these preconditioners result in convergence in a small number of iterations, which is robust with respect to the mesh size h and the regularization parameter β, for a range of problems.
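The paper's preconditioners act on the specific control-problem matrices, which the abstract does not reproduce; the following is only a generic sketch of the MINRES-with-block-diagonal-preconditioner idea on a toy saddle-point system, with a direct factorization standing in for the multigrid Schur-complement approximation. All matrices and names here are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy symmetric saddle-point system K z = f, K = [[A, B^T], [B, 0]];
# in the control setting A would hold the (scaled) mass matrices and
# B the discrete convection-diffusion operator.
n, m = 200, 100
A = sp.diags(np.linspace(1.0, 2.0, n))
B = sp.hstack([sp.eye(m),
               sp.random(m, n - m, density=0.05, random_state=0)]).tocsr()
K = sp.bmat([[A, B.T], [B, None]], format="csr")
f = np.ones(n + m)

# Block-diagonal preconditioner P = diag(A_hat, S_hat): A_hat is a
# cheap mass-matrix approximation (the diagonal, exact for this toy A)
# and S_hat approximates the Schur complement B A^{-1} B^T; the paper
# enacts the Schur-complement solve with multigrid, here we factorize.
d_inv = 1.0 / A.diagonal()
S_hat = (B @ sp.diags(d_inv) @ B.T).tocsc()
S_solve = spla.factorized(S_hat)

def apply_P_inv(r):
    return np.concatenate([r[:n] * d_inv, S_solve(r[n:])])

P = spla.LinearOperator((n + m, n + m), matvec=apply_P_inv)
z, info = spla.minres(K, f, M=P)
print("MINRES converged" if info == 0 else f"info = {info}")
```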
LSMR: An iterative algorithm for sparse least-squares problems
An iterative method LSMR is presented for solving linear systems $Ax = b$ and least-squares problems $\min \|Ax - b\|_2$, with $A$ being sparse or a fast linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation $A^T A x = A^T b$, so that the quantities $\|A^T r_k\|$ are monotonically decreasing (where $r_k = b - A x_k$ is the residual for the current iterate $x_k$). In practice we observe that $\|r_k\|$ also decreases monotonically. Compared to LSQR, for which only $\|r_k\|$ is monotonic, it is safer to terminate LSMR early. Improvements for the new iterative method in the presence of extra available memory are also explored.
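LSMR is implemented in SciPy as `scipy.sparse.linalg.lsmr`; a minimal usage sketch on random data (the data here are purely illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsmr

# Sparse overdetermined least-squares problem min ||Ax - b||_2.
rng = np.random.default_rng(0)
A = sp.random(1000, 200, density=0.01, random_state=0, format="csr")
b = rng.standard_normal(1000)

# lsmr also accepts a LinearOperator, so A may be any fast operator
# providing matvec and rmatvec.
x, istop, itn, normr, normar = lsmr(A, b, atol=1e-8, btol=1e-8)[:5]
print(f"stop flag {istop} after {itn} iterations, ||A^T r|| = {normar:.2e}")
```

The returned `normar` is the quantity $\|A^T r_k\|$ at termination, the monotone quantity that makes early stopping of LSMR safer than of LSQR.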
Exact Reconstruction Conditions for Regularized Modified Basis Pursuit
In this correspondence, we obtain exact recovery conditions for regularized modified basis pursuit (reg-mod-BP) and discuss when the obtained conditions are weaker than those for modified-CS or for basis pursuit (BP). The discussion is also supported by simulation comparisons. Reg-mod-BP provides a solution to the sparse recovery problem when both an erroneous estimate of the signal's support and an erroneous estimate of the signal values on that support are available.
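The abstract does not state the optimization program; the sketch below encodes reg-mod-BP as it is commonly formulated (minimize the $\ell_1$ norm of the signal outside the support estimate, subject to the measurements and an $\ell_\infty$ bound on deviation from the prior values on the estimated support), using CVXPY; the symbols `T`, `mu_hat`, `rho`, and all data are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)

# True sparse signal and an imperfect prior: a support estimate T
# with one missed and one extra entry, plus a noisy value estimate
# mu_hat on T.
x_true = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
x_true[supp] = rng.standard_normal(k)
y = A @ x_true
T = np.union1d(supp[:-1], [n - 1])
Tc = np.setdiff1d(np.arange(n), T)
mu_hat = x_true[T] + 0.05 * rng.standard_normal(T.size)
rho = 0.2  # assumed bound on the prior error

# reg-mod-BP sketch: l1-minimize off the support estimate, constrained
# by the measurements and by closeness to the prior on T.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x[Tc])),
                  [A @ x == y, cp.norm(x[T] - mu_hat, "inf") <= rho])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```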