Notes on a 3-term Conjugacy Recurrence for the Iterative Solution of Symmetric Linear Systems
We consider a 3-term recurrence, namely CG_2step, for the iterative solution of symmetric linear systems. The new algorithm generates conjugate directions and extends some standard theoretical properties of the Conjugate Gradient (CG) method [10]. We prove the finite convergence of CG_2step, and we provide some error analysis. Then, we introduce preconditioning for CG_2step, and we prove that standard error bounds for the CG also hold for our proposal.
Keywords: Iterative methods, 3-term recurrences, Conjugate Gradient method, Error Analysis, Preconditioning
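For context, the sketch below shows the classical 2-term-recurrence preconditioned CG to which the abstract's error bounds refer; it is not the CG_2step 3-term recurrence itself, and the function and variable names are illustrative.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-10, max_iter=None):
    """Standard preconditioned Conjugate Gradient for a symmetric positive
    definite matrix A. Shown only as a baseline; this is NOT the CG_2step
    3-term recurrence of the paper."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    max_iter = n if max_iter is None else max_iter
    r = b - A @ x                 # residual
    z = M_inv @ r                 # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        beta = rz_new / rz        # conjugacy coefficient
        p = z + beta * p
        rz = rz_new
    return x
```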
On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography
This paper deals with improvements to the contrast source inversion method
which is widely used in microwave tomography. First, the method is reviewed and
weaknesses of both the criterion form and the optimization strategy are
underlined. Then, two new algorithms are proposed. Both of them are based on
the same criterion, which is similar to but more robust than the one used in contrast
source inversion. The first technique keeps the main characteristics of the
contrast source inversion optimization scheme but is based on a better
exploitation of the conjugate gradient algorithm. The second technique is based
on a preconditioned conjugate gradient algorithm and performs simultaneous
updates of sets of unknowns that are normally processed sequentially. Both
techniques are shown to be more efficient than original contrast source
inversion.
Comment: 12 pages, 12 figures, 5 tables
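For illustration, the sketch below shows a generic Polak-Ribière nonlinear conjugate-gradient loop of the kind that contrast source inversion builds on; the cost function, gradient, and line search are placeholders rather than the paper's actual criterion or update scheme.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, n_iter=200, tol=1e-8):
    """Generic Polak-Ribiere conjugate-gradient descent on a real cost
    functional. CSI-type algorithms apply an update of this general form
    to the contrast-source unknowns; 'f' and 'grad' are placeholders,
    not the paper's criterion."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        # crude backtracking line search, for illustration only
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return x
```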
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand
for rich models and quantification of uncertainty. Bayesian methods are an
excellent fit for this demand, but scaling Bayesian inference is a challenge.
In response to this challenge, there has been considerable recent work based on
varying assumptions about model structure, underlying computational resources,
and the importance of asymptotic correctness. As a result, there is a zoo of
ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and
intuitions for scaling Bayesian inference. We review existing work on utilizing
modern computing resources with both MCMC and variational approximation
techniques. From this taxonomy of ideas, we characterize the general principles
that have proven successful for designing scalable inference procedures and
comment on the path forward.
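As one concrete example of the patterns such a survey covers, the sketch below shows stochastic gradient Langevin dynamics, a widely cited way to scale MCMC by subsampling the data at each step; the model-specific gradient functions are illustrative placeholders.

```python
import numpy as np

def sgld(grad_log_prior, grad_log_lik, theta0, data, n_steps=10_000,
         step_size=1e-4, batch_size=100, rng=None):
    """Stochastic Gradient Langevin Dynamics: each step uses an unbiased
    minibatch estimate of the full-data gradient plus injected Gaussian
    noise. 'grad_log_prior' and 'grad_log_lik' stand in for a specific
    model; 'data' is a NumPy array of observations."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    N = len(data)
    samples = []
    for _ in range(n_steps):
        batch = data[rng.choice(N, size=batch_size, replace=False)]
        # rescale the minibatch likelihood gradient to estimate the full sum
        grad = grad_log_prior(theta) + (N / batch_size) * sum(
            grad_log_lik(theta, x) for x in batch)
        noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
        theta = theta + 0.5 * step_size * grad + noise
        samples.append(theta.copy())
    return np.array(samples)
```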
A Dai-Liao Hybrid Hestenes-Stiefel and Fletcher-Reeves Method for Unconstrained Optimization
Some problems have no analytical solution or are too difficult for scientists, engineers, and mathematicians to solve exactly, so numerical methods that deliver approximate solutions have become necessary. Gradient methods are efficient when the function to be minimized is continuously differentiable. This article therefore presents a new hybrid Conjugate Gradient (CG) method for unconstrained optimization problems. The method requires only first-order derivatives, yet it overcomes the steepest descent method's slow convergence and avoids storing or computing the second-order derivatives needed by Newton's method. The CG update parameter is derived from the Dai-Liao conjugacy condition as a convex combination of the Hestenes-Stiefel and Fletcher-Reeves formulas, with an optimal modulating parameter chosen so that no matrix storage is required. Numerical experiments use an inexact line search to obtain a step size that guarantees the descent property, and they show that the algorithm is robust and efficient. The scheme converges globally under the Wolfe line search, and methods of this kind are suitable for compressive sensing problems and M-tensor systems.
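A minimal sketch of such a hybrid CG iteration is given below, assuming a fixed modulating parameter theta; the paper's optimal, Dai-Liao-derived choice of theta is not reproduced, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import line_search

def hybrid_hs_fr_cg(f, grad, x0, theta=0.5, n_iter=500, tol=1e-8):
    """Nonlinear CG whose update parameter is a convex combination of the
    Hestenes-Stiefel and Fletcher-Reeves formulas,
        beta_k = (1 - theta) * beta_HS + theta * beta_FR.
    The paper derives an optimal, iteration-dependent theta from the
    Dai-Liao conjugacy condition; here theta is a fixed illustrative value."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]   # Wolfe line search
        if alpha is None:                              # restart on failure
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = (g_new @ y) / (d @ y)          # Hestenes-Stiefel
        beta_fr = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves
        beta = (1.0 - theta) * beta_hs + theta * beta_fr
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```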