
    Energy-based comparison between the Fourier--Galerkin method and the finite element method

    The Fourier-Galerkin method (in short FFTH) has gained popularity in numerical homogenisation because it can treat problems with a huge number of degrees of freedom. Because the method incorporates the fast Fourier transform (FFT) in the linear solver, it is believed to provide an improvement in computational and memory requirements compared to the conventional finite element method (FEM). Here, we systematically compare these two methods using the energetic norm of local fields, which has a clear physical interpretation as the error in the homogenised properties. This enables the comparison of memory and computational requirements at the same level of approximation accuracy. We show that the methods' effectiveness relies on the smoothness (regularity) of the solution and thus on the material coefficients. Thanks to its approximation properties, FEM outperforms FFTH for problems with jumps in material coefficients, while ambivalent results are observed when the material coefficients vary continuously in space. FFTH profits from good conditioning of the linear system, independent of the number of degrees of freedom, but generally needs more degrees of freedom to reach the same approximation accuracy. More studies are needed for other FFT-based schemes, non-linear problems, and dual problems (which require special treatment in FEM but not in FFTH). Comment: 24 pages, 10 figures, 2 tables
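    As a toy illustration of the FFT-based approach discussed above (a sketch, not the paper's implementation), the 1D periodic cell problem with a piecewise-constant coefficient can be solved with the basic FFT fixed-point scheme; in 1D the converged effective coefficient equals the harmonic mean of the coefficient, which provides an exact check. The grid size, coefficient contrast, and iteration count below are illustrative choices.

```python
import numpy as np

# 1D periodic cell problem: d/dx( a(x) * e(x) ) = 0 with mean strain <e> = 1
N = 128
x = (np.arange(N) + 0.5) / N
a = np.where(x < 0.5, 1.0, 10.0)      # coefficient with a jump
a0 = 0.5 * (a.min() + a.max())        # reference medium

e = np.ones(N)                        # strain field, prescribed mean 1
for _ in range(300):                  # basic fixed-point iterations
    tau_hat = np.fft.fft(a * e)       # stress in Fourier space
    corr = tau_hat / a0               # 1D Green operator: scale by 1/a0 ...
    corr[0] = 0.0                     # ... and annihilate the mean mode
    e = e - np.real(np.fft.ifft(corr))

a_eff = np.mean(a * e) / np.mean(e)   # effective coefficient
# in 1D this converges to the harmonic mean 1/<1/a> = 20/11
```

The iteration count grows with the coefficient contrast, and the pointwise strain exhibits Gibbs oscillations at the jump, mirroring the regularity issues the abstract highlights for FFT-based schemes.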

    Error estimators and their analysis for CG, Bi-CG and GMRES

    We present an analysis of the uncertainty in the convergence of iterative linear solvers when using the relative residual as a stopping criterion, and the resulting over/under-computation for a given error tolerance. This shows that error estimation is indispensable for the efficient and accurate solution of moderately to highly conditioned linear systems (κ > 100), where κ is the condition number of the matrix. An O(1) error estimator for iterations of the CG (Conjugate Gradient) algorithm was proposed more than two decades ago. Recently, an O(k^2) error estimator was described for the GMRES (Generalized Minimal Residual) algorithm, which allows for non-symmetric linear systems as well, where k is the iteration number. We suggest a minor modification of this GMRES error estimation for increased stability. In this work, we also propose an O(n) error estimator for the A-norm and l2 norm of the error vector in the Bi-CG (Bi-Conjugate Gradient) algorithm. The robust performance of these estimates as a stopping criterion results in increased savings and accuracy in computation as the condition number and size of the problems increase.
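    The gap between the relative residual and the actual error that motivates such estimators can be seen in a small experiment (an illustration, not the paper's estimator); the matrix, spectrum, and tolerance below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# SPD test matrix with condition number 1e3 (an illustrative choice)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 3, n)) @ Q.T
x_true = rng.standard_normal(n)
b = A @ x_true

# Plain conjugate gradients with a relative-residual stopping criterion
x = np.zeros(n)
r = b.copy()
p = r.copy()
rs = r @ r
for k in range(1000):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) <= 1e-6 * np.linalg.norm(b):
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
# the relative error can exceed the relative residual by up to a factor of κ
```

Stopping on the residual alone therefore gives only an indirect handle on the error, which is exactly the uncertainty the proposed estimators are designed to remove.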

    Probabilistic Numerics and Uncertainty in Computations

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numerical algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimisers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. Comment: Author Generated Postprint. 17 pages, 4 Figures, 1 Table
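    As one concrete instance of a numerical method recast as inference, Bayesian quadrature places a Gaussian-process prior on the integrand and returns both an estimate of an integral and a variance quantifying its uncertainty. The sketch below (kernel, point set, and jitter are illustrative choices, not taken from the paper) integrates a function over [0, 1] under a squared-exponential kernel, for which the kernel integrals have closed forms.

```python
import numpy as np
from scipy.special import erf

def bayesian_quadrature(f, n=8, length=0.3, jitter=1e-8):
    """Posterior mean/variance of int_0^1 f under a GP prior
    with a squared-exponential kernel (toy, noiseless observations)."""
    x = np.linspace(0.0, 1.0, n)
    y = f(x)
    s = np.sqrt(2.0) * length
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * length**2))
    # kernel means z_i = int_0^1 k(t, x_i) dt  (closed form for this kernel)
    z = length * np.sqrt(np.pi / 2) * (erf((1 - x) / s) + erf(x / s))
    # prior double integral c = int_0^1 int_0^1 k(t, t') dt dt'
    c = (2 * length * np.sqrt(np.pi / 2) * erf(1 / s)
         - 2 * length**2 * (1 - np.exp(-1 / (2 * length**2))))
    w = np.linalg.solve(K + jitter * np.eye(n), z)
    return w @ y, max(c - w @ z, 0.0)

mean, var = bayesian_quadrature(np.sin)   # true value is 1 - cos(1)
```

The returned variance shrinks as evaluation points are added, giving the calibrated "uncertainty in the calculation" that the abstract argues for.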

    Approximation of the scattering amplitude

    The simultaneous solution of Ax = b and A^T y = g is required in a number of situations. Darmofal and Lu have proposed a method based on the Quasi-Minimal Residual (QMR) algorithm. We introduce a technique for the same purpose based on the LSQR method and show how its performance can be improved by using the Generalized LSQR method. We further show how preconditioners can be introduced to enhance the speed of convergence and discuss different preconditioners that can be used. The scattering amplitude g^T x, a widely used quantity in signal processing for example, has a close connection to the above problem, since x represents the solution of the forward problem and g is the right-hand side of the adjoint system. We show how this quantity can be efficiently approximated using Gauss quadrature and introduce a block Lanczos process that approximates the scattering amplitude and can also be used with preconditioners.
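    The identity underlying this setup is g^T x = y^T A x = y^T b: the scattering amplitude can be evaluated from either the forward or the adjoint solution. A minimal sketch using SciPy's LSQR solver (the test matrix and tolerances are illustrative assumptions, not the paper's method):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned, non-symmetric
b = rng.standard_normal(n)                        # forward right-hand side
g = rng.standard_normal(n)                        # adjoint right-hand side

x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]         # forward problem  A x = b
y = lsqr(A.T, g, atol=1e-12, btol=1e-12)[0]       # adjoint problem  A^T y = g

amp_forward = g @ x                               # scattering amplitude via x
amp_adjoint = y @ b                               # the same quantity via y
```

Since the two routes must agree, their discrepancy is itself a useful consistency check on the iterative solves.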

    Gauss quadrature for matrix inverse forms with applications

    We present a framework for accelerating a spectrum of machine learning algorithms that require computation of bilinear inverse forms u^T A^{-1} u, where A is a positive definite matrix and u a given vector. Our framework is built on Gauss-type quadrature and easily scales to large, sparse matrices. Further, it allows retrospective computation of lower and upper bounds on u^T A^{-1} u, which in turn accelerates several algorithms. We prove that these bounds tighten iteratively and converge at a linear (geometric) rate. To our knowledge, ours is the first work to demonstrate these key properties of Gauss-type quadrature, which is a classical and deeply studied topic. We illustrate the empirical consequences of our results by using quadrature to accelerate machine learning tasks involving determinantal point processes and submodular optimization, and observe tremendous speedups in several instances.
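    A minimal sketch of the underlying classical connection (Golub-Meurant style, not the paper's accelerated variant): k steps of the Lanczos process started from u reduce A to a tridiagonal T_k, and the Gauss-rule estimate ||u||^2 (T_k^{-1})_{11} is a lower bound on u^T A^{-1} u that tightens as k grows. The matrix sizes and full reorthogonalisation are illustrative choices for the sketch.

```python
import numpy as np

def gauss_quadrature_estimate(A, u, k):
    """Gauss-rule lower bound on u^T A^{-1} u from k Lanczos steps."""
    n = len(u)
    beta0 = np.linalg.norm(u)
    q, q_prev, beta = u / beta0, np.zeros(n), 0.0
    Q = np.zeros((n, k))
    alphas, betas = [], []
    for j in range(k):
        Q[:, j] = q
        w = A @ q - beta * q_prev                    # three-term recurrence
        alpha = q @ w
        w -= alpha * q
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalisation
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta0**2 * (e1 @ np.linalg.solve(T, e1))  # ||u||^2 (T^{-1})_11

rng = np.random.default_rng(2)
M = rng.standard_normal((60, 60))
A = M @ M.T + 60 * np.eye(60)        # positive definite test matrix
u = rng.standard_normal(60)
estimate = gauss_quadrature_estimate(A, u, k=20)
exact = u @ np.linalg.solve(A, u)
```

Each additional Lanczos step costs one matrix-vector product, which is what makes such bounds attractive for large sparse A.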