
    Error bounds of certain Gaussian quadrature formulae

    We study the kernel of the remainder term of Gauss quadrature rules for analytic functions with respect to one class of Bernstein–Szegő weight functions. The location on the elliptic contours where the modulus of the kernel attains its maximum value is investigated. This leads to effective error bounds of the corresponding Gauss quadratures.
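
    The bound behind this study is the standard contour-integral form of the Gauss remainder: for f analytic inside the ellipse E_rho with foci ±1, R_n(f) = (1/(2 pi i)) ∮_{E_rho} K_n(z) f(z) dz with K_n(z) = rho_n(z)/pi_n(z), so |R_n(f)| <= (l(E_rho)/(2 pi)) · max_{z in E_rho} |K_n(z)| · max_{z in E_rho} |f(z)|, which is why the location of the maximum of |K_n| on the ellipse matters. The following Python sketch is not from the paper: it samples K_n on E_rho for the plain Legendre weight w(t) = 1 as a stand-in for the Bernstein–Szegő class, and the function names and parameter choices are illustrative assumptions.

        import numpy as np

        def monic_legendre(n, z):
            # Evaluate the monic Legendre polynomial pi_n at the points z with the
            # three-term recurrence pi_{k+1} = z*pi_k - beta_k*pi_{k-1},
            # where beta_k = k^2/(4k^2 - 1).
            p_prev = np.zeros_like(z)
            p = np.ones_like(z)
            for k in range(n):
                beta = k * k / (4.0 * k * k - 1.0) if k > 0 else 0.0
                p_prev, p = p, z * p - beta * p_prev
            return p

        def kernel_on_ellipse(n, rho, n_theta=720, n_quad=200):
            # Sample K_n(z) = rho_n(z)/pi_n(z) on the ellipse E_rho (foci +-1,
            # sum of semi-axes rho), with rho_n(z) = int_{-1}^{1} pi_n(t) w(t)/(z - t) dt
            # and w(t) = 1 (Legendre weight, a stand-in for the Bernstein-Szego weights).
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            u = rho * np.exp(1j * theta)
            z = 0.5 * (u + 1.0 / u)                               # points on E_rho
            t, wq = np.polynomial.legendre.leggauss(n_quad)       # rule for rho_n
            pi_t = monic_legendre(n, t)
            rho_n = np.array([np.sum(wq * pi_t / (zz - t)) for zz in z])
            return z, rho_n / monic_legendre(n, z)

        n, rho = 5, 1.5
        z, K = kernel_on_ellipse(n, rho)
        i = np.argmax(np.abs(K))
        length = np.sum(np.abs(np.diff(np.append(z, z[0]))))      # perimeter of E_rho
        print("max |K_n| = %.3e at z = %.4f %+.4fi" % (np.abs(K[i]), z[i].real, z[i].imag))
        print("bound factor l(E_rho)/(2 pi) * max|K_n| = %.3e" % (length / (2 * np.pi) * np.abs(K[i])))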

    Is Gauss quadrature better than Clenshaw-Curtis?

    We consider the question of whether Gauss quadrature, which is very famous, is more powerful than the much simpler Clenshaw-Curtis quadrature, which is less well-known. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following Elliott and O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of log((z+1)/(z-1)) in the complex plane. Gauss quadrature corresponds to Padé approximation at z = ∞. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at z = ∞ is only half as high, but which is nevertheless equally accurate near [-1, 1].
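
    As a rough illustration of the comparison (not the seven-line MATLAB codes from the paper), the Python sketch below pairs NumPy's Gauss-Legendre rule with the standard FFT-based Clenshaw-Curtis recipe; the test integrand 1/(1 + 16x^2) and the node counts are assumptions made here for demonstration.

        import numpy as np

        def clenshaw_curtis(f, n):
            # (n+1)-point Clenshaw-Curtis quadrature of f on [-1, 1]: an FFT of the
            # function values at Chebyshev points yields the Chebyshev coefficients,
            # which are then integrated term by term.
            x = np.cos(np.pi * np.arange(n + 1) / n)              # Chebyshev points
            fx = f(x) / (2 * n)
            g = np.real(np.fft.fft(np.concatenate([fx, fx[n - 1:0:-1]])))
            a = np.concatenate([[g[0]], g[1:n] + g[2 * n - 1:n:-1], [g[n]]])
            w = np.zeros(n + 1)                                   # integrals of T_k
            w[::2] = 2.0 / (1.0 - np.arange(0, n + 1, 2) ** 2)
            return w @ a

        def gauss(f, n):
            # n-point Gauss-Legendre quadrature of f on [-1, 1].
            t, wq = np.polynomial.legendre.leggauss(n)
            return wq @ f(t)

        f = lambda x: 1.0 / (1.0 + 16.0 * x ** 2)                 # analytic, poles at +-i/4
        exact = np.arctan(4.0) / 2.0
        for n in (8, 16, 32, 64):
            print(n, abs(gauss(f, n) - exact), abs(clenshaw_curtis(f, n) - exact))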

    The geometric mean of two matrices from a computational viewpoint

    The geometric mean of two matrices is considered and analyzed from a computational viewpoint. Some useful theoretical properties are derived and an analysis of the conditioning is performed. Several numerical algorithms based on different properties and representations of the geometric mean are discussed and analyzed, and it is shown that most of them can be classified in terms of rational approximations of the inverse square root function. A review of the relevant applications is given.
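
    As a point of reference for the representations mentioned above, the following NumPy sketch computes the geometric mean through the direct formula A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} and checks the defining property G A^{-1} G = B; it is a minimal illustration under these assumptions, not one of the paper's algorithms, and the helper names are mine.

        import numpy as np

        def spd_sqrt(A):
            # Principal square root of a symmetric positive definite matrix
            # via its eigendecomposition A = V diag(d) V^T.
            d, V = np.linalg.eigh(A)
            return (V * np.sqrt(d)) @ V.T

        def geometric_mean(A, B):
            # Geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
            # of two symmetric positive definite matrices.
            A_half = spd_sqrt(A)
            A_half_inv = np.linalg.inv(A_half)
            G = A_half @ spd_sqrt(A_half_inv @ B @ A_half_inv) @ A_half
            return (G + G.T) / 2                  # symmetrize against rounding errors

        rng = np.random.default_rng(0)
        X = rng.standard_normal((4, 4)); A = X @ X.T + 4.0 * np.eye(4)
        Y = rng.standard_normal((4, 4)); B = Y @ Y.T + 4.0 * np.eye(4)
        G = geometric_mean(A, B)
        print(np.allclose(G @ np.linalg.solve(A, G), B))   # defining property G A^{-1} G = B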