
    Computing the Lambert W function in arbitrary-precision complex interval arithmetic

    We describe an algorithm to evaluate all the complex branches of the Lambert W function with rigorous error bounds in interval arithmetic, which has been implemented in the Arb library. The classic 1996 paper on the Lambert W function by Corless et al. provides a thorough but partly heuristic numerical analysis which needs to be complemented with some explicit inequalities and practical observations about managing precision and branch cuts. Comment: 16 pages, 4 figures.
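As a minimal illustration of the kind of root-finding such an evaluator is built on, the sketch below runs a plain floating-point Halley iteration for the two real branches. This is not the Arb implementation and carries none of the paper's rigorous interval error bounds; the function name and starting guesses are illustrative choices.

```python
import math

def lambertw_real(x, branch=0, tol=1e-15):
    """Solve w * exp(w) = x by Halley iteration (real branches 0 and -1).

    Only the basic floating-point iteration: the paper's contribution is
    wrapping such an iteration in rigorous interval (ball) error bounds,
    which this sketch does not attempt.
    """
    if branch == 0:
        # crude initial guess on the principal branch (x >= -1/e)
        w = math.log1p(x) if x > -0.3 else -0.5
    else:
        # branch -1, defined for -1/e <= x < 0
        w = math.log(-x) - math.log(-math.log(-x))
    for _ in range(100):
        e = math.exp(w)
        f = w * e - x
        # Halley step: f'(w) = e*(w+1), f''(w) = e*(w+2)
        dw = f / (e * (w + 1) - (w + 2) * f / (2 * (w + 1)))
        w -= dw
        if abs(dw) <= tol * (1 + abs(w)):
            break
    return w
```

For complex arguments and the remaining branches the same iteration applies with a branch-dependent starting point, which is where careful treatment of branch cuts becomes essential.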

    Asymptotic approximations to the Hardy-Littlewood function

    The function $Q(x):=\sum_{n\ge 1} (1/n) \sin(x/n)$ was introduced by Hardy and Littlewood [10] in their study of Lambert summability, and since then it has attracted the attention of many researchers. In particular, this function has made a surprising appearance in the recent disproof by Alzer, Berg and Koumandos [1] of a conjecture by Clark and Ismail [2]. More precisely, Alzer et al. [1] have shown that the Clark and Ismail conjecture is true if and only if $Q(x)\ge -\pi/2$ for all $x>0$. It is known that $Q(x)$ is unbounded in the domain $x \in (0,\infty)$ from above and below, which disproves the Clark and Ismail conjecture, and at the same time raises a natural question of whether we can exhibit at least one point $x$ for which $Q(x) < -\pi/2$. This turns out to be a surprisingly hard problem, which leads to an interesting and non-trivial question of how to approximate $Q(x)$ for very large values of $x$. In this paper we continue the work started by Gautschi in [7] and develop several approximations to $Q(x)$ for large values of $x$. We use these approximations to find an explicit value of $x$ for which $Q(x)<-\pi/2$. Comment: 16 pages, 3 figures, 2 tables.
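A naive way to see why large x is hard is direct truncation of the defining series (a sketch only, not one of the paper's asymptotic approximations; the default truncation point is an ad hoc choice):

```python
import math

def Q(x, N=200_000):
    """Naive truncation of the Hardy-Littlewood function
    Q(x) = sum_{n>=1} sin(x/n) / n.

    Since sin(x/n)/n ~ x/n^2 for n >> x, the discarded tail is of order
    x/N, so N must grow linearly with x for fixed accuracy.  That is
    precisely why direct summation is hopeless for the very large x
    considered in the paper and asymptotic approximations are needed.
    """
    return sum(math.sin(x / n) / n for n in range(1, N + 1))
```

For small x, sin(x/n) ≈ x/n gives Q(x) ≈ x·π²/6, a convenient sanity check on the truncation.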

    Sensitivity of Markov chains for wireless protocols

    Network communication protocols such as the IEEE 802.11 wireless protocol are currently best modelled as Markov chains. In these situations we have some protocol parameters $\alpha$, and a transition matrix $P(\alpha)$ from which we can compute the steady state (equilibrium) distribution $z(\alpha)$ and hence final desired quantities $q(\alpha)$, which might be, for example, the throughput of the protocol. Typically the chain will have thousands of states, and a particular example of interest is the Bianchi chain defined later. Generally we want to optimise $q$, perhaps subject to some constraints that also depend on the Markov chain. To do this efficiently we need the gradient of $q$ with respect to $\alpha$, and therefore the gradient of $z$ and other properties of the chain with respect to $\alpha$. The matrix formulas available for this involve the so-called fundamental matrix, but are there approximate gradients available which are faster and still sufficiently accurate? In some cases BT would like to do the whole calculation in computer algebra, and get a series expansion of the equilibrium $z$ with respect to a parameter in $P$. In addition to the steady state $z$, the same questions arise for the mixing time and the mean hitting times. Two qualitative features were brought to the Study Group's attention:
    * the transition matrix $P$ is large, but sparse;
    * the systems of linear equations to be solved are generally singular and need some additional normalisation condition, such as is provided by using the fundamental matrix.
    We also note a third highly important property regarding applications of numerical linear algebra:
    * the transition matrix $P$ is asymmetric.
    A realistic dimension for the matrix $P$ in the Bianchi model described below is 8064×8064, but on average there are only a few nonzero entries per column. Merely storing such a large matrix in dense form would require nearly 0.5 GBytes using 64-bit floating point numbers, and computing its LU factorisation takes around 80 seconds on a modern microprocessor. It is thus highly desirable to employ specialised algorithms for sparse matrices. These algorithms are generally divided between those only applicable to symmetric matrices, the most prominent being the conjugate-gradient (CG) algorithm for solving linear equations, and those applicable to general matrices. A similar division is present in the literature on numerical eigenvalue problems.
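A toy version of the computation described above, for a hypothetical 3-state chain: the singular balance system is closed by replacing one redundant equation with the normalisation condition, and a finite difference stands in for the fundamental-matrix gradient formula (all names and the example chain are illustrative, not from the Bianchi model):

```python
import numpy as np

def steady_state(P):
    """Solve z P = z, sum(z) = 1 for a row-stochastic matrix P.

    The system (P^T - I) z = 0 is singular, so one equation is replaced
    by the normalisation sum(z) = 1 -- the role played analytically by
    the fundamental matrix.
    """
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0          # replace last balance equation by normalisation
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def P(alpha):
    """Toy 3-state chain depending on a parameter alpha (illustrative only)."""
    return np.array([[1 - alpha, alpha, 0.0],
                     [0.3, 0.4, 0.3],
                     [0.0, 0.5, 0.5]])

# Finite-difference sensitivity dz/dalpha: a cheap stand-in for the
# exact fundamental-matrix gradient formula mentioned in the abstract.
h = 1e-6
dz = (steady_state(P(0.2 + h)) - steady_state(P(0.2 - h))) / (2 * h)
```

For the real 8064-state sparse, asymmetric Bianchi matrix one would use a sparse LU or a nonsymmetric Krylov solver instead of dense `solve`.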

    Efficient resolution of the Colebrook equation

    A robust, fast and accurate method for solving the Colebrook-like equations is presented. The algorithm is efficient for the whole range of parameters involved in the Colebrook equation. The computations are not more demanding than simplified approximations, but they are much more accurate. The algorithm is also faster and more robust than the Colebrook solution expressed in terms of the Lambert W-function. Matlab and FORTRAN codes are provided.
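For context, the textbook baseline such a method is measured against can be sketched as a fixed-point iteration in u = 1/√f (this is the standard approach, not the paper's algorithm or its Lambert-W reformulation; the starting guess is an arbitrary turbulent-flow value):

```python
import math

def colebrook_f(Re, rel_rough, tol=1e-12, max_iter=100):
    """Darcy friction factor from the Colebrook-White equation
        1/sqrt(f) = -2 log10( rel_rough/3.7 + 2.51/(Re sqrt(f)) )
    via simple fixed-point iteration in u = 1/sqrt(f).
    """
    u = 8.0  # corresponds to f ~ 0.0156, a plausible turbulent guess
    for _ in range(max_iter):
        u_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * u / Re)
        if abs(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return 1.0 / (u * u)
```

The iteration converges quickly because the map's derivative is small over the turbulent range, but unlike the paper's method it offers no uniform guarantee across all parameter values.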

    Transverse Mercator with an accuracy of a few nanometers

    Implementations of two algorithms for the transverse Mercator projection are described; these achieve accuracies close to machine precision. One is based on the exact equations of Thompson and Lee and the other uses an extension of Krueger's series for the projection to higher order. The exact method provides an accuracy of 9 nm over the entire ellipsoid, while the errors in the series method are less than 5 nm within 3900 km of the central meridian. In each case, the meridian convergence and scale are also computed with similar accuracy. The speed of the series method is competitive with other less accurate algorithms and the exact method is about 5 times slower. Comment: LaTeX, 10 pages, 3 figures. Includes some revisions. Supplementary material is available at http://geographiclib.sourceforge.net/tm.htm
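For orientation, the projection has an elementary closed form in the simpler spherical case (illustrative only; the paper's algorithms target the ellipsoid, where no such elementary formula delivers nanometre accuracy):

```python
import math

R = 6371000.0  # mean Earth radius in metres (spherical model, an assumption)

def tmerc_sphere(lat, lon, lon0=0.0):
    """Exact transverse Mercator projection of a sphere.

    lat/lon in radians, lon0 = central meridian; valid away from the
    poles and from the point antipodal to the central meridian.
    """
    dlon = lon - lon0
    x = R * math.atanh(math.cos(lat) * math.sin(dlon))  # easting
    y = R * math.atan2(math.tan(lat), math.cos(dlon))   # northing
    return x, y
```

On the central meridian the northing reduces to the meridian arc length R·lat and the easting vanishes, which makes a convenient sanity check.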

    Computing Stieltjes constants using complex integration

    The generalized Stieltjes constants $\gamma_n(v)$ are, up to a simple scaling factor, the Laurent series coefficients of the Hurwitz zeta function $\zeta(s,v)$ about its unique pole $s = 1$. In this work, we devise an efficient algorithm to compute these constants to arbitrary precision with rigorous error bounds, for the first time achieving this with low complexity with respect to the order $n$. Our computations are based on an integral representation with a hyperbolic kernel that decays exponentially fast. The algorithm consists of locating an approximate steepest descent contour and then evaluating the integral numerically in ball arithmetic using the Petras algorithm with a Taylor expansion for bounds near the saddle point. An implementation is provided in the Arb library. We can, for example, compute $\gamma_n(1)$ to 1000 digits in a minute for any $n$ up to $n=10^{100}$. We also provide other interesting integral representations for $\gamma_n(v)$, $\zeta(s)$, $\zeta(s,v)$, some polygamma functions and the Lerch transcendent.
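The baseline such an algorithm improves on is the classical limit formula, which converges only like (ln m)^n / m (a naive sketch, nothing like the paper's contour-integration method):

```python
import math

def stieltjes_naive(n, m=10**6):
    """Classical limit formula for the Stieltjes constants:
        gamma_n = lim_{m->inf} [ sum_{k<=m} (ln k)^n / k
                                 - (ln m)^{n+1} / (n+1) ].

    Convergence is only O((ln m)^n / m), so a million terms buy a
    handful of digits; the paper's complex-integration algorithm gets
    1000 digits for n as large as 10^100 instead.
    """
    s = sum(math.log(k) ** n / k for k in range(1, m + 1))
    return s - math.log(m) ** (n + 1) / (n + 1)
```

For n = 0 this recovers the Euler-Mascheroni constant γ ≈ 0.5772156649 to about six digits with a million terms.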

    Accuracy and Stability of Computing High-Order Derivatives of Analytic Functions by Cauchy Integrals

    High-order derivatives of analytic functions are expressible as Cauchy integrals over circular contours, which can very effectively be approximated, e.g., by trapezoidal sums. Whereas analytically each radius r up to the radius of convergence is equal, numerical stability strongly depends on r. We give a comprehensive study of this effect; in particular we show that there is a unique radius that minimizes the loss of accuracy caused by round-off errors. For large classes of functions, though not for all, this radius actually gives about full accuracy; a remarkable fact that we explain by the theory of Hardy spaces, by the Wiman-Valiron and Levin-Pfluger theory of entire functions, and by the saddle-point method of asymptotic analysis. Many examples and non-trivial applications are discussed in detail. Comment: Version 4 has some references and a discussion of other quadrature rules added; 57 pages, 7 figures, 6 tables; to appear in Found. Comput. Math.
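Both the basic computation and the radius effect fit in a few lines (a hedged sketch: `deriv_cauchy` is an illustrative name, and the radii below are chosen only to exhibit the round-off phenomenon for f = exp, where a saddle-point argument suggests r near n is a good choice):

```python
import cmath
import math

def deriv_cauchy(f, n, r, m=64):
    """Approximate f^{(n)}(0) by an m-point trapezoidal sum of the
    Cauchy integral  n!/(2 pi i) * integral of f(z)/z^{n+1}  over |z| = r.

    Analytically any r inside the domain of analyticity gives the same
    answer; in floating point the accuracy depends strongly on r, which
    is the effect the paper analyses.
    """
    s = 0.0
    for k in range(m):
        t = 2 * math.pi * k / m
        s += f(r * cmath.exp(1j * t)) * cmath.exp(-1j * n * t)
    return math.factorial(n) * s / (m * r ** n)
```

For exp and n = 20, the true derivative is 1: with r = 20 the sum is accurate to near machine precision, while r = 1 divides a catastrophically cancelled sum by r^n and loses essentially all accuracy.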

    The exponentially convergent trapezoidal rule

    It is well known that the trapezoidal rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed, and it is shown that far from being a curiosity, it is linked with computational methods all across scientific computing, including algorithms related to inverse Laplace transforms, special functions, complex analysis, rational approximation, integral equations, and the computation of functions and eigenvalues of matrices and operators.
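The phenomenon is easy to demonstrate numerically (a small sketch; exp(cos t) is merely a convenient analytic periodic test integrand, not an example from the paper):

```python
import math

def trap_periodic(f, N):
    """N-point trapezoidal rule for a 2*pi-periodic function on [0, 2*pi].

    For a periodic integrand the endpoint weights coincide, so the rule
    reduces to a plain equispaced average times the interval length.
    """
    h = 2 * math.pi / N
    return h * sum(f(k * h) for k in range(N))
```

Because exp(cos t) is entire, the error decays geometrically in N: doubling from 8 to 16 points already reaches machine precision, in stark contrast to the O(1/N²) rate on non-periodic problems.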

    Atypical late-time singular regimes accurately diagnosed in stagnation-point-type solutions of 3D Euler flows

    We revisit, both numerically and analytically, the finite-time blowup of the infinite-energy solution of 3D Euler equations of stagnation-point-type introduced by Gibbon et al. (1999). By employing the method of mapping to regular systems, presented in Bustamante (2011) and extended to the symmetry-plane case by Mulungye et al. (2015), we establish a curious property of this solution that was not observed in early studies: before but near singularity time, the blowup goes from a fast transient to a slower regime that is well resolved spectrally, even at mid-resolutions of $512^2$. This late-time regime has an atypical spectrum: it is Gaussian rather than exponential in the wavenumbers. The analyticity-strip width decays to zero in a finite time, albeit so slowly that it remains well above the collocation-point scale for all simulation times $t < T^* - 10^{-9000}$, where $T^*$ is the singularity time. Reaching such a proximity to singularity time is not possible in the original temporal variable, because floating point double precision ($\approx 10^{-16}$) creates a `machine-epsilon' barrier. Due to this limitation on the \emph{original} independent variable, the mapped variables now provide an improved assessment of the relevant blowup quantities, crucially with acceptable accuracy at an unprecedented closeness to the singularity time: $T^* - t \approx 10^{-140}$.