
    Julia: A Fresh Approach to Numerical Computing

    Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. high-level dynamic programs have to be slow; 2. one must prototype in one language and then rewrite in another language for speed or deployment; and 3. there are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design --- a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience. Comment: 37 pages
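    The paper's own examples are in Julia; the Python sketch below only illustrates the idea behind multiple dispatch that the abstract refers to -- choosing the implementation from the types of *all* arguments so that a generic operation can pick a specialised algorithm per type combination. The `Dense`/`Diagonal` types and the `register` helper are our own illustrative assumptions, not the paper's API.

```python
# Illustrative sketch (Python, not Julia) of dispatching on all argument types.

class Dense:          # stand-in for a dense matrix type
    def __init__(self, data): self.data = data

class Diagonal:       # stand-in for a diagonal matrix type
    def __init__(self, diag): self.diag = diag

_methods = {}

def register(*types):
    """Register an implementation for a tuple of argument types."""
    def wrap(fn):
        _methods[types] = fn
        return fn
    return wrap

def mul(a, b):
    """Generic multiply: the method is chosen from the runtime types of both arguments."""
    return _methods[(type(a), type(b))](a, b)

@register(Diagonal, Dense)
def _(a, b):
    # Specialised method: scale the rows of a dense matrix, O(n^2) instead of O(n^3).
    return Dense([[d * x for x in row] for d, row in zip(a.diag, b.data)])

@register(Dense, Dense)
def _(a, b):
    # Fallback dense-dense product (naive triple loop).
    n = len(a.data)
    return Dense([[sum(a.data[i][k] * b.data[k][j] for k in range(n))
                   for j in range(n)] for i in range(n)])

print(mul(Diagonal([2, 3]), Dense([[1, 2], [3, 4]])).data)  # [[2, 4], [9, 12]]
```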

    Dynamics of the symmetric eigenvalue problem with shift strategies

    A common algorithm for the computation of eigenvalues of real symmetric tridiagonal matrices is the iteration of certain special maps $F_\sigma$ called shifted $QR$ steps. Such maps preserve spectrum, and a natural common domain is $\mathcal{T}_\Lambda$, the manifold of real symmetric tridiagonal matrices conjugate to the diagonal matrix $\Lambda$. More precisely, a (generic) shift $s \in \mathbb{R}$ defines a map $F_s: \mathcal{T}_\Lambda \to \mathcal{T}_\Lambda$. A strategy $\sigma: \mathcal{T}_\Lambda \to \mathbb{R}$ specifies the shift to be applied at $T$, so that $F_\sigma(T) = F_{\sigma(T)}(T)$. Good shift strategies should lead to fast deflation: some off-diagonal coordinate tends to zero, allowing the problem to be reduced to submatrices. For topological reasons, continuous shift strategies do not achieve fast deflation; many standard strategies are indeed discontinuous. In practice, implementations systematically give rise only to bottom deflation, i.e., convergence to zero of the lowest off-diagonal entry $b(T)$. For most shift strategies, convergence to zero of $b(T)$ is cubic, $|b(F_\sigma(T))| = \Theta(|b(T)|^k)$ with $k = 3$. The existence of arithmetic progressions in the spectrum of $T$ sometimes implies quadratic convergence instead, $k = 2$. The complete integrability of the Toda lattice and the dynamics at non-smooth points are central to our discussion. The text does not assume knowledge of numerical linear algebra. Comment: 22 pages, 4 figures. This preprint borrows heavily from the unpublished preprint arXiv:0912.3376 but is adapted for a different audience.
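    A minimal NumPy sketch of one shifted $QR$ step $F_s(T) = RQ + sI$ (where $T - sI = QR$), with the Wilkinson shift as one concrete example of a strategy $\sigma$. The test matrix, the choice of shift, and the printed diagnostics are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def wilkinson_shift(T):
    """One common strategy sigma(T): the eigenvalue of the trailing 2x2 block
    closest to the last diagonal entry (an example choice, not the paper's)."""
    a, b, c = T[-2, -2], T[-1, -2], T[-1, -1]
    d = (a - c) / 2.0
    sign = 1.0 if d >= 0 else -1.0
    return c - sign * b * b / (abs(d) + np.hypot(d, b))

def shifted_qr_step(T, s):
    """F_s(T) = RQ + sI where T - sI = QR; preserves the spectrum and (in exact
    arithmetic) keeps T symmetric tridiagonal."""
    Q, R = np.linalg.qr(T - s * np.eye(len(T)))
    return R @ Q + s * np.eye(len(T))

# Watch the lowest off-diagonal entry b(T) deflate under the shift strategy.
rng = np.random.default_rng(0)
d = rng.standard_normal(5); e = rng.standard_normal(4)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
for _ in range(6):
    print(abs(T[-1, -2]))          # |b(T)|: typically very fast (cubic) decay
    T = shifted_qr_step(T, wilkinson_shift(T))
```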

    LU factorization with panel rank revealing pivoting and its communication avoiding version

    We present the LU decomposition with panel rank revealing pivoting (LU_PRRP), an LU factorization algorithm based on strong rank revealing QR panel factorization. LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP). Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to pathological cases and easily solves the Wilkinson matrix and the Foster matrix. We also present CALU_PRRP, a communication avoiding version of LU_PRRP that minimizes communication. CALU_PRRP is based on tournament pivoting, with the selection of the pivots at each step of the tournament being performed via strong rank revealing QR factorization. CALU_PRRP is more stable than CALU, the communication avoiding version of GEPP; it is also more stable in practice and is resistant to pathological cases on which GEPP and CALU fail. Comment: No. RR-7867 (2012)
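    The sketch below shows the general shape of one panel-pivoted block LU step: pivot rows for the leading panel are selected by a rank-revealing-style factorization, moved to the top, and the trailing matrix is updated with a Schur complement. It uses QR with column pivoting on the panel's transpose as a simplified stand-in for the strong rank revealing QR of LU_PRRP, so it is only an illustration of the idea, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import qr

def panel_pivoted_block_step(A, b):
    """One block step: choose b pivot rows for the leading panel via QR with
    column pivoting on panel.T (a stand-in for strong rank revealing QR),
    permute them to the top, and form the Schur complement for the recursion."""
    n = A.shape[0]
    panel = A[:, :b]
    # Column pivoting on panel.T ranks the *rows* of the panel; keep the first b.
    _, _, piv = qr(panel.T, pivoting=True)
    perm = np.concatenate([piv[:b], np.setdiff1d(np.arange(n), piv[:b])])
    Ap = A[perm]                       # pivot rows moved to the top
    A11, A12 = Ap[:b, :b], Ap[:b, b:]
    A21, A22 = Ap[b:, :b], Ap[b:, b:]
    L21 = A21 @ np.linalg.inv(A11)     # panel "L" block: A21 * A11^{-1}
    S = A22 - L21 @ A12                # Schur complement, factored recursively
    return perm, L21, S

# Toy check on a small random matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
perm, L21, S = panel_pivoted_block_step(A, b=2)
print(np.max(np.abs(L21)))             # bounded entries indicate a well-chosen pivot block
```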

    Accurate and Efficient Expression Evaluation and Linear Algebra

    We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: Most of our results will use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as $x+y+z$, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings. Comment: 49 pages, 6 figures, 1 table
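    As a concrete illustration of why an extra accurate primitive such as $x+y+z$ helps, the Python sketch below builds an accurate sum from Knuth's TwoSum (compensated summation). The example inputs are our own and the code is only meant to show the effect of heavy cancellation, not anything specific to the paper's decision procedure.

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a+b) and a + b = s + e exactly
    (in IEEE arithmetic, barring overflow)."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def sum_accurate(xs):
    """Compensated summation: carry the accumulated rounding error alongside the
    running sum, so x+y+z (and longer sums) keep correct leading digits even
    under heavy cancellation."""
    s, comp = 0.0, 0.0
    for x in xs:
        s, e = two_sum(s, x)
        comp += e
    return s + comp

# Heavy cancellation: naive summation loses every digit, compensated does not.
xs = [1e16, 1.0, -1e16]
print(sum(xs))           # 0.0  (relative error 1: no correct digits)
print(sum_accurate(xs))  # 1.0
```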

    An interactive visualisation for all 2x2 real matrices, with applications to conveying the dynamics of iterative eigenvalue algorithms

    We present two interactive visualisations of 2x2 real matrices, which we call v1 and v2. v1 is only valid for PSD matrices, and uses the spectral theorem in a trivial way -- we use it as a warm-up. By contrast, v2 is valid for *all* 2x2 real matrices, and is based on the lesser-known theory of Lie Sphere Geometry. We show that the dynamics of iterative eigenvalue algorithms can be illustrated using both. v2 has the advantage that it simultaneously depicts many properties of a matrix, all of which are relevant to the study of eigenvalue algorithms. Examples of the properties of a matrix that v2 can depict are its Jordan Normal Form and orthogonal similarity class, as well as whether it is triangular, symmetric or orthogonal. Despite its richness, using v2 interactively seems rather intuitive.
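    A small matplotlib sketch in the spirit of the PSD warm-up: by the spectral theorem, a symmetric PSD 2x2 matrix maps the unit circle to an ellipse whose axes are its eigenvectors scaled by its eigenvalues. The example matrix and the plotting details below are our own choices and are not the paper's v1, which is an interactive tool.

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric PSD: eigenvalues 3 and 1

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # unit circle
ellipse = A @ circle                                 # its image under A

w, V = np.linalg.eigh(A)
plt.plot(*circle, label="unit circle")
plt.plot(*ellipse, label="image under A")
for lam, v in zip(w, V.T):
    plt.plot([0, lam * v[0]], [0, lam * v[1]])       # eigen-axes of the ellipse
plt.gca().set_aspect("equal"); plt.legend(); plt.show()
```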

    Kronecker products and the RSK correspondence

    The starting point for this work is an identity that relates the number of minimal matrices with prescribed 1-marginals and coefficient sequence to a linear combination of Kronecker coefficients. In this paper we provide a bijection that realizes this identity combinatorially. As a consequence we obtain an algorithm that associates to each minimal matrix a minimal component, with respect to the dominance order, in a Kronecker product, together with a combinatorial description of the corresponding Kronecker coefficient in terms of minimal matrices and tableau insertion. Our bijection follows from a generalization of the dual RSK correspondence to 3-dimensional binary matrices, which we state and prove. With the same tools we also obtain a generalization of the RSK correspondence to 3-dimensional integer matrices.
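    For background, the Python sketch below implements the classical (2-dimensional) RSK row insertion that the paper's correspondence builds on; the input word is an arbitrary example, and the 3-dimensional generalization itself is not implemented here.

```python
import bisect

def rsk(word):
    """Classical RSK row insertion on a word of positive integers: returns the
    insertion tableau P (semistandard) and the recording tableau Q (standard)."""
    P, Q = [], []
    for step, x in enumerate(word, start=1):
        row = 0
        while True:
            if row == len(P):
                P.append([x]); Q.append([step])    # new row: insertion ends
                break
            r = P[row]
            # the leftmost entry strictly greater than x gets bumped down a row
            j = bisect.bisect_right(r, x)
            if j == len(r):
                r.append(x); Q[row].append(step)   # x fits at the end: done
                break
            x, r[j] = r[j], x                      # bump and continue below
            row += 1
    return P, Q

P, Q = rsk([3, 1, 2, 2])
print(P)   # [[1, 2, 2], [3]]
print(Q)   # [[1, 3, 4], [2]]
```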