
    Sparse matrix methods based on orthogonality and conjugacy

    A matrix having a high percentage of zero elements is called sparse. In solving systems of linear equations or linear least squares problems involving large sparse matrices, significant savings in computational cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
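    The conjugate gradient method mentioned above can be sketched as follows. This is the generic textbook CG iteration for a symmetric positive definite system, not the paper's specific set of algorithms; the 1-D Poisson test matrix and the tolerance are illustrative choices. The key point is that the matrix enters only through matrix-vector products, which is where the sparsity saving comes from.

```python
import numpy as np
from scipy.sparse import diags

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive definite matrix A.

    A enters only through matrix-vector products, so sparsity is
    exploited automatically when A is a scipy.sparse matrix."""
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # next A-conjugate search direction
        rs_old = rs_new
    return x

# Illustrative sparse SPD system: the 1-D Poisson (tridiagonal) matrix.
n = 100
A = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
          [-1, 0, 1], format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
```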

    Sparse matrix methods research using the CSM testbed software system

    Research is described on sparse matrix techniques for the Computational Structural Mechanics (CSM) Testbed. The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those currently available in the CSM Testbed. Thus, one of the first tasks was to become familiar with the structure of the testbed and to install some or all of the SPARSPAK package in it. A suite of subroutines was written to extract from the database the relevant structural and numerical information about the matrix equations, and all the demonstration problems distributed with the testbed were successfully solved. These codes were documented, and performance studies comparing the SPARSPAK technology to the methods currently in the testbed were completed. In addition, some preliminary studies were done comparing some recently developed out-of-core techniques with the performance of the testbed processor INV.

    Solving large sparse eigenvalue problems on supercomputers

    An important problem in scientific computing is to find a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods for these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they require the matrix only in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos's method and Davidson's method, are compared. Since one of the most important operations in both methods is the multiplication of vectors by the sparse matrix, ways of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
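    Since the abstract singles out sparse matrix-by-vector multiplication as the dominant operation in both Lanczos's and Davidson's methods, here is a sketch of that kernel for the common compressed sparse row (CSR) layout. The pure-Python loops are for clarity only (a production kernel would be vectorized or written in a compiled language), and the random test matrix is an illustrative assumption.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def csr_matvec(data, indices, indptr, x):
    """y = A x for A stored in CSR form: one pass over the stored
    nonzeros only, so the cost is O(nnz) rather than O(n^2)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                              # row i
        for k in range(indptr[i], indptr[i + 1]):   # its stored entries
            y[i] += data[k] * x[indices[k]]
    return y

# Illustrative random sparse matrix; scipy's built-in product serves as reference.
rng = np.random.default_rng(0)
A = sparse_random(200, 200, density=0.02, format="csr", random_state=rng)
x = rng.standard_normal(200)
y = csr_matvec(A.data, A.indices, A.indptr, x)
```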

    Finite-size scaling of eigenstate thermalization

    According to the eigenstate thermalization hypothesis (ETH), even isolated quantum systems can thermalize because the eigenstate-to-eigenstate fluctuations of typical observables vanish in the limit of large systems. Of course, isolated systems are by nature finite, and the main way of computing such quantities is through numerical evaluation for finite-size systems. Therefore, the finite-size scaling of the fluctuations of eigenstate expectation values is a central aspect of the ETH. In this work, we present numerical evidence that for generic non-integrable systems these fluctuations scale with a universal power law $D^{-1/2}$ with the dimension $D$ of the Hilbert space. We provide heuristic arguments, in the same spirit as the ETH, to explain this universal result. Our results are based on the analysis of three families of models, and several observables for each model. Each family includes integrable members, and we show how the system size where the universal power law becomes visible is affected by the proximity to integrability. (Comment: 9 pages, 8 figures; accepted for publication in Phys. Rev.)
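    The claimed $D^{-1/2}$ scaling can be illustrated (not reproduced) with a toy calculation: for a random GOE matrix, used here as a stand-in for a generic non-integrable Hamiltonian rather than any of the paper's three model families, the fluctuations of diagonal matrix elements of a simple observable shrink with the Hilbert-space dimension roughly as $D^{-1/2}$. The observable and dimensions are illustrative assumptions.

```python
import numpy as np

def eigenstate_fluctuations(D, rng):
    """Standard deviation of the diagonal matrix elements A_nn of a fixed
    observable in the eigenbasis of a D x D GOE random matrix."""
    H = rng.standard_normal((D, D))
    H = (H + H.T) / np.sqrt(2.0 * D)             # GOE normalization
    _, V = np.linalg.eigh(H)                     # columns are eigenstates |n>
    a = np.sign(np.arange(D) - D / 2 + 0.5)      # observable: diag(+-1)
    diag = np.einsum("in,i,in->n", V, a, V)      # A_nn = sum_i a_i V_in^2
    return diag.std()

rng = np.random.default_rng(1)
f64 = eigenstate_fluctuations(64, rng)
f256 = eigenstate_fluctuations(256, rng)
# quadrupling D should roughly halve the fluctuations, consistent with D**-0.5
```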

    A stochastic method to compute the $L^2$ localisation landscape

    The $L^2$ localisation landscape of L. Herviou and J. H. Bardarson is a generalisation of the localisation landscape of M. Filoche and S. Mayboroda. We propose a stochastic method to compute the $L^2$ localisation landscape that enables the calculation of landscapes using sparse matrix methods. We also propose an energy filtering of the $L^2$ landscape which can be used to focus on eigenstates with energies in any chosen range of the energy spectrum. We demonstrate the utility of these suggestions by applying the $L^2$ landscape to Anderson's model of localisation in one and two dimensions, and also to localisation in a model of the quantum Hall effect. (Comment: 9 pages, 6 figures)
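    The stochastic idea can be sketched with a generic Hutchinson-style estimator for the diagonal of $(H - E)^{-2}$, which is the kind of quantity an energy-resolved $L^2$ landscape is built from; this is a sketch of the general technique, not the authors' exact scheme, and the Anderson-type Hamiltonian, energy, and probe count are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

def stochastic_landscape(H, E, n_probes, rng):
    """Estimate u_i = [(H - E)^{-2}]_{ii} with Rademacher probe vectors:
    diag(M) ~ mean over z of z * (M z), where each M z costs two sparse
    solves reusing a single sparse LU factorization."""
    n = H.shape[0]
    lu = splu((H - E * identity(n)).tocsc())
    acc = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        Mz = lu.solve(lu.solve(z))       # (H - E)^{-2} z via two solves
        acc += z * Mz
    return acc / n_probes

# Illustrative 1-D Anderson Hamiltonian; E is placed well below the spectrum
# so (H - E)^{-2} is strongly diagonally dominant and few probes suffice.
rng = np.random.default_rng(2)
n = 60
H = diags([-np.ones(n - 1), rng.uniform(-1.0, 1.0, n), -np.ones(n - 1)],
          [-1, 0, 1], format="csc")
u = stochastic_landscape(H, E=-8.0, n_probes=400, rng=rng)
```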

    Sympiler: Transforming Sparse Matrix Codes by Decoupling Symbolic Analysis

    Sympiler is a domain-specific code generator that optimizes sparse matrix computations by decoupling the symbolic analysis phase from the numerical manipulation stage in sparse codes. The computation patterns in sparse numerical methods are guided by the input sparsity structure and the sparse algorithm itself. In many real-world simulations, the sparsity pattern changes little or not at all. Sympiler takes advantage of these properties to analyze sparse codes symbolically at compile time and to apply inspector-guided transformations that enable low-level transformations of sparse codes. As a result, the Sympiler-generated code outperforms highly optimized matrix factorization codes from commonly used specialized libraries, obtaining average speedups over Eigen and CHOLMOD of 3.8X and 1.5X, respectively. (Comment: 12 pages)
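    The symbolic/numeric decoupling can be illustrated with the classic sparse triangular solve: a symbolic phase that depends only on the sparsity pattern runs once, after which the numeric phase touches only the reached entries and can be re-run for every new set of numerical values with the same pattern. This is a hand-written Python sketch of the general idea, not Sympiler's generated code; the small matrix is an illustrative example.

```python
import numpy as np
from scipy.sparse import csc_matrix

def symbolic_reach(Lp, Li, b_nonzeros):
    """Symbolic phase: DFS over the pattern of lower-triangular L (CSC)
    to find the nonzero pattern of x in L x = b, in topological order.
    Depends only on the sparsity structure, so it can be computed once
    and reused across numeric solves."""
    visited, order = set(), []
    def dfs(j):
        if j in visited:
            return
        visited.add(j)
        for k in range(Lp[j], Lp[j + 1]):
            if Li[k] > j:               # strictly sub-diagonal entries of column j
                dfs(Li[k])
        order.append(j)
    for j in b_nonzeros:
        dfs(j)
    return order[::-1]                  # each column before the columns it feeds

def numeric_solve(L, b, reach):
    """Numeric phase: run the solve only over the precomputed reach."""
    x = b.astype(float).copy()
    Lp, Li, Lx = L.indptr, L.indices, L.data
    for j in reach:
        x[j] /= Lx[Lp[j]]               # diagonal is the first stored entry of column j
        for k in range(Lp[j] + 1, Lp[j + 1]):
            x[Li[k]] -= Lx[k] * x[j]
    return x

# Illustrative lower-triangular system with a sparse right-hand side.
dense = np.array([[2., 0., 0., 0., 0.],
                  [1., 2., 0., 0., 0.],
                  [0., 0., 2., 0., 0.],
                  [0., 1., 0., 2., 0.],
                  [0., 0., 0., 1., 2.]])
L = csc_matrix(dense)
b = np.array([4., 2., 0., 0., 0.])
reach = symbolic_reach(L.indptr, L.indices, b_nonzeros=[0, 1])
x = numeric_solve(L, b, reach)          # column 2 is never touched
```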

    Large Disorder Renormalization Group Study of the Anderson Model of Localization

    We describe a large disorder renormalization group (LDRG) method for the Anderson model of localization in one dimension which decimates eigenstates based on the size of their wavefunctions rather than their energy. We show that our LDRG scheme flows to infinite disorder, and thus becomes asymptotically exact. We use it to obtain the disorder-averaged inverse participation ratio and density of states for the entire spectrum. A modified scheme is formulated for higher dimensions, which is found to be less efficient, but capable of improvement

    Computing the Jacobian in spatial models: an applied survey.

    Despite attempts to get around the Jacobian in fitting spatial econometric models by using GMM and other approximations, it remains a central problem for maximum likelihood estimation. In principle, and for smaller data sets, the use of the eigenvalues of the spatial weights matrix provides a very rapid and satisfactory resolution. For somewhat larger problems, including those induced in spatial panel and dyadic (network) problems, solving the eigenproblem is not as attractive, and a number of alternatives have been proposed. This paper surveys a selection of these alternatives and comments on their relative usefulness.
    Keywords: Spatial autoregression; Maximum likelihood estimation; Jacobian computation; Econometric software.