5 research outputs found

    Many-body localization in a quasiperiodic Fibonacci chain

    We study the many-body localization (MBL) properties of a chain of interacting fermions subject to a quasiperiodic potential such that the non-interacting chain is always delocalized and displays multifractality. Contrary to naive expectations, adding interactions to this system does not enhance delocalization, and an MBL transition is observed. Due to the local properties of the quasiperiodic potential, the MBL phase presents specific features, such as additional peaks in the density distribution. We furthermore investigate the fate of multifractality in the ergodic phase for low potential values. Our analysis is based on exact numerical studies of eigenstates and dynamical properties after a quench.
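
    As a point of reference for the model class, the sketch below builds a two-valued quasiperiodic on-site potential from the Fibonacci substitution rule and diagonalizes the corresponding non-interacting tight-binding chain. The chain length L, potential strength h, and hopping amplitude are illustrative choices, not parameters taken from the paper, and the interacting MBL study itself is not reproduced here.

```python
import numpy as np

def fibonacci_word(n_sites):
    """Generate a Fibonacci word ('ABAAB...') via the substitution A -> AB, B -> A."""
    word = "A"
    while len(word) < n_sites:
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word[:n_sites]

L = 13    # chain length (illustrative)
h = 2.0   # quasiperiodic potential strength (illustrative)

# Two-valued on-site potential following the Fibonacci word.
potential = np.array([+h if c == "A" else -h for c in fibonacci_word(L)])

# Non-interacting tight-binding Hamiltonian: nearest-neighbour hopping plus potential.
H0 = np.diag(potential) - np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
print(np.linalg.eigvalsh(H0))
```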

    Shift-invert diagonalization of large many-body localizing spin chains

    We provide a pedagogical review on the calculation of highly excited eigenstates of disordered interacting quantum systems which can undergo a many-body localization (MBL) transition, using shift-invert exact diagonalization. We also provide an example code at https://bitbucket.org/dluitz/sinvert_mbl/. Through a detailed analysis of the simulation parameters of the random-field Heisenberg spin chain, we provide a practical guide on how to perform efficient computations. We present data for mid-spectrum eigenstates of spin chains of sizes up to L=26. This work is also geared towards readers with an interest in the efficiency of parallel sparse linear algebra techniques, who will find a challenging application in the MBL problem.
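
    The reference implementation linked above relies on parallel sparse solvers; the sketch below only illustrates the shift-invert idea on a small random-field Heisenberg chain using SciPy, targeting eigenpairs closest to an energy in the middle of the spectrum. The chain length L, disorder strength W, and number of eigenpairs k are illustrative and far below what the paper reaches.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Spin-1/2 operators.
sx = sp.csr_matrix([[0, 0.5], [0.5, 0]])
sy = sp.csr_matrix([[0, -0.5j], [0.5j, 0]])
sz = sp.csr_matrix([[0.5, 0], [0, -0.5]])
id2 = sp.identity(2, format="csr")

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    out = op if i == 0 else id2
    for j in range(1, L):
        out = sp.kron(out, op if j == i else id2, format="csr")
    return out

L, W = 8, 5.0                     # small chain and disorder strength (illustrative)
rng = np.random.default_rng(0)
fields = rng.uniform(-W, W, L)    # random longitudinal fields

H = sp.csr_matrix((2**L, 2**L), dtype=complex)
for i in range(L - 1):            # open-chain Heisenberg couplings
    for op in (sx, sy, sz):
        H = H + site_op(op, i, L) @ site_op(op, i + 1, L)
for i in range(L):
    H = H + fields[i] * site_op(sz, i, L)

# Shift-invert: ask for the k eigenpairs closest to a target energy mid-spectrum.
vals, vecs = eigsh(H, k=6, sigma=0.0, which="LM")
print(np.sort(vals))
```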

    Sparse Approximate Multifrontal Factorization with Butterfly Compression for High Frequency Wave Equations

    We present a fast and approximate multifrontal solver for large-scale sparse linear systems arising from finite-difference, finite-volume, or finite-element discretization of high-frequency wave equations. The proposed solver leverages the butterfly algorithm and its hierarchical matrix extension for compressing and factorizing large frontal matrices via graph-distance-guided entry evaluation or randomized matrix-vector multiplication-based schemes. Complexity analysis and numerical experiments demonstrate O(N log² N) computation and O(N) memory complexity when applied to an N×N sparse system arising from 3D high-frequency Helmholtz and Maxwell problems.
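
    Butterfly and hierarchical matrix formats are too involved for a short snippet, but the randomized matrix-vector multiplication idea they build on can be shown for a single numerically low-rank block. The sketch below is a plain randomized range finder that touches the block only through products with it and its transpose; it is not the butterfly algorithm from the paper, and the block A, rank guess r, and oversampling p are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic numerically low-rank block standing in for (part of) a frontal matrix.
m, n, true_rank = 400, 300, 15
A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))

def compress_with_matvecs(matvec, rmatvec, n, r, p=10):
    """Approximate A ~= Q @ B using only products with A and A^T."""
    Omega = rng.standard_normal((n, r + p))   # random test vectors
    Y = matvec(Omega)                         # sample the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis of that range
    B = rmatvec(Q).T                          # B = Q^T A, computed as (A^T Q)^T
    return Q, B

Q, B = compress_with_matvecs(lambda X: A @ X, lambda X: A.T @ X, n, r=15)
print("relative error:", np.linalg.norm(A - Q @ B) / np.linalg.norm(A))
```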

    Supernodes ordering to enhance Block Low-Rank compression in sparse direct solvers

    Solving sparse linear systems appears in many scientific applications, and sparse direct solvers are widely used for their robustness. Still, both time and memory complexities limit the use of direct methods for larger problems, while the amount of memory available per computational unit is decreasing in modern architectures. To tackle this problem, low-rank compression techniques have been introduced in direct solvers to compress the large dense blocks appearing in the symbolic factorization. In this paper, we consider the Block Low-Rank (BLR) compression format and address the problem of clustering unknowns that come from separators issued from the nested dissection process. We show that methods considering only intra-separator connectivity (i.e., k-way or recursive bisection), as well as methods managing only interactions between separators, have limitations. We propose a new strategy that considers interactions between multiple levels of the elimination tree of the nested dissection. This strategy tries both to reduce the number of off-diagonal blocks in the symbolic structure and to increase the compression ratio of the large separators. We demonstrate how this new method enhances the BLR strategies in the sparse direct supernodal solver PaStiX.
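
    The paper's multi-level-aware clustering is not reproduced here; the sketch below only illustrates the baseline it improves upon, namely intra-separator clustering by recursive coordinate bisection, applied to the top-level separator of a 2D grid. The grid size n and cluster size bound max_size are illustrative.

```python
import numpy as np

# 2D grid of unknowns; nested dissection's first separator is the middle column.
n = 32
coords = np.array([(i, j) for i in range(n) for j in range(n)])
separator = coords[coords[:, 1] == n // 2]

def recursive_bisection(points, max_size):
    """Baseline intra-separator clustering: split along the longest axis until
    each cluster holds at most max_size unknowns (no inter-separator information)."""
    if len(points) <= max_size:
        return [points]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    median = np.median(points[:, axis])
    left, right = points[points[:, axis] <= median], points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:   # guard against degenerate splits
        return [points]
    return recursive_bisection(left, max_size) + recursive_bisection(right, max_size)

clusters = recursive_bisection(separator, max_size=8)
print([len(c) for c in clusters])           # BLR cluster sizes along the separator
```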

    Sparse Supernodal Solver Using Block Low-Rank Compression: design, performance and analysis

    This paper presents two approaches using a Block Low-Rank (BLR) compression technique to reduce the memory footprint and/or the time-to-solution of the sparse supernodal solver PaStiX. This flat, non-hierarchical compression method takes advantage of the low-rank property of the blocks appearing during the factorization of sparse linear systems that come from the discretization of partial differential equations. The first approach, called Minimal Memory, illustrates the maximum memory gain that can be obtained with the BLR compression method, while the second approach, called Just-In-Time, mainly focuses on reducing the computational complexity and thus the time-to-solution. Singular Value Decomposition (SVD) and Rank-Revealing QR (RRQR) compression kernels are compared in terms of factorization time, memory consumption, and numerical properties. Experiments on a single node with 24 threads and 128 GB of memory are performed to evaluate the potential of both strategies. On a set of matrices from real-life problems, we demonstrate a memory footprint reduction of up to 4 times using the Minimal Memory strategy and a computational time speedup of up to 3.5 times with the Just-In-Time strategy. We then study the impact of the configuration parameters of the BLR solver, which allowed us to solve a 3D Laplacian of 36 million unknowns on a single node, while the full-rank solver stopped at 8 million unknowns due to memory limitations.
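
    PaStiX implements its SVD and RRQR kernels internally; the standalone NumPy/SciPy sketch below only illustrates the trade-off the paper measures, compressing one synthetic dense block at a fixed tolerance with both kernels and comparing the resulting ranks and errors. The block sizes, singular-value decay, and tolerance are all made up for illustration.

```python
import numpy as np
from scipy.linalg import qr, svd

rng = np.random.default_rng(2)

# Synthetic dense block with rapidly decaying singular values (illustrative).
m, n = 200, 150
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)
A = (U * s) @ V.T

tol = 1e-8

# SVD kernel: optimal rank for the tolerance, but the most expensive to compute.
Us, ss, Vt = svd(A, full_matrices=False)
rank_svd = int(np.sum(ss > tol * ss[0]))
A_svd = (Us[:, :rank_svd] * ss[:rank_svd]) @ Vt[:rank_svd]

# RRQR kernel: column-pivoted QR, usually cheaper, possibly with a slightly larger rank.
Q, R, piv = qr(A, mode="economic", pivoting=True)
diag = np.abs(np.diag(R))
rank_rrqr = int(np.sum(diag > tol * diag[0]))
A_rrqr = Q[:, :rank_rrqr] @ R[:rank_rrqr][:, np.argsort(piv)]

for name, rank, Ak in (("SVD", rank_svd, A_svd), ("RRQR", rank_rrqr, A_rrqr)):
    print(name, "rank", rank, "relative error",
          np.linalg.norm(A - Ak) / np.linalg.norm(A))
```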