    SAWdoubler: a program for counting self-avoiding walks

    This article presents SAWdoubler, a package for counting the total number Z(N) of self-avoiding walks (SAWs) on a regular lattice by the length-doubling method, the basic concept of which we have published previously. We discuss an algorithm for the creation of all SAWs of length N, efficient storage of these SAWs in a tree data structure, and an algorithm for the computation of correction terms to the count Z(2N) for SAWs of double length, removing all combinations of two intersecting single-length SAWs. We present an efficient numbering of the lattice sites that enables exploitation of symmetry and leads to a smaller tree data structure; this numbering is by increasing Euclidean distance from the origin of the lattice. Furthermore, we show how the computation can be parallelised by distributing the iterations of the main loop of the algorithm over the cores of a multicore architecture. Experimental results on the 3D cubic lattice demonstrate that Z(28) can be computed on a dual-core PC in only 1 hour and 40 minutes, with a speedup of 1.56 compared to the single-core computation and a gain of a factor of 26 from exploiting symmetry. We present results for memory use and show how the computation is made to fit in 4 Gbyte of RAM. The SAWdoubler software is easy to extend to other lattices and is publicly available under the GNU LGPL license. (29 pages, 3 figures)
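
    To make concrete what is being counted, the sketch below enumerates Z(N) on the 3D cubic lattice by plain depth-first search. It reproduces the small known counts (Z(1) = 6, Z(2) = 30, Z(3) = 150, Z(4) = 726) but scales exponentially, so it is an illustration only and not the length-doubling method of SAWdoubler, whose purpose is precisely to avoid enumerating walks of length 2N. The function name below is ours, not part of the package.

```python
# Naive enumeration of self-avoiding walks (SAWs) on the 3D cubic lattice.
# Illustration only: SAWdoubler's length-doubling method is far more involved.

NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def count_saws(n, site=(0, 0, 0), visited=None):
    """Return Z(n), the number of n-step SAWs starting from `site`."""
    if visited is None:
        visited = {site}
    if n == 0:
        return 1
    total = 0
    for dx, dy, dz in NEIGHBOURS:
        nxt = (site[0] + dx, site[1] + dy, site[2] + dz)
        if nxt not in visited:          # self-avoidance: never revisit a site
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)
    return total

if __name__ == "__main__":
    for n in range(1, 6):
        print(n, count_saws(n))         # 6, 30, 150, 726, 3534
```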

    Minimizing Communication in the Multidimensional FFT

    We present a parallel algorithm for the fast Fourier transform (FFT) in higher dimensions. This algorithm generalizes the cyclic-to-cyclic one-dimensional parallel algorithm to a cyclic-to-cyclic multidimensional parallel algorithm while retaining the property of needing only a single all-to-all communication step. This is under the constraint that we use at most √N processors for an FFT on an array with a total of N elements, irrespective of the dimension d or the shape of the array. The only assumption we make is that N is sufficiently composite. Our algorithm starts and ends in the same data distribution. We present our multidimensional implementation FFTU, which utilizes the sequential FFTW program for its local FFTs and which can handle any dimension d. We obtain experimental results for d ≤ 5 using MPI on up to 4096 cores of the supercomputer Snellius, comparing FFTU with the parallel FFTW program and with PFFT and heFFTe. These results show that FFTU is competitive with the state of the art and that it allows one to use a larger number of processors, while keeping communication limited to a single all-to-all operation. For arrays of size 1024³ and 64⁵, FFTU achieves speedups of a factor of 149 and 176, respectively, on 4096 processors.
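
    The structure being exploited can already be seen in a sequential two-dimensional example: perform 1D FFTs along one axis, redistribute the data (here a simple transpose stands in for the single all-to-all step), and perform 1D FFTs along the other axis. The NumPy sketch below shows only this structure; it does not reproduce FFTU's cyclic-to-cyclic distribution or its MPI communication.

```python
import numpy as np

def fft2_via_transpose(a):
    """2D FFT as two passes of 1D FFTs separated by one data redistribution."""
    step1 = np.fft.fft(a, axis=1)      # "local" FFTs along rows
    step2 = step1.T                    # stand-in for the single all-to-all step
    step3 = np.fft.fft(step2, axis=1)  # "local" FFTs along the remaining axis
    return step3.T                     # return to the original layout

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
    assert np.allclose(fft2_via_transpose(a), np.fft.fft2(a))
    print("matches np.fft.fft2")
```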

    Exact k-way sparse matrix partitioning

    To minimize the communication in parallel sparse matrix-vector multiplication while maintaining load balance, we need to partition the sparse matrix optimally into k disjoint parts, which is an NP-complete problem. We present an exact algorithm based on the branch-and-bound (BB) method which partitions a matrix for any k, and we explore exact sparse matrix partitioning beyond bipartitioning. The algorithm has been implemented in the software package General Matrix Partitioner (GMP). We also present an integer linear programming (ILP) model for the same problem, based on a hypergraph formulation. We used both methods to determine optimal 2-, 3- and 4-way partitionings for a subset of small matrices from the SuiteSparse Matrix Collection. For k=2, BB outperforms ILP, whereas for larger k, ILP is superior. We used the results found by these exact methods for k=4 to analyse the performance of recursive bipartitioning (RB) with exact bipartitioning. For 46 of the 89 matrices in our test set, which consists of matrices with fewer than 250 nonzeros, the communication volume determined by RB was optimal. For the other matrices, RB is able to find 4-way partitionings with communication volume close to the optimal volume.
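
    The objective that all of these methods minimise is the communication volume of the partitioning, the (λ − 1) metric: for every row and every column, each additional part that owns one of its nonzeros contributes one word of communication. The sketch below only evaluates this metric for a given assignment of nonzeros to parts; the BB and ILP solvers themselves are not sketched, and the tiny example matrix is made up.

```python
from collections import defaultdict

def communication_volume(nonzeros, part):
    """(lambda - 1) volume of a k-way nonzero partitioning for parallel SpMV.

    nonzeros: list of (i, j) coordinates; part: part index of each nonzero.
    """
    row_parts, col_parts = defaultdict(set), defaultdict(set)
    for (i, j), p in zip(nonzeros, part):
        row_parts[i].add(p)
        col_parts[j].add(p)
    return (sum(len(s) - 1 for s in row_parts.values()) +
            sum(len(s) - 1 for s in col_parts.values()))

if __name__ == "__main__":
    nz = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]        # 3x3 matrix, 5 nonzeros
    print(communication_volume(nz, [0, 0, 1, 1, 1]))     # column 1 is cut: volume 1
```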

    Partitioning a call graph

    Splitting a large software system into smaller and more manageable units has become an important problem for many organizations. The basic structure of a software system is given by a directed graph with vertices representing the programs of the system and arcs representing calls from one program to another. Generating a good partitioning into smaller modules then becomes a minimization problem: minimize the number of programs that are called by programs outside their own module. First, we formulate an equivalent integer linear programming problem with 0–1 variables. In theory, this approach solves the problem to optimality, but it becomes very costly as the size of the software system increases. Second, we formulate the problem as a hypergraph partitioning problem. This heuristic method uses a multilevel strategy; it turns out to be very fast and to deliver solutions that are close to optimal.
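
    The quantity being minimised is simple to state in code: given an assignment of programs to modules, count the programs that receive at least one call from a program in another module. The sketch below, with a made-up call graph and module assignment, evaluates this objective for a candidate partitioning; it does not solve the ILP or the hypergraph partitioning problem.

```python
def external_call_targets(calls, module):
    """Programs called from outside their own module.

    calls: list of (caller, callee) arcs; module: dict mapping program -> module id.
    """
    exposed = {callee for caller, callee in calls if module[caller] != module[callee]}
    return len(exposed), sorted(exposed)

if __name__ == "__main__":
    calls = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "b")]
    module = {"a": 0, "b": 0, "c": 1, "d": 1}
    print(external_call_targets(calls, module))   # (2, ['b', 'c'])
```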

    Parallel Sparse LU Decomposition on a Mesh Network of Transputers

    A parallel algorithm is presented for the LU decomposition of a general sparse matrix on a distributed-memory MIMD multiprocessor with a square mesh communication network. In the algorithm, matrix elements are assigned to processors according to the grid distribution. Each processor represents the nonzero elements of its part of the matrix by a local, ordered, two-dimensional linked-list data structure. The complexity of important operations on this data structure and on several others is analysed. At each step of the algorithm, a parallel search for a set of m compatible pivot elements is performed. The Markowitz counts of the pivot elements are close to minimum, to preserve the sparsity of the matrix. The pivot elements also satisfy a threshold criterion, to ensure numerical stability. The compatibility of the m pivots enables the simultaneous elimination of m pivot rows and m pivot columns in a rank-m update of the reduced matrix. Experimental results on a network of 400 transputers are presented for a set of test matrices from the Harwell–Boeing sparse matrix collection.
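
    To make the pivoting criteria concrete, the sketch below selects a single pivot on a small dense array standing in for the sparse data structure: among the nonzeros a_ij that pass the relative threshold test |a_ij| ≥ u · max_k |a_kj|, it picks one with minimal Markowitz count (r_i − 1)(c_j − 1), where r_i and c_j are the numbers of nonzeros in row i and column j. The parallel search for a set of compatible pivots and the linked-list storage of the actual algorithm are not reproduced, and the threshold value u is illustrative.

```python
import numpy as np

def markowitz_pivot(a, u=0.1):
    """Return (i, j) of a pivot with minimal Markowitz count meeting the threshold."""
    nz = a != 0.0
    row_counts = nz.sum(axis=1)           # r_i: nonzeros per row
    col_counts = nz.sum(axis=0)           # c_j: nonzeros per column
    col_max = np.abs(a).max(axis=0)
    best, best_cost = None, None
    for i, j in zip(*np.nonzero(a)):
        if abs(a[i, j]) < u * col_max[j]:
            continue                       # fails the numerical stability threshold
        cost = (row_counts[i] - 1) * (col_counts[j] - 1)
        if best is None or cost < best_cost:
            best, best_cost = (int(i), int(j)), cost
    return best

if __name__ == "__main__":
    a = np.array([[4.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0],
                  [2.0, 0.0, 5.0]])
    print(markowitz_pivot(a))              # (1, 1): Markowitz count 0
```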

    Open Problems in (Hyper)Graph Decomposition

    Large networks are useful in a wide range of applications. Sometimes problem instances are composed of billions of entities. Decomposing and analyzing these structures helps us gain new insights about our surroundings. Even if the final application concerns a different problem (such as traversal, finding paths, trees, and flows), decomposing large graphs is often an important subproblem for complexity reduction or parallelization. This report is a summary of discussions that happened at Dagstuhl seminar 23331 on "Recent Trends in Graph Decomposition" and presents currently open problems and future directions in the area of (hyper)graph decomposition.

    Aquaporin-4 and GPRC5B: old and new players in controlling brain oedema

    Brain oedema is a life-threatening complication of various neurological conditions. Understanding the molecular mechanisms of brain volume regulation is critical for therapy development. Unique insight comes from monogenic diseases characterized by chronic brain oedema, of which megalencephalic leukoencephalopathy with subcortical cysts (MLC) is the prototype. Variants in MLC1 or GLIALCAM, encoding proteins involved in astrocyte volume regulation, are the main causes of MLC. In some patients, the genetic cause remains unknown. We performed genetic studies to identify novel gene variants in MLC patients, diagnosed by clinical and MRI features, without MLC1 or GLIALCAM variants. We determined the subcellular localization of the related novel proteins in cells and in human brain tissue. We investigated the functional consequences of the newly identified variants on volume regulation pathways using cell volume measurements, biochemical analysis and electrophysiology. We identified a novel homozygous variant in AQP4, encoding the water channel aquaporin-4, in two siblings, and two de novo heterozygous variants in GPRC5B, encoding the orphan G protein-coupled receptor GPRC5B, in three unrelated patients. The AQP4 variant disrupts membrane localization and thereby channel function. GPRC5B, like MLC1, GlialCAM and aquaporin-4, is expressed in astrocyte endfeet in human brain. Cell volume regulation is disrupted in GPRC5B patient-derived lymphoblasts. GPRC5B functionally interacts with ion channels involved in astrocyte volume regulation. In conclusion, we identify aquaporin-4 and GPRC5B as old and new players in genetic brain oedema. Our findings shed light on the protein complex involved in astrocyte volume regulation and identify GPRC5B as a novel, potentially druggable target for treating brain oedema.

    Sparse Matrix Computations on Bulk Synchronous Parallel Computers

    The Bulk Synchronous Parallel (BSP) programming model is studied in the context of sparse matrix computations. As a case study, a BSP algorithm is developed for sparse Cholesky factorisation.
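
    For context, the BSP cost model in which such algorithms are analysed charges a superstep that performs at most w local operations and communicates at most h data words per processor a cost of w + g·h + l, where g and l are machine parameters. The sketch below merely adds up these costs for a hypothetical superstep sequence; it is not the sparse Cholesky algorithm developed in the paper.

```python
def bsp_cost(supersteps, g, l):
    """Total BSP cost of a sequence of supersteps given as (w, h) pairs."""
    return sum(w + g * h + l for w, h in supersteps)

if __name__ == "__main__":
    # Hypothetical machine: g = 4 flop-equivalents per word, l = 100 per synchronisation.
    print(bsp_cost([(1000, 50), (400, 10)], g=4, l=100))   # 1300 + 540 = 1840
```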

    A two-dimensional data distribution method for parallel sparse matrix-vector multiplication

    A new method is presented for distributing data in sparse matrix-vector multiplication. The method is two-dimensional, tries to minimise the true communication volume, and also tries to spread the computation and communication work evenly over the processors. The method starts with a recursive bipartitioning of the sparse matrix, each time splitting a rectangular matrix into two parts with a nearly equal number of nonzeros. The communication volume caused by the split is minimised. After the matrix partitioning, the input and output vectors are partitioned with the objective of minimising the maximum communication volume per processor. Experimental results of our implementation, Mondriaan, for a set of sparse test matrices show a reduction in communication compared to one-dimensional methods, and in general a good balance in the communication work.
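
    The recursion described above can be sketched compactly. In the illustration below, a plain median split of the nonzeros stands in for Mondriaan's communication-minimising split, purely to show how recursive bipartitioning yields a k-way partitioning (k a power of two here) with nearly equal numbers of nonzeros per part.

```python
def recursive_bipartition(nonzeros, parts):
    """Split a list of (i, j) nonzeros into `parts` blocks of nearly equal size."""
    if parts == 1:
        return [nonzeros]
    ordered = sorted(nonzeros)            # stand-in split: order by row, then column
    half = len(ordered) // 2              # nearly equal numbers of nonzeros
    return (recursive_bipartition(ordered[:half], parts // 2) +
            recursive_bipartition(ordered[half:], parts // 2))

if __name__ == "__main__":
    nz = [(i, j) for i in range(4) for j in range(4) if (i + j) % 2 == 0]
    for p, block in enumerate(recursive_bipartition(nz, 4)):
        print("part", p, block)
```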