    Highly Homologous Filamin Polypeptides Have Different Distributions in Avian Slow and Fast Muscle Fibers

    The high molecular weight actin-binding protein filamin is located at the periphery of the Z disk in the fast adult chicken pectoral muscle (Gomer, R. H., and E. Lazarides, 1981, Cell, 23: 524-532). In contrast, we have found that in the slow anterior latissimus dorsi (ALD) muscle, filamin was additionally located throughout the I band, as judged by immunofluorescence with affinity-purified antibodies on myofibrils and cryosections. The Z line proteins desmin and alpha-actinin, however, had the same distribution in ALD as they do in pectoral muscle. Quantitation of filamin and actin from the two muscle types showed that there was approximately 10 times as much filamin per actin in ALD myofibrils as in pectoral myofibrils. Filamin immunoprecipitated from ALD had an electrophoretic mobility in SDS-polyacrylamide gels identical to that of pectoral myofibril filamin and slightly greater than that of chicken gizzard filamin. Two-dimensional peptide maps of immunoprecipitated, ^(125)I-labeled filamin showed that ALD myofibril filamin was virtually identical to pectoral myofibril filamin and distinct from chicken gizzard filamin.

    Locality-aware parallel block-sparse matrix-matrix multiplication using the Chunks and Tasks programming model

    We present a method for parallel block-sparse matrix-matrix multiplication on distributed memory clusters. By using a quadtree matrix representation, data locality is exploited without prior information about the matrix sparsity pattern. A distributed quadtree matrix representation is straightforward to implement using our recently developed Chunks and Tasks programming model [Parallel Comput. 40, 328 (2014)]. The quadtree representation combined with the Chunks and Tasks model leads to favorable weak and strong scaling of the communication cost with the number of processes, as shown both theoretically and in numerical experiments. Matrices are represented by sparse quadtrees of chunk objects, whose leaves are block-sparse submatrices. Sparsity is detected dynamically by the matrix library and may occur at any level in the hierarchy and/or within the submatrix leaves. Where graphics processing units (GPUs) are available, both CPUs and GPUs are used for leaf-level multiplication work, making use of the full computing capacity of each node. The performance is evaluated for matrices with different sparsity structures, including examples from electronic structure calculations. Compared to methods that do not exploit data locality, our locality-aware approach reduces communication significantly, achieving essentially constant communication per node in weak scaling tests.
    Comment: 35 pages, 14 figures
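
    The quadtree representation described in the abstract (empty subtrees pruned, block-sparse submatrices at the leaves, recursion over 2x2 blocks) can be sketched in a few lines. Below is a minimal serial Python toy, not the Chunks and Tasks API: the names QuadNode, build, multiply, and LEAF_SIZE are hypothetical, dense NumPy leaves stand in for block-sparse submatrices, matrices are assumed square with power-of-two dimension, and the distributed chunk/task machinery and GPU leaf kernels are omitted.

    import numpy as np

    LEAF_SIZE = 2  # side length at which recursion stops and a dense leaf is stored

    class QuadNode:
        """Quadtree node: either a dense leaf block or four children.
        A child equal to None represents an all-zero submatrix, so sparsity
        can be exploited at any level of the hierarchy."""
        def __init__(self, leaf=None, children=None):
            self.leaf = leaf            # numpy array for leaf nodes, else None
            self.children = children    # [A11, A12, A21, A22] for internal nodes

    def build(m):
        """Recursively build a quadtree, pruning all-zero submatrices to None."""
        if not m.any():
            return None
        n = m.shape[0]
        if n <= LEAF_SIZE:
            return QuadNode(leaf=m.copy())
        h = n // 2
        quads = [m[:h, :h], m[:h, h:], m[h:, :h], m[h:, h:]]
        return QuadNode(children=[build(q) for q in quads])

    def add(a, b):
        """Sum of two quadtrees; None stands for a zero block."""
        if a is None:
            return b
        if b is None:
            return a
        if a.leaf is not None:
            return QuadNode(leaf=a.leaf + b.leaf)
        return QuadNode(children=[add(x, y) for x, y in zip(a.children, b.children)])

    def multiply(a, b):
        """Recursive 2x2 block multiply; zero (None) operands are skipped,
        so no work is spent on empty subtrees."""
        if a is None or b is None:
            return None
        if a.leaf is not None:
            return QuadNode(leaf=a.leaf @ b.leaf)  # leaf-level work (CPU or GPU in the paper)
        a11, a12, a21, a22 = a.children
        b11, b12, b21, b22 = b.children
        kids = [add(multiply(a11, b11), multiply(a12, b21)),
                add(multiply(a11, b12), multiply(a12, b22)),
                add(multiply(a21, b11), multiply(a22, b21)),
                add(multiply(a21, b12), multiply(a22, b22))]
        return None if all(k is None for k in kids) else QuadNode(children=kids)

    def to_dense(node, n):
        """Expand a quadtree back into a dense array (for checking only)."""
        if node is None:
            return np.zeros((n, n))
        if node.leaf is not None:
            return node.leaf
        h = n // 2
        c = [to_dense(ch, h) for ch in node.children]
        return np.block([[c[0], c[1]], [c[2], c[3]]])

    # Structurally disjoint operands: the zero product is detected from the
    # tree structure alone, without touching any leaf data.
    A = np.zeros((8, 8)); A[:4, :4] = np.arange(16.0).reshape(4, 4)
    B = np.zeros((8, 8)); B[4:, 4:] = np.eye(4)
    assert multiply(build(A), build(B)) is None

    # Overlapping structure: the result matches the dense product.
    B2 = np.zeros((8, 8)); B2[:4, :4] = np.eye(4)
    C = multiply(build(A), build(B2))
    assert np.allclose(to_dense(C, 8), A @ B2)

    Roughly speaking, in the paper's distributed setting each node of such a tree would be a chunk registered with the Chunks and Tasks runtime and each recursive multiplication a task, which is what lets communication follow the data locality of the tree rather than the full sparsity pattern.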