
    Routing for analog chip designs at NXP Semiconductors

    During the study week 2011 we worked on the question of how to automate certain aspects of the design of analog chips. Here we focused on the task of connecting different blocks with electrical wiring, which is particularly tedious to do by hand. For digital chips there is a wealth of research available on this, since there the number of blocks makes it hopeless to do the design by hand. Hence, we set ourselves the task of finding solutions that build on this previous research while being tailored to the specific setting given by NXP. This resulted in a heuristic approach, which we presented at the end of the week in the form of a prototype tool. In this report we give a detailed account of the ideas we used, and describe possibilities to extend the approach.

    SAWdoubler: a program for counting self-avoiding walks

    This article presents SAWdoubler, a package for counting the total number Z(N) of self-avoiding walks (SAWs) on a regular lattice by the length-doubling method, of which the basic concept has been published previously by us. We discuss an algorithm for the creation of all SAWs of length N, efficient storage of these SAWs in a tree data structure, and an algorithm for the computation of correction terms to the count Z(2N) for SAWs of double length, removing all combinations of two intersecting single-length SAWs. We present an efficient numbering of the lattice sites that enables exploitation of symmetry and leads to a smaller tree data structure; this numbering is by increasing Euclidean distance from the origin of the lattice. Furthermore, we show how the computation can be parallelised by distributing the iterations of the main loop of the algorithm over the cores of a multicore architecture. Experimental results on the 3D cubic lattice demonstrate that Z(28) can be computed on a dual-core PC in only 1 hour and 40 minutes, with a speedup of 1.56 compared to the single-core computation and with a gain by using symmetry of a factor of 26. We present results for memory use and show how the computation is made to fit in 4 Gbyte RAM. It is easy to extend the SAWdoubler software to other lattices; it is publicly available under the GNU LGPL license. Comment: 29 pages, 3 figures.
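    To make concrete what the count Z(N) refers to, the following is a minimal brute-force sketch that enumerates SAWs on the 2D square lattice by depth-first search. It only illustrates the quantity being counted; the length-doubling method in SAWdoubler computes Z(2N) from walks of length N and is far more efficient than this exponential-time enumeration.

    ```python
    # Naive depth-first enumeration of self-avoiding walks (SAWs) on the
    # 2D square lattice. A SAW of length n is an n-step lattice path from
    # the origin that never revisits a site; Z(n) is the number of such walks.
    def count_saws(n):
        """Return Z(n), the number of n-step SAWs starting at the origin."""
        steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def extend(pos, visited, remaining):
            if remaining == 0:
                return 1
            total = 0
            for dx, dy in steps:
                nxt = (pos[0] + dx, pos[1] + dy)
                if nxt not in visited:        # self-avoidance constraint
                    visited.add(nxt)
                    total += extend(nxt, visited, remaining - 1)
                    visited.remove(nxt)        # backtrack
            return total

        return extend((0, 0), {(0, 0)}, n)

    # Z(1)..Z(4) on the square lattice: 4, 12, 36, 100
    ```

    The tree data structure and symmetry-exploiting site numbering described in the abstract replace exactly this kind of exhaustive recursion.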

    DNA electrophoresis studied with the cage model

    The cage model for polymer reptation, proposed by Evans and Edwards, and its recent extension to model DNA electrophoresis, are studied by numerically exact computation of the drift velocities for polymers with a length L of up to 15 monomers. The computations show the Nernst-Einstein regime (v ~ E) followed by a regime where the velocity decreases exponentially with the applied electric field strength. In agreement with de Gennes' reptation arguments, we find that asymptotically for large polymers the diffusion coefficient D decreases quadratically with polymer length; for the cage model, the proportionality coefficient is DL^2=0.175(2). Additionally we find that the leading correction term for finite polymer lengths scales as N^{-1/2}, where N=L-1 is the number of bonds. Comment: LaTeX (cjour.cls), 15 pages, 6 figures, added correctness proof of kink representation approach.

    On the efficient parallel computation of Legendre transforms

    In this article, we discuss a parallel implementation of efficient algorithms for the computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the accuracy, efficiency, and scalability of our implementation. The algorithms were implemented in ANSI C using the BSPlib communications library. We also present a new algorithm for computing the cosine transform of two vectors at the same time.
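    For readers unfamiliar with the transform in question: a Legendre transform expresses data in the basis of Legendre polynomials P_k, which are cheaply evaluated by a three-term recurrence. The sketch below shows that recurrence and a naive O(N^2) synthesis step; the point of the Driscoll-Healy algorithm discussed above is to beat this quadratic cost. (The function names here are illustrative, not from the paper's implementation.)

    ```python
    # Evaluate Legendre polynomials by the standard three-term recurrence:
    #   (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),
    # with P_0(x) = 1 and P_1(x) = x.
    def legendre_values(k_max, x):
        """Return [P_0(x), P_1(x), ..., P_{k_max}(x)]."""
        vals = [1.0, x]
        for k in range(1, k_max):
            vals.append(((2 * k + 1) * x * vals[k] - k * vals[k - 1]) / (k + 1))
        return vals[:k_max + 1]

    def legendre_synthesis(coeffs, x):
        """Naive O(N) evaluation of sum_k coeffs[k] * P_k(x); repeating this
        for N sample points gives the O(N^2) transform that fast algorithms
        such as Driscoll-Healy improve upon."""
        return sum(c * p for c, p in zip(coeffs, legendre_values(len(coeffs) - 1, x)))
    ```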

    The Medicago genome provides insight into the evolution of rhizobial symbioses

    Legumes (Fabaceae or Leguminosae) are unique among cultivated plants for their ability to carry out endosymbiotic nitrogen fixation with rhizobial bacteria, a process that takes place in a specialized structure known as the nodule. Legumes belong to one of the two main groups of eurosids, the Fabidae, which includes most species capable of endosymbiotic nitrogen fixation [1]. Legumes comprise several evolutionary lineages derived from a common ancestor ~60 million years (Myr) ago. Papilionoids are the largest clade, dating nearly to the origin of legumes and containing most cultivated species [2]. Medicago truncatula is a long-established model for the study of legume biology. Here we describe the draft sequence of the M. truncatula euchromatin based on a recently completed BAC assembly supplemented with Illumina shotgun sequence, together capturing ~94% of all M. truncatula genes. A whole-genome duplication (WGD) approximately 58 Myr ago had a major role in shaping the M. truncatula genome and thereby contributed to the evolution of endosymbiotic nitrogen fixation. Subsequent to the WGD, the M. truncatula genome experienced higher levels of rearrangement than two other sequenced legumes, Glycine max and Lotus japonicus. M. truncatula is a close relative of alfalfa (Medicago sativa), a widely cultivated crop with limited genomics tools and complex autotetraploid genetics. As such, the M. truncatula genome sequence provides significant opportunities to expand alfalfa’s genomic toolbox.

    Partitioning 3D space for parallel many-particle simulations

    In a common approach for parallel processing applied to simulations of many-particle systems with short-ranged interactions and uniform density, the simulation cell is partitioned into domains of equal shape and size, each of which is assigned to one processor. We compare the commonly used simple-cubic (SC) domain shape to domain shapes chosen as the Voronoi cells of BCC and FCC lattices. The latter two are found to result in superior partitionings with respect to communication overhead. Other domain shapes, relevant for a small number of processors, are also discussed. The higher efficiency with BCC and FCC partitionings is demonstrated in simulations of the sillium model for amorphous silicon.

    A medium-grain method for fast 2D bipartitioning of sparse matrices

    We present a new hypergraph-based method, the medium-grain method, for solving the sparse matrix partitioning problem. This problem arises when distributing data for parallel sparse matrix-vector multiplication. In the medium-grain method, each matrix nonzero is assigned to either a row group or a column group, and these groups are represented by vertices of the hypergraph. For an m x n sparse matrix, the resulting hypergraph has m + n vertices and m + n hyperedges. Furthermore, we present an iterative refinement procedure for improvement of a given partitioning, based on the medium-grain method, which can be applied as a cheap but effective postprocessing step after any partitioning method. The medium-grain method is able to produce fully two-dimensional bipartitionings, but its computational complexity equals that of one-dimensional methods. Experimental results for a large set of sparse test matrices show that the medium-grain method with iterative refinement produces bipartitionings with lower communication volume compared to current state-of-the-art methods, and is faster at producing them.
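    The m + n vertices / m + n hyperedges structure described above can be sketched as follows. Each nonzero a_ij is attached to either its row group r_i or its column group c_j, and every row and every column contributes one hyperedge containing the groups its nonzeros were assigned to. Note that the assignment rule used here (attach the nonzero to the sparser of its row and column) is an illustrative assumption, not necessarily the exact rule of the paper.

    ```python
    from collections import defaultdict

    def medium_grain_hypergraph(m, n, nonzeros):
        """Build a medium-grain-style hypergraph for an m x n sparse matrix.

        nonzeros: iterable of (i, j) index pairs.
        Vertices are the m row groups 'r0'..'r{m-1}' and n column groups
        'c0'..'c{n-1}'; hyperedges ('nets') are one per row and one per column,
        so the hypergraph has m + n vertices and m + n hyperedges.
        """
        row_nnz = defaultdict(int)
        col_nnz = defaultdict(int)
        for i, j in nonzeros:
            row_nnz[i] += 1
            col_nnz[j] += 1

        nets = defaultdict(set)  # hyperedge name -> set of incident vertices
        for i, j in nonzeros:
            # Illustrative assignment rule (an assumption): attach the nonzero
            # to its row group if the row has no more nonzeros than the column,
            # otherwise to its column group.
            v = f"r{i}" if row_nnz[i] <= col_nnz[j] else f"c{j}"
            # The chosen vertex is incident to both the row net and column net
            # of the nonzero, so cutting either net implies communication.
            nets[f"row{i}"].add(v)
            nets[f"col{j}"].add(v)
        return nets
    ```

    Partitioning the m + n vertices into two parts then induces a fully two-dimensional assignment of the nonzeros, while the hypergraph itself stays as small as in one-dimensional row or column partitioning.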