3,328 research outputs found

    Efficient Generation of Stable Planar Cages for Chemistry

    In this paper we describe an algorithm which generates all colored planar maps with a good minimum sparsity from simple motifs and rules to connect them. An implementation of this algorithm is available and is used by chemists who want to quickly generate all sound molecules they can obtain by mixing some basic components. Comment: 17 pages, 7 figures. Accepted at the 14th International Symposium on Experimental Algorithms (SEA 2015).
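    The motif-and-rule idea can be pictured with a small, hedged sketch: a handful of hypothetical motifs expose colored attachment ports, a made-up compatibility rule says which port colors may be joined, and a brute-force search enumerates the allowed connections. This is only an illustration of the general idea, not the paper's algorithm.

    from itertools import permutations

    # Hypothetical motifs: each motif exposes a list of colored attachment ports.
    MOTIFS = {
        "square": ["a", "a", "b", "b"],
        "triangle": ["a", "b", "b"],
    }

    # Hypothetical connection rule: which pairs of port colors may be joined.
    COMPATIBLE = {("a", "b"), ("b", "a")}

    def connections(motif_x, motif_y):
        """Yield every matching of the smaller motif's ports onto ports of the
        larger motif that respects the color rule (brute force, illustration only)."""
        small, large = sorted((MOTIFS[motif_x], MOTIFS[motif_y]), key=len)
        for chosen in permutations(range(len(large)), len(small)):
            pairing = tuple(zip(range(len(small)), chosen))
            if all((small[i], large[j]) in COMPATIBLE for i, j in pairing):
                yield pairing

    if __name__ == "__main__":
        for pairing in connections("triangle", "square"):
            print(pairing)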

    The predictor-adaptor paradigm : automation of custom layout by flexible design


    Graph layout for applications in compiler construction

    We address graph visualization from the viewpoint of compiler construction. Most data structures in compilers are large, dense graphs such as annotated control flow graphs, syntax trees, and dependency graphs. Our main focus is the animation and interactive exploration of these graphs. Fast layout heuristics and powerful browsing methods are needed. We give a survey of layout heuristics for general directed and undirected graphs and present the browsing facilities that help to manage large structured graphs.
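    As an illustration of the kind of fast layout heuristic such a survey covers, the sketch below computes a longest-path layer assignment, the first phase of a classic Sugiyama-style layered layout, on a made-up acyclic control-flow graph. It stands in for one common heuristic and is not the tool described here.

    # Made-up acyclic control-flow graph, as an adjacency list.
    EDGES = {
        "entry": ["cond"],
        "cond": ["then", "else"],
        "then": ["join"],
        "else": ["join"],
        "join": ["exit"],
        "exit": [],
    }

    def assign_layers(edges):
        """Longest-path layering: a node's layer is the length of the longest
        path reaching it from a source node (one classic layered-layout heuristic)."""
        preds = {v: [] for v in edges}
        for u, targets in edges.items():
            for v in targets:
                preds[v].append(u)
        layers = {}
        def layer(v):
            if v not in layers:
                layers[v] = 0 if not preds[v] else 1 + max(layer(p) for p in preds[v])
            return layers[v]
        for v in edges:
            layer(v)
        return layers

    print(assign_layers(EDGES))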

    On Semantic Word Cloud Representation

    We study the problem of computing semantic-preserving word clouds in which semantically related words are close to each other. While several heuristic approaches have been described in the literature, we formalize the underlying geometric algorithm problem: Word Rectangle Adjacency Contact (WRAC). In this model each word is associated with a rectangle of fixed dimensions, and the goal is to represent semantic relatedness between words by ensuring that the corresponding rectangles touch. We design and analyze efficient polynomial-time algorithms for some variants of the WRAC problem, show that several general variants are NP-hard, and describe a number of approximation algorithms. Finally, we experimentally demonstrate that our theoretically sound algorithms outperform the earlier heuristics.
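    To make the WRAC setting concrete, here is a hedged greedy sketch: fixed-size rectangles are placed so that each new word touches the already-placed word it is most similar to. The word sizes and similarity scores are invented, and this is not one of the paper's algorithms.

    # Hypothetical word boxes (width, height) and pairwise similarity scores.
    SIZES = {"graph": (60, 20), "layout": (70, 20), "word": (50, 20), "cloud": (55, 20)}
    SIMILARITY = {("graph", "layout"): 0.9, ("word", "cloud"): 0.8, ("layout", "word"): 0.3}

    def sim(a, b):
        return SIMILARITY.get((a, b)) or SIMILARITY.get((b, a)) or 0.0

    def overlaps(r, s):
        (x1, y1, w1, h1), (x2, y2, w2, h2) = r, s
        return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

    def greedy_place(words):
        placed = {}                                    # word -> (x, y, w, h)
        for word in words:
            w, h = SIZES[word]
            if not placed:
                placed[word] = (0, 0, w, h)
                continue
            # Anchor on the most similar word already placed.
            anchor = max(placed, key=lambda p: sim(word, p))
            ax, ay, aw, ah = placed[anchor]
            # Try the four sides of the anchor so the two rectangles touch;
            # if all four collide with placed boxes, the word is skipped.
            for cand in [(ax + aw, ay, w, h), (ax - w, ay, w, h),
                         (ax, ay + ah, w, h), (ax, ay - h, w, h)]:
                if not any(overlaps(cand, r) for r in placed.values()):
                    placed[word] = cand
                    break
        return placed

    print(greedy_place(["graph", "layout", "word", "cloud"]))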

    Transistor-Level Layout of Integrated Circuits

    In this dissertation, we present the toolchain BonnCell and its underlying algorithms. It has been developed in close cooperation with the IBM Corporation and automatically generates the geometry for functional groups of 2 to approximately 50 transistors. Its input consists of a set of transistors, including properties like their sizes and their types, a specification of their connectivity, and parameters to flexibly control the technological framework as well as the algorithms' behavior. Using this data, the tool computes a detailed geometric realization of the circuit as polygonal shapes on 16 layers. To this end, a placement routine configures the transistors and arranges them in the plane, which is the main subject of this thesis. Subsequently, a routing engine determines wires connecting the transistors to ensure the circuit's desired functionality. We propose and analyze a family of algorithms that arranges sets of transistors in the plane such that a multi-criteria target function is optimized. The primary goal is to obtain solutions that are as compact as possible because chip area is a valuable resource in modern technologies. In addition to the core algorithms, we formulate variants that handle particularly structured instances in a suitable way. We show that for 90% of the instances in a representative test bed provided by IBM, BonnCell succeeds in generating fully functional layouts, including the placement of the transistors and a routing of their interconnections. Moreover, BonnCell is in wide use within IBM's groups that are concerned with transistor-level layout - a task that had been performed manually before our automation was available. Beyond the processing of isolated test cases, two large-scale examples of industrial applications of the tool are presented: on the one hand, the initial design phase of a large SRAM unit required only half of the expected 3-month period; on the other hand, BonnCell provided valuable input aiding central decisions in the early concept phase of the new 14 nm technology generation.
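    The kind of multi-criteria target function mentioned above can be pictured with a toy cost model that combines the placement's bounding-box area (compactness) with a half-perimeter wirelength estimate (routability). The data model, weights, and example placement are assumptions for illustration, not BonnCell's actual objective.

    def bounding_box_area(placement):
        """placement: name -> (x, y, width, height) of each transistor shape."""
        xs = [x for x, y, w, h in placement.values()] + \
             [x + w for x, y, w, h in placement.values()]
        ys = [y for x, y, w, h in placement.values()] + \
             [y + h for x, y, w, h in placement.values()]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))

    def hpwl(placement, nets):
        """Half-perimeter wirelength over all nets (lists of transistor names),
        using the centre of each placed shape as its pin location."""
        total = 0
        for net in nets:
            cx = [placement[t][0] + placement[t][2] / 2 for t in net]
            cy = [placement[t][1] + placement[t][3] / 2 for t in net]
            total += (max(cx) - min(cx)) + (max(cy) - min(cy))
        return total

    def cost(placement, nets, area_weight=1.0, wire_weight=0.1):
        # Weighted sum of chip area and estimated wiring cost (illustrative weights).
        return area_weight * bounding_box_area(placement) + wire_weight * hpwl(placement, nets)

    example = {"t1": (0, 0, 4, 10), "t2": (4, 0, 4, 10), "t3": (0, 10, 4, 10)}
    print(cost(example, nets=[["t1", "t2"], ["t2", "t3"]]))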

    GM : a gate matrix layout generator


    Efficient Mapping of Neural Network Models on a Class of Parallel Architectures.

    This dissertation develops a formal and systematic methodology for efficient mapping of several contemporary artificial neural network (ANN) models onto k-ary n-cube parallel architectures (KNCs). We apply the general mapping to several important ANN models, including feedforward ANNs trained with the backpropagation algorithm, radial basis function networks, cascade correlation learning, and adaptive resonance theory networks. Our approach utilizes a parallel task graph representing concurrent operations of the ANN model during training. The mapping of the ANN is performed in two steps. First, the parallel task graph of the ANN is mapped to a virtual KNC of compatible dimensionality; this involves decomposing each operation into its atomic tasks. Second, the dimensionality of the virtual KNC architecture is recursively reduced through a sequence of transformations until a desired metric is optimized. We refer to this process as folding the virtual architecture. The optimization criteria we consider in this dissertation are defined in terms of the iteration time of the algorithm on the folded architecture. If necessary, the mapping scheme may utilize a subset of the processors of a given KNC architecture if this results in the most efficient simulation. A unique feature of our mapping is that it systematically selects an appropriate degree of parallelism, leading to a highly efficient realization of the ANN model on KNC architectures. A novel feature of our work is its ability to efficiently map unit-allocating ANNs. These networks possess a dynamic structure which grows during training. We present a highly efficient scheme for simulating such networks on existing KNC parallel architectures. We assume an upper bound on the size of the neural network and perform the folding such that the iteration time of the largest network is minimized. We show that our mapping leads to near-optimal simulation of smaller instances of the neural network. In addition, with our mapping no data migration or task rescheduling is needed as the size of the network grows.
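    The folding step can be pictured with a minimal sketch in which one dimension of a virtual k-ary n-cube is collapsed, so that several virtual processors are simulated by a single physical processor. This is a simplified stand-in for the dissertation's sequence of transformations, not the actual scheme.

    from itertools import product

    def fold(k, n, dim):
        """Map each node of a virtual k-ary n-cube to a node of a k-ary (n-1)-cube
        by dropping coordinate 'dim'; returns physical node -> list of virtual nodes."""
        mapping = {}
        for virtual in product(range(k), repeat=n):
            physical = virtual[:dim] + virtual[dim + 1:]
            mapping.setdefault(physical, []).append(virtual)
        return mapping

    # Fold a 3-ary 3-cube (27 virtual processors) onto a 3-ary 2-cube (9 physical).
    folded = fold(k=3, n=3, dim=2)
    for phys, virt in sorted(folded.items()):
        print(phys, "simulates", virt)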