    AN EXTENDED GREEN-SASAO HIERARCHY OF CANONICAL TERNARY GALOIS FORMS AND UNIVERSAL LOGIC MODULES

    A new extended Green-Sasao hierarchy of families and forms, with a new sub-family for many-valued Reed-Muller logic, is introduced. Recently, two families of binary canonical Reed-Muller forms, called Inclusive Forms (IFs) and Generalized Inclusive Forms (GIFs), were proposed, where the second family was the first to include all minimum Exclusive Sum-Of-Products (ESOPs). In this paper, we propose, analogously to the binary case, two general families of canonical ternary Reed-Muller forms, called Ternary Inclusive Forms (TIFs) and their generalization, Ternary Generalized Inclusive Forms (TGIFs), where the second family includes minimum Galois Field Sum-Of-Products (GFSOPs) over the ternary Galois field GF(3). One of the basic motivations of this work is the application of these TIFs and TGIFs to finding the minimum GFSOP for many-valued input-output functions within logic synthesis, where a GFSOP minimizer based on IF polarity can be used to minimize the many-valued GFSOP expression for any given function. The realization of the presented S/D trees using Universal Logic Modules (ULMs) is also introduced, where ULMs are complete systems that can implement all possible logic functions utilizing the corresponding S/D expansions of the many-valued Shannon and Davio spectral transforms.
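
    As a rough, self-contained illustration of the arithmetic these forms build on (a sketch, not code from the paper; the term encoding below is an assumption made for the example), the following evaluates a GFSOP expression over GF(3):

        # Minimal sketch of GF(3) arithmetic underlying ternary GFSOP
        # expressions. Illustrative only; the paper's TIF/TGIF families
        # are not reproduced here.

        def gf3_add(a, b):
            """Addition in GF(3): integer addition modulo 3."""
            return (a + b) % 3

        def gf3_mul(a, b):
            """Multiplication in GF(3): integer multiplication modulo 3."""
            return (a * b) % 3

        def gfsop_eval(terms, x):
            """Evaluate a GFSOP, i.e. a GF(3) sum of product terms.

            Each term is (coefficient, exponents), encoding
            c * x1^e1 * ... * xn^en with all arithmetic in GF(3).
            """
            total = 0
            for coeff, exps in terms:
                prod = coeff
                for xi, ei in zip(x, exps):
                    prod = gf3_mul(prod, pow(xi, ei, 3))
                total = gf3_add(total, prod)
            return total

        # Example: f(x1, x2) = 2*x1^2 + x1*x2 over GF(3)
        terms = [(2, (2, 0)), (1, (1, 1))]
        print(gfsop_eval(terms, (2, 1)))  # (2*4 + 2) mod 3 = 1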

    Logic Synthesis for Established and Emerging Computing

    Logic synthesis is an enabling technology to realize integrated computing systems, and it entails solving computationally intractable problems through a plurality of heuristic techniques. A recent push toward further formalization of synthesis problems has proven very useful, both for attempting to solve some logic problems exactly, which is computationally feasible today for instances of limited size, and for creating new and more powerful heuristics based on problem decomposition. Moreover, technological advances including nanodevices, optical computing, and quantum and quantum cellular computing require new and specific synthesis flows to assess feasibility and scalability. This review highlights recent progress in logic synthesis and optimization, describing models, data structures, and algorithms, with specific emphasis on both design quality and emerging technologies. Example applications and results of novel techniques applied to established and emerging technologies are reported.
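
    As a hedged sketch of one data structure central to this literature (an And-Inverter Graph; the names and encoding here are illustrative assumptions, not taken from the review), the following represents and evaluates a small logic network:

        # Toy And-Inverter Graph (AIG): every gate is a two-input AND,
        # and every edge may carry an inversion flag.

        class AigNode:
            """A two-input AND node; each fanin is a (node, inverted) pair."""
            def __init__(self, fanin0, fanin1):
                self.fanin0 = fanin0  # (AigNode or input index, invert flag)
                self.fanin1 = fanin1

        def aig_eval(edge, assignment):
            """Evaluate an AIG edge (node, inverted) under input values."""
            node, inv = edge
            if isinstance(node, int):      # primary input
                value = assignment[node]
            else:                          # AND gate
                value = aig_eval(node.fanin0, assignment) and \
                        aig_eval(node.fanin1, assignment)
            return (not value) if inv else value

        # XOR(a, b) built from three AND nodes and inverted edges:
        n1 = AigNode((0, False), (1, True))            # a AND NOT b
        n2 = AigNode((0, True), (1, False))            # NOT a AND b
        xor = (AigNode((n1, True), (n2, True)), True)  # OR by De Morgan
        print([bool(aig_eval(xor, {0: a, 1: b}))
               for a in (0, 1) for b in (0, 1)])  # [False, True, True, False]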

    Cellular Automata

    Modelling and simulation are disciplines of major importance for science and engineering. There is no science without models, and simulation has nowadays become a very useful tool, sometimes an unavoidable one, for the development of both science and engineering. The main attractive feature of cellular automata is that, in spite of their conceptual simplicity, which allows ease of implementation for computer simulation as well as, in principle, detailed and complete mathematical analysis, they are able to exhibit a wide variety of amazingly complex behaviour. This feature of cellular automata has attracted the attention of researchers from a wide variety of fields, across the exact disciplines of science and engineering but also in the social sciences and sometimes beyond. The collective complex behaviour of numerous systems, which emerges from the interaction of a multitude of simple individuals, is being conveniently modelled and simulated with cellular automata for very different purposes. In this book, a number of innovative applications of cellular automata models in the fields of Quantum Computing, Materials Science, Cryptography and Coding, and Robotics and Image Processing are presented.
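
    The "simple rules, complex behaviour" point is easy to make concrete. Below is a generic elementary cellular automaton (Wolfram's Rule 110), a sketch not drawn from any specific chapter of the book:

        # One-dimensional, two-state, nearest-neighbour cellular automaton.
        # The 8-bit rule number encodes the next state for each of the
        # 8 possible neighbourhoods.
        RULE = 110

        def step(cells):
            """Apply the rule to every cell of a circular 1D lattice."""
            n = len(cells)
            return [
                (RULE >> ((cells[(i - 1) % n] << 2)
                          | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1
                for i in range(n)
            ]

        cells = [0] * 31 + [1] + [0] * 31   # single live cell in the middle
        for _ in range(16):
            print("".join(".#"[c] for c in cells))
            cells = step(cells)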

    New Data Structures and Algorithms for Logic Synthesis and Verification

    The strong interaction between Electronic Design Automation (EDA) tools and Complementary Metal-Oxide Semiconductor (CMOS) technology contributed substantially to the advancement of modern digital electronics. The continuous downscaling of CMOS Field Effect Transistor (FET) dimensions enabled the semiconductor industry to fabricate digital systems with higher circuit density at reduced costs. To keep pace with technology, EDA tools are challenged to handle both digital designs with growing functionality and device models of increasing complexity. Nevertheless, whereas the downscaling of CMOS technology requires ever more complex physical design models, the logic abstraction of a transistor as a switch has not changed even with the introduction of 3D FinFET technology. As a consequence, modern EDA tools are fine-tuned for CMOS technology, and the underlying design methodologies are based on CMOS logic primitives, i.e., negative unate logic functions. While it is clear that CMOS logic primitives will remain the building blocks for digital systems over the next ten years, there is no evidence that CMOS logic primitives are also the optimal basis for EDA software. In EDA, the efficiency of methods and tools is measured by different metrics, such as (i) the result quality, for example the performance of a digital circuit, (ii) the runtime, and (iii) the memory footprint on the host computer. With the aim of optimizing these metrics, conformity to a specific logic model is no longer important. Indeed, the key to the success of an EDA technique is the expressive power of the logic primitives used to handle and solve the problem, which determines the capability to reach better metrics. In this thesis, we investigate new logic primitives for electronic design automation tools. We improve the efficiency of logic representation, manipulation, and optimization tasks by taking advantage of majority and biconditional logic primitives. We develop synthesis tools exploiting majority and biconditional expressiveness. Our tools show strong results compared to state-of-the-art academic and commercial synthesis tools; indeed, we produce the best results for several public benchmarks. On top of this enhanced synthesis power, our methods are the natural and native logic abstraction for circuit design in emerging nanotechnologies, where majority and biconditional logic are the primitive gates for physical implementation. We accelerate formal methods by (i) studying properties of logic circuits and (ii) developing new frameworks for logic reasoning engines. We prove non-trivial dualities for the property-checking problem in logic circuits; our findings enable sensible speed-ups in solving circuit satisfiability. We develop an alternative Boolean satisfiability framework based on majority functions; we prove that the general problem remains intractable, but we show practical restrictions that can be solved efficiently. Finally, we focus on reversible logic, where we propose a new equivalence-checking approach that exploits the invertibility of computation and the functionality of reversible gates in the formulation of the problem. This enables a speed-up of one order of magnitude compared to the state-of-the-art solution. We argue that new approaches to solving EDA problems are necessary, as we have reached a point where keeping pace with design goals is tougher than ever.
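
    The two primitives the thesis exploits are easy to state concretely. The sketch below (illustrative only; the thesis' majority-based data structures are far richer than this) shows the three-input majority function and the biconditional, and checks that majority generalizes both AND and OR:

        def maj(a, b, c):
            """Three-input majority: true when at least two inputs are true."""
            return (a and b) or (a and c) or (b and c)

        def biconditional(a, b):
            """Biconditional (XNOR): true when both inputs agree."""
            return a == b

        # Fixing one majority input to 0 gives AND; fixing it to 1 gives OR.
        assert all(maj(a, b, 0) == (a and b) for a in (0, 1) for b in (0, 1))
        assert all(maj(a, b, 1) == (a or b) for a in (0, 1) for b in (0, 1))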

    Statistical Physics of Design

    Modern life increasingly relies on complex products that perform a variety of functions. The key difficulty of creating such products lies not in the manufacturing process, but in the design process. However, design problems are typically driven by multiple contradictory objectives and different stakeholders, have no obvious stopping criteria, and frequently prevent construction of prototypes or experiments. Such ill-defined, or "wicked", problems cannot be "solved" in the traditional sense with optimization methods. Instead, modern design techniques are focused on generating knowledge about the alternative solutions in the design space. In order to facilitate such knowledge generation, in this dissertation I develop the "Systems Physics" framework that treats the emergent structures within the design space as physical objects that interact via quantifiable forces. Mathematically, Systems Physics is based on maximal entropy statistical mechanics, which allows both drawing conceptual analogies between design problems and collective phenomena and performing numerical calculations to gain quantitative understanding. Systems Physics operates via a Model-Compute-Learn loop, with each step refining our thinking about design problems. I demonstrate the capabilities of Systems Physics in two very distinct case studies: Naval Engineering and self-assembly. For the Naval Engineering case, I focus on the established problem of arranging shipboard systems within the available hull space. I demonstrate the essential trade-off between minimizing the routing cost and maximizing the design flexibility, which can lead to abrupt phase transitions. I show how the design space can break into several locally optimal architecture classes that have very different robustness to external couplings. I illustrate how the topology of the shipboard functional network enters a tight interplay with the spatial constraints on placement. For the self-assembly problem, I show that the topology of self-assembled structures can be reliably encoded in the properties of the building blocks, so that the structure and the blocks can be jointly designed. The work presented here provides both conceptual and quantitative advancements. In order to properly port the language and formalism of statistical mechanics to the design domain, I critically re-examine such foundational ideas as system-bath coupling, coarse-graining, particle distinguishability, and direct and emergent interactions. I show that the design space can be packed into a special information structure, a tensor network, which allows a seamless transition from graphical visualization to sophisticated numerical calculations. This dissertation provides the first quantitative treatment of the design problem that is not reduced to the narrow goals of mathematical optimization. Using the statistical mechanics perspective allows me to move beyond the dichotomy of "forward" and "inverse" design and to frame design as a knowledge generation process instead. Such framing opens the way to further studies of the design space structures and of the time- and path-dependent phenomena in design. The present work also benefits from, and contributes to, the philosophical interpretations of statistical mechanics developed by the soft matter community over the past 20 years.
The discussion goes far beyond physics and engages with literature from materials science, naval engineering, optimization problems, design theory, network theory, and economic complexity.
    PhD thesis in Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163133/1/aklishin_1.pd
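
    The maximal-entropy backbone of the framework can be sketched in a few lines. The cost values and "temperature" below are toy assumptions, not the dissertation's naval-arrangement model; they only illustrate how a Boltzmann weighting trades off cost against design flexibility:

        import math

        designs = ["A", "B", "C", "D"]
        cost = {"A": 1.0, "B": 1.5, "C": 1.6, "D": 3.0}  # e.g. routing cost

        def boltzmann(T):
            """Maximal-entropy distribution over designs at temperature T:
            low T concentrates probability on the cheapest design, while
            high T spreads it out, preserving flexibility."""
            w = {d: math.exp(-cost[d] / T) for d in designs}
            z = sum(w.values())
            return {d: wi / z for d, wi in w.items()}

        for T in (0.1, 1.0, 10.0):
            print(T, {d: round(p, 3) for d, p in boltzmann(T).items()})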

    Prospects for Declarative Mathematical Modeling of Complex Biological Systems

    Declarative modeling uses symbolic expressions to represent models. With such expressions, one can formalize high-level mathematical computations on models that would be difficult or impossible to perform directly on a lower-level simulation program written in a general-purpose programming language. Examples of such computations on models include model analysis, relatively general-purpose model-reduction maps, and the initial phases of model implementation, all of which should preserve or approximate the mathematical semantics of a complex biological model. The potential advantages are particularly relevant in the case of developmental modeling, wherein complex spatial structures exhibit dynamics at the molecular, cellular, and organogenic levels to relate genotype to multicellular phenotype. Multiscale modeling can benefit both from the expressive power of declarative modeling languages and from the application of model-reduction methods to link models across scales. Building on previous work, here we define declarative modeling of complex biological systems by defining the operator algebra semantics of an increasingly powerful series of declarative modeling languages, including reaction-like dynamics of parameterized and extended objects; we define semantics-preserving implementation and semantics-approximating model-reduction transformations; and we outline a "meta-hierarchy" for organizing declarative models and the mathematical methods that can fruitfully manipulate them.
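
    One way to picture "reaction-like dynamics" stated declaratively is a reaction list from which an implementation is derived mechanically. This is a toy rendition under assumed mass-action semantics, not the paper's operator-algebra formalism:

        # Reactions as data: (reactants, products, rate constant).
        reactions = [
            ({"A": 1, "B": 1}, {"C": 1}, 1.0),   # A + B -> C
            ({"C": 1}, {"A": 1, "B": 1}, 0.5),   # C -> A + B
        ]

        def derivatives(state):
            """Mass-action ODE right-hand side derived from the reactions."""
            d = {s: 0.0 for s in state}
            for reactants, products, k in reactions:
                flux = k
                for s, n in reactants.items():
                    flux *= state[s] ** n
                for s, n in reactants.items():
                    d[s] -= n * flux
                for s, n in products.items():
                    d[s] += n * flux
            return d

        # The most naive "implementation" of the model: forward Euler.
        state, dt = {"A": 1.0, "B": 1.0, "C": 0.0}, 0.01
        for _ in range(1000):
            d = derivatives(state)
            state = {s: v + dt * d[s] for s, v in state.items()}
        print({s: round(v, 3) for s, v in state.items()})  # near equilibrium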

    Dielectric and Spin Susceptibilities using Density Functional Theory

    The response of a system to an external perturbation is an almost ubiquitous topic in physics. The application of perturbation theory through an electronic structure method such as Density Functional Theory has made significant contributions over the last few decades. Its implementation, aptly named Density Functional Perturbation Theory, has seen use in a number of ab initio calculations of a variety of physical properties of materials which depend on their lattice-dynamical behaviour: specific heats, thermal expansion, and infrared, Raman, and optical spectra, to name just a few. Understanding these complex phenomena has significantly corroborated the current quantum picture of solids.

    The Sternheimer scheme falls under the umbrella of methods to compute response functions in Time-Dependent Density Functional Theory. Initially developed to study the electronic polarisability, it is now commonly utilised in the field of lattice dynamics to study phonons and related crystal properties. The Sternheimer equation has also been used to model spin-wave excitations through computation of the magnetic susceptibility. The poles of the susceptibility are known to correspond to magnon excitations, and these computations have been corroborated by experimental inelastic neutron scattering data. These excitations are of a transverse nature, in that they involve fluctuations of the magnetisation perpendicular to a chosen z axis. The lesser-known longitudinal excitations involve fluctuations of the magnetisation along z; an investigation of the collective modes present in transition metals may be carried out from self-consistent computations of the Sternheimer equation.

    The dielectric response is an important linear response function in solid-state physics. Its computation from first principles provides an invaluable tool in the characterisation of optical properties and can be compared to the experimental method of spectroscopic ellipsometry.

    The work in this thesis concerns the implementation of the Sternheimer method in computing the dynamical response to either an external plane-wave or spin-polarised perturbation. These response functions are the dielectric and spin (magnetisation) susceptibilities, respectively. The scheme to compute the frequency-dependent dielectric response is implemented in a plane-wave pseudopotential DFT package. Calculations are performed on the semiconducting systems of Silicon, Gallium Arsenide, Zinc Oxide, and perovskite Methylammonium Lead Triiodide. The overall shape of the dielectric spectra is in good agreement with spectroscopic ellipsometry data; however, there is a shift, which is attributed to the limitations of DFT. The scheme developed to compute longitudinal spin dynamics is applied to the transition-metal systems of body-centred cubic Iron and face-centred cubic Nickel. In a similar manner to another first-principles approach, a single dominant peak is shown to be present in the magnetisation channel, with the charge dynamics being effectively null in comparison. However, the exact position of these peaks is not in agreement with the other approach; difficulties pertaining to self-consistent optimisation are discussed.
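
    For orientation, the frequency-dependent Sternheimer equation commonly appears (in generic DFPT notation; this is a schematic form, not transcribed from the thesis) as

        % First-order response of the occupied orbital \psi_n to a
        % perturbation \Delta V at frequency \omega; P_c projects onto
        % the unoccupied subspace.
        \left( H - \epsilon_n \pm \omega \right)
            \left| \Delta\psi_n^{\pm} \right\rangle
            = - P_c \, \Delta V \left| \psi_n \right\rangle

    where H is the unperturbed Kohn-Sham Hamiltonian and \epsilon_n, \psi_n its eigenpairs; the dielectric and spin susceptibilities then follow from the density and magnetisation induced by the solutions.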

    AUTOMATING DATA-LAYOUT DECISIONS IN DOMAIN-SPECIFIC LANGUAGES

    A long-standing challenge in High-Performance Computing (HPC) is the simultaneous achievement of programmer productivity and hardware computational efficiency. The challenge has been exacerbated by the onset of multi- and many-core CPUs and accelerators. Only a few expert programmers have been able to hand-code the domain-specific data transformations and vectorization schemes needed to extract the best possible performance on such architectures. In this research, we examined the possibility of automating these methods by developing a Domain-Specific Language (DSL) framework. Our DSL approach extends C++14 by embedding into it a high-level data-parallel array language and by using a domain-specific compiler to compile to hybrid-parallel code. We also implemented an array index-space transformation algebra within this high-level array language to manipulate array data-layouts and data-distributions. The compiler introduces a novel method for SIMD auto-vectorization based on array data-layouts. Our new auto-vectorization technique is shown to outperform the default auto-vectorization strategy by up to 40% for stencil computations. The compiler also automates distributed data movement, overlapping local computation with remote data movement using polyhedral integer set analysis. Along with these main innovations, we developed a new technique using C++ template metaprogramming for developing embedded DSLs in C++. We also proposed a domain-specific compiler intermediate representation that simplifies data-flow analysis of abstract DSL constructs. We evaluated our framework by constructing a DSL for the HPC grand-challenge domain of lattice quantum chromodynamics. Our DSL yielded performance gains of up to twice the flop rate over existing production C code for selected kernels. This gain in performance was obtained while using less than one-tenth the lines of code. The performance of this DSL was also competitive with the best hand-optimized and hand-vectorized code, and an order of magnitude better than that of existing production DSLs.
    Doctor of Philosophy
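
    The index-space idea can be illustrated with the simplest possible layout transform, an array-of-structures to structure-of-arrays permutation (a sketch under assumed shapes, not the framework's actual transformation algebra):

        import numpy as np

        # Logical array: sites x components (array-of-structures view).
        sites, comps = 8, 3
        aos = np.arange(sites * comps).reshape(sites, comps)

        # Permute the index space so components are outermost
        # (structure-of-arrays view): each component is now contiguous,
        # which is what makes per-site operations SIMD-friendly.
        soa = np.ascontiguousarray(aos.T)

        # Same logical element, two physical layouts.
        assert aos[5, 2] == soa[2, 5]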

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, and revolutionary reversible logic gates and circuits, but these can be realized and have a lasting effect only if firm conceptual and theoretical foundations are established first.
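
    Reversibility at the gate level is compact enough to demonstrate directly. A generic illustration (the survey's scope is of course much broader): the Toffoli gate is universal for reversible Boolean logic and is its own inverse:

        def toffoli(a, b, c):
            """CCNOT: flip the target bit c iff both controls a, b are set."""
            return a, b, c ^ (a & b)

        # Applying the gate twice restores every input: the computation
        # can be run backwards as easily as forwards.
        for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
            assert toffoli(*toffoli(*bits)) == bits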