
    In-Memory Computing Using Formal Methods and Paths-Based Logic

    The continued scaling of the CMOS device has been largely responsible for the increase in computational power and consequent technological progress over the last few decades. However, the end of Dennard scaling has interrupted this era of sustained exponential growth in computing performance. Indeed, we are quickly reaching an impasse in the form of limitations in the lithographic processes used to fabricate CMOS devices and, more dire still, we are beginning to face fundamental physical phenomena, such as quantum tunneling, that are pervasive at the nanometer scale. These phenomena manifest as prohibitively high leakage currents and process variations, leading to inaccurate computations. As a result, there has been a surge of interest in computing architectures that can replace traditional CMOS transistor-based methods. This thesis is a thorough investigation of how computations can be performed on one such architecture, called a crossbar. The methods proposed in this document apply to any crossbar consisting of two-terminal connective devices. First, we demonstrate how paths of electric current between two wires can be used as design primitives in a crossbar. We then leverage principles from the field of formal methods, in particular the area of bounded model checking, to automate the synthesis of crossbar designs for computing arithmetic operations. We demonstrate that our approach yields circuits that are state-of-the-art in terms of the number of operations required to perform a computation. Finally, we look at the benefits of using a 3D crossbar for computation; that is, a crossbar consisting of multiple layers of interconnects. A novel 3D crossbar computing paradigm is proposed for solving the Boolean matrix multiplication and transitive closure problems, and we show how this paradigm can be utilized, with small modifications, in the 3D XPoint crossbar memory architecture recently announced by Intel.
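    The flow-based primitive described above can be sketched in software: an output bit is 1 exactly when a low-resistance path of devices connects two designated wires. The following Python sketch (the wire labels r0/c0 and the toy connectivity map are hypothetical illustrations, not designs from the thesis) models a crossbar as a graph and checks path existence with a breadth-first search.

```python
from collections import deque

def path_exists(crossbar, src, dst):
    """Return True if a path of ON (low-resistance) devices connects src to dst.

    `crossbar` maps each wire label to the set of wires reachable through
    devices programmed into their low-resistance state.
    """
    seen, queue = {src}, deque([src])
    while queue:
        wire = queue.popleft()
        if wire == dst:
            return True
        for nxt in crossbar.get(wire, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical 2x2 fragment: row wires r0/r1, column wires c0/c1.
# ON devices sit at (r0, c0) and (r1, c0), so r0 and r1 are connected via c0.
crossbar = {"r0": {"c0"}, "c0": {"r0", "r1"}, "r1": {"c0"}, "c1": set()}
print(path_exists(crossbar, "r0", "r1"))  # True  -> output bit 1
print(path_exists(crossbar, "r0", "c1"))  # False -> output bit 0
```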

    Automated Synthesis of Unconventional Computing Systems

    Despite decades of advancements, modern computing systems based on the von Neumann architecture still carry its shortcomings. Moore's law, which had substantially masked the effects of the inherent memory-processor bottleneck of the von Neumann architecture, has slowed down as transistor dimensions near atomic sizes. On the other hand, modern computational requirements, driven by machine learning, pattern recognition, artificial intelligence, data mining, and IoT, are growing at the fastest pace ever. By their inherent nature, these applications are particularly affected by communication bottlenecks, because processing them requires a large number of simple operations involving data retrieval and storage. The need to address the problems of conventional computing systems at the fundamental level has given rise to several unconventional computing paradigms. In this dissertation, we have made advances in the automated synthesis of two such paradigms: in-memory computing and stochastic computing. In-memory computing circumvents the problem of limited communication bandwidth by unifying processing and storage at the same physical locations. The advent of nanoelectronic devices in the last decade has made in-memory computing an energy-, area-, and cost-effective alternative to conventional computing. We have used Binary Decision Diagrams (BDDs) for in-memory computing on memristor crossbars. Specifically, we have used Free-BDDs, a special class of binary decision diagrams, to synthesize crossbars for flow-based in-memory computing. Stochastic computing is a re-emerging discipline with area and power requirements several times smaller than those of conventional computing systems. It is especially suited to fault-tolerant applications such as image processing, artificial intelligence, and pattern recognition. We have proposed a decision-procedure-based iterative algorithm to synthesize Linear Finite State Machines (LFSMs) for stochastically computing non-linear functions such as polynomials, exponentials, and hyperbolic functions.
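    As a rough illustration of the stochastic-computing principle mentioned above (not of the LFSM synthesis procedure itself), the sketch below encodes two values as unipolar random bit-streams and multiplies them with a single AND gate; the stream length and probabilities are arbitrary choices for the example.

```python
import random

def to_stream(p, n, rng):
    """Encode probability p as a unipolar stochastic bit-stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(bits):
    """Decode a bit-stream back to a probability estimate."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 10_000
a = to_stream(0.6, n, rng)
b = to_stream(0.5, n, rng)

# A single AND gate multiplies the encoded values: 0.6 * 0.5 = 0.3.
product = [x & y for x, y in zip(a, b)]
print(round(stream_value(product), 2))  # approximately 0.3
```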

    New Approaches for Memristive Logic Computations

    Over the past five decades, exponential advances in device integration have been observed in microelectronics for memory and computation applications. These advances are closely tied to miniaturization in integrated circuit technologies. However, this miniaturization is reaching its physical limit (i.e., the end of Moore's law), and it is also causing a dramatic heat-dissipation problem in integrated circuits. Additionally, as semiconductor devices approach the physical limits of the fabrication process, the delay of moving data between computing and memory units grows, decreasing performance. The market demand for faster computers with lower power consumption can be addressed by new emerging technologies such as memristors. Memristors are non-volatile, nanoscale devices that can be used to build memory arrays with very high density (extending Moore's law). Memristors can also be used to perform stateful logic operations, where the same devices are used for both logic and memory, enabling in-memory logic. In other words, memristor-based stateful logic enables a new computing paradigm that combines calculation and memory units (versus the von Neumann architecture, which separates them). This reduces the delays between processor and memory by eliminating redundant reloading of reusable values. In addition, memristors consume little power and can therefore decrease the large power dissipation of silicon chips hitting their size limit. The primary focus of this research is to develop circuit implementations for logic computations based on memristors. These implementations significantly improve the performance and decrease the power consumption of digital circuits. This dissertation demonstrates in-memory computing using novel memristive logic gates, which we call volistors (voltage-resistor gates). Volistors capitalize on rectifying memristors, i.e., memristors with diode-like behavior, and use voltage at the inputs and resistance at the output. In addition, programmable diode gates, another type of logic gate implemented with rectifying memristors, are proposed. In programmable diode gates, memristors are used only as switches (unlike volistor gates, which utilize both the memory and the switching characteristics of the memristors). Programmable diode gates can be combined with CMOS gates to increase logic density. As an example, a circuit implementation for calculating logic functions in generalized ESOP (Exclusive-OR-Sum-of-Products) form and in multilevel XOR networks is described. As opposed to stateful logic gates, a combination of the two proposed logic styles decreases the power and improves the performance of digital circuits realizing two-level Sum-of-Products or Product-of-Sums functions. This dissertation also proposes a general 3-dimensional circuit architecture for in-memory computing, consisting of a number of stacked crossbar arrays that can all be used simultaneously for logic computing. These arrays communicate through CMOS peripheral circuits.
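    To make the generalized ESOP form concrete, here is a minimal software evaluation of an Exclusive-OR-Sum-of-Products expression; the function f and its variable names are hypothetical examples, and the sketch only shows the Boolean form targeted by the proposed gates, not their circuit realization.

```python
def eval_esop(terms, assignment):
    """Evaluate an ESOP expression: the XOR of AND (product) terms.

    Each term maps a variable name to its required polarity
    (True = positive literal, False = negated literal).
    """
    result = 0
    for term in terms:
        product = all(bool(assignment[v]) == polarity for v, polarity in term.items())
        result ^= int(product)
    return result

# Hypothetical function f = (x1 AND x2) XOR x3 XOR (x1 AND NOT x3)
f = [{"x1": True, "x2": True}, {"x3": True}, {"x1": True, "x3": False}]
print(eval_esop(f, {"x1": 1, "x2": 0, "x3": 1}))  # 0 XOR 1 XOR 0 = 1
print(eval_esop(f, {"x1": 1, "x2": 1, "x3": 0}))  # 1 XOR 0 XOR 1 = 0
```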

    Computers from plants we never made. Speculations

    We discuss possible designs and prototypes of computing systems that could be based on the morphological development of roots, the interaction of roots, analog electrical computation with plants, and plant-derived electronic components. In morphological plant processors, data are represented by the initial configuration of roots and the configuration of sources of attractants and repellents; the results of computation are represented by the topology of the root network. Computation is implemented by the roots following gradients of attractants and repellents, as well as by the roots interacting with each other. Problems solvable by plant roots, in principle, include the shortest path, minimum spanning tree, Voronoi diagram, α-shapes, and convex subdivision of concave polygons. The electrical properties of plants can be modified by loading the plants with functional nanoparticles or coating parts of the plants with conductive polymers. Thus, we are in a position to make living variable resistors, capacitors, operational amplifiers, multipliers, potentiometers, and fixed-function generators. The electrically modified plants can implement summation, integration with respect to time, inversion, multiplication, exponentiation, logarithm, and division. Mathematical and engineering problems to be solved can be represented in plant root networks of resistive or reaction elements. Developments in plant-based computing architectures will trigger the emergence of a unique community of biologists, electronics engineers, and computer scientists working together to produce the living electronic devices that future green computers will be made of.
    Comment: The chapter will be published in "Inspired by Nature. Computing inspired by physics, chemistry and biology. Essays presented to Julian Miller on the occasion of his 60th birthday", Editors: Susan Stepney and Andrew Adamatzky (Springer, 2017).
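    One way to picture how competing root growth could compute a Voronoi diagram is a multi-source breadth-first "growth" simulation: fronts expand at equal speed from each seed, and the cells each front claims approximate the Voronoi regions (in a discrete, 4-neighbour metric). This is only an algorithmic analogy for the root-based computation speculated on above; the grid size and seed positions are arbitrary.

```python
from collections import deque

def voronoi_by_growth(width, height, seeds):
    """Grow fronts from each seed at equal speed; each grid cell is claimed
    by the front (seed index) that reaches it first, tracing a discrete
    Voronoi partition under the Manhattan metric."""
    owner = {}
    queue = deque()
    for label, cell in enumerate(seeds):
        owner[cell] = label
        queue.append(cell)
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in owner:
                owner[nxt] = owner[(x, y)]
                queue.append(nxt)
    return owner

# Two hypothetical attractant sources ("seeds") on a 6x4 grid.
cells = voronoi_by_growth(6, 4, [(0, 0), (5, 3)])
for y in range(4):
    print("".join(str(cells[(x, y)]) for x in range(6)))
```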