
    Reliable chip design from low powered unreliable components

    Get PDF
    The pace of technological improvement in the semiconductor market is driven by Moore's Law, which enables chip transistor density to double every two years. Transistors continue to decline in cost and size, but chip power density increases. Continuous transistor scaling and extremely low power constraints in modern Very Large Scale Integrated (VLSI) chips can potentially supersede the benefits of technology shrinking due to reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, bringing higher levels of variability, performance degradation, and higher rates of manufacturing defects. Soft errors, which traditionally affected only memories, now also degrade the reliability of logic circuits. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis.

    To guide the reliability-driven synthesis and optimization of combinational circuits, a highly accurate probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm, which employs local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of the Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching driven by the reliability-aware optimization algorithm are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 microcontroller functional units indicate that the proposed framework can achieve up to 75% reduction of output error probability. On average, about 14% Soft Error Rate (SER) reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption.

    The next contribution of the research is a novel methodology for designing fault-tolerant circuitry by employing error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze the circuit reliability issue from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission over unreliable hardware is an increasing priority. The idea of CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Development (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises the novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios. On average, about 1000 times improvement in SER reduction is achieved.
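    The CPEP algorithm itself is not specified in this abstract. As a hedged illustration of probability-based reliability estimation of the kind described above, the following minimal Python sketch computes the exact output error probability of a tiny gate-level netlist under uniformly random inputs, assuming each gate flips its output independently with probability eps; the circuit, the names and the fault model are illustrative, not the thesis's formulation.

        from itertools import product

        # Tiny illustrative netlist, topologically ordered:
        # out = NAND(AND(a, b), OR(b, c))
        GATES = [("g1", "AND", ("a", "b")),
                 ("g2", "OR", ("b", "c")),
                 ("out", "NAND", ("g1", "g2"))]
        OPS = {"AND": lambda x, y: x & y,
               "OR": lambda x, y: x | y,
               "NAND": lambda x, y: 1 - (x & y)}

        def evaluate(inputs, faulty):
            """Evaluate the netlist; gates listed in `faulty` flip their output."""
            values = dict(inputs)
            for name, op, ins in GATES:
                v = OPS[op](*(values[i] for i in ins))
                values[name] = v ^ (1 if name in faulty else 0)
            return values["out"]

        def output_error_probability(eps=0.01):
            """Exact P(faulty output != fault-free output), uniform inputs,
            each gate failing independently with probability eps."""
            names = [g[0] for g in GATES]
            total = 0.0
            for a, b, c in product((0, 1), repeat=3):       # 8 input vectors
                inp = {"a": a, "b": b, "c": c}
                golden = evaluate(inp, frozenset())
                for k in range(1 << len(names)):            # all fault subsets
                    faulty = frozenset(n for i, n in enumerate(names)
                                       if k >> i & 1)
                    p = 1.0
                    for n in names:
                        p *= eps if n in faulty else 1 - eps
                    if evaluate(inp, faulty) != golden:
                        total += p / 8
            return total

        print(f"output error probability = {output_error_probability():.4f}")

    For three gates this brute-force enumeration over all fault subsets is exact but exponential; the "conditional probabilistic error propagation" in CPEP's name suggests it instead propagates error probabilities through the netlist directly, which is what would make it usable as a guide during synthesis.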
    The last part of the research is an Inverse Gaussian Distribution (IGD) based delay model applicable to both combinational and sequential elements in sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs the parameters extracted from this characterization, and its delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach matches HSPICE Monte Carlo simulation results closely, with average errors of less than 1.9% for the 8-bit Ripple Carry Adder (RCA) and 1.2% for the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX).
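    For reference, the standard Inverse Gaussian probability density on which an IGD delay model is built is, in LaTeX form (with mean parameter \mu > 0 and shape parameter \lambda > 0; how the thesis extracts these parameters from gate characterization is not detailed in the abstract):

        f(t; \mu, \lambda) = \sqrt{\frac{\lambda}{2 \pi t^{3}}}
            \exp\!\left( - \frac{\lambda (t - \mu)^{2}}{2 \mu^{2} t} \right),
        \qquad t > 0

    The distribution has mean \mu and variance \mu^{3}/\lambda; its strictly positive support and pronounced right tail make it a plausible candidate for modeling gate delays under reduced supply voltages.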

    Readiness of Quantum Optimization Machines for Industrial Applications

    Full text link
    There have been multiple attempts to demonstrate that quantum annealing, and in particular quantum annealing on quantum annealing machines, has the potential to outperform current classical optimization algorithms implemented on CMOS technologies. The benchmarking of these devices has been controversial. Initially, random spin-glass problems were used; however, these were quickly shown to be ill-suited to detecting any quantum speedup. Benchmarking subsequently shifted to carefully crafted synthetic problems designed to highlight the quantum nature of the hardware while (often) ensuring that classical optimization techniques do not perform well on them. Worse still, to date a true sign of improved scaling with the number of problem variables remains elusive when compared to classical optimization techniques. Here, we analyze the readiness of quantum annealing machines for real-world application problems. These are typically not random and have an underlying structure that is hard to capture in synthetic benchmarks, thus posing unexpected challenges for optimization techniques, classical and quantum alike. We present a comprehensive computational scaling analysis of fault diagnosis in digital circuits, considering architectures beyond D-Wave quantum annealers. We find that the instances generated from real data in multiplier circuits are harder than other representative random spin-glass benchmarks with a comparable number of variables. Although our results show that transverse-field quantum annealing is outperformed by state-of-the-art classical optimization algorithms, these benchmark instances are hard yet small in input size, and therefore represent the first industrial application ideally suited for testing near-term quantum annealers and other quantum algorithmic strategies for optimization problems.
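    The benchmark instances are not reproduced here. As a generic illustration of the Ising-model optimization problems on which annealers and classical heuristics are compared, the Python sketch below evaluates an Ising energy and runs single-spin-flip simulated annealing on a small random spin glass; the size, couplings and schedule are invented for illustration.

        import math
        import random

        def ising_energy(spins, J, h):
            """E(s) = -sum_(i<j) J_ij s_i s_j - sum_i h_i s_i, s_i in {-1,+1}."""
            e = -sum(hi * si for hi, si in zip(h, spins))
            for (i, j), Jij in J.items():
                e -= Jij * spins[i] * spins[j]
            return e

        def anneal(n, J, h, sweeps=5000, t_hot=3.0, t_cold=0.05, seed=0):
            """Single-spin-flip simulated annealing, geometric temperature schedule."""
            rng = random.Random(seed)
            s = [rng.choice((-1, 1)) for _ in range(n)]
            for k in range(sweeps):
                T = t_hot * (t_cold / t_hot) ** (k / (sweeps - 1))
                i = rng.randrange(n)
                # Local field on spin i determines the cost of flipping it.
                field = h[i] + sum(Jij * s[b if a == i else a]
                                   for (a, b), Jij in J.items() if i in (a, b))
                dE = 2 * s[i] * field
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i] = -s[i]
            return s, ising_energy(s, J, h)

        # A small random +/-1 spin glass as a stand-in for a benchmark instance.
        rng = random.Random(42)
        n = 16
        J = {(i, j): rng.choice((-1.0, 1.0))
             for i in range(n) for j in range(i + 1, n) if rng.random() < 0.3}
        h = [0.0] * n
        spins, energy = anneal(n, J, h)
        print("final energy:", energy)

    Fault-diagnosis instances like those studied above have the same Ising form, but with couplings derived from circuit structure rather than drawn at random, which is precisely the structure the synthetic benchmarks fail to capture.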

    In-Memory Computing by Using Nano-ionic Memristive Devices

    Get PDF
    As CMOS reaches its scaling limits under Moore's law, and given the increasing disparity between processing-unit and memory performance, the quest continues for a suitable alternative to the conventional technology. The recently discovered two-terminal element, the memristor, is believed to be one of the most promising candidates for future very large scale integrated systems. This thesis comprises two main parts: (Part I) modeling memristor devices, and (Part II) memristive computing. The first part is presented in one chapter, and the second part of the thesis contains five chapters. The basics and fundamentals of memristor functionality and memristive computing are presented in the introduction chapter. A brief outline of the two main parts is as follows:

    Part I: Modeling. This part presents an accurate model based on the charge transport mechanisms of nano-ionic memristor devices. The main current mechanisms in metal/insulator/metal (MIM) structures are assessed, a physics-based model is proposed, and a SPICE model is presented and tested for four different fabricated devices. An accuracy comparison is performed across various models for a fabricated Ag/TiO2/ITO device, and the functionality of the model is tested for various input signals.

    Part II: Memristive computing. Memristive computing is about utilizing memristors to perform computational tasks. This part of the thesis is divided into neuromorphic, analog and digital computing schemes with memristor devices.

    Neuromorphic computing: Two chapters of this thesis concern biologically inspired memristive neural networks using an STDP-based learning mechanism. Memristive implementations of two well-known spiking neuron models, Hodgkin-Huxley and Morris-Lecar, are assessed and utilized in the proposed memristive network. The synaptic connections in this design are also memristor devices. Unsupervised pattern classification tasks are performed to verify the correct functionality of the system.

    Analog computing: The memristor has an analog memory property, as it can be programmed to different memristance values. A novel memristive analog adder is designed using the Continuous Valued Number System (CVNS) scheme; its circuit comprises addition and modulo blocks. The proposed analog adder design is explained and its functionality is tested for various numbers. It is shown that the CVNS scheme is compatible with memristive design and that the environment resolution can be adjusted by the memristance ratio of the memristor devices.

    Digital computing: Two chapters are dedicated to digital computing. In the first, a development of IMPLY-based logic with memristors is provided to implement a 4:2 compressor circuit. In the second, a novel resistive logic method is developed over a mirrored memristive crossbar platform. Different logic gates are designed with the proposed memristive logic method, and simulations in Cadence are provided to prove the functionality of the logic. The logic implementation over mirrored memristive crossbars is also assessed.
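    The compressor construction is not given in the abstract. As a minimal sketch of the IMPLY primitive on which memristive stateful logic is typically built, the Python below models material implication as an operation that overwrites the state of a target memristor, and composes NAND from two IMPLY steps plus a work memristor initialised to 0; this is the standard material-implication construction, not necessarily the thesis's exact circuit.

        def imply(p, q):
            """Material implication p -> q = (NOT p) OR q. In memristive
            stateful logic the result overwrites the memristor holding q."""
            return (1 - p) | q

        def nand(p, q):
            """NAND from two IMPLY steps plus a work memristor reset to 0."""
            s = 0               # FALSE: work memristor initialised to logic 0
            s = imply(q, s)     # s = NOT q
            s = imply(p, s)     # s = (NOT p) OR (NOT q) = NAND(p, q)
            return s

        for p in (0, 1):
            for q in (0, 1):
                print(p, q, "->", nand(p, q))

    Because NAND is functionally complete, any combinational block, such as a 4:2 compressor, can in principle be sequenced from these two operations.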

    Unconventional programming: non-programmable systems

    Get PDF
    Unconventional and natural computing research promises controlled information-processing in uncommon media, for example on the molecular scale or in bacterial colonies. Promising aspects of such systems are often their non-linear behavior and the high connectivity of the involved information-processing components, in analogy to neurons in the nervous system. Unfortunately, such properties make the system behavior hard to understand, hard to predict and thus also hard to program with common engineering principles like modularization and composition, leading to the term non-programmable systems. In contrast to many unconventional computing works, which often focus on finding novel computing substrates and potential applications, unconventional programming approaches for such systems are the theme of this thesis: how can new programming concepts open up new perspectives for unconventional, but hopefully also for traditional digital, computing systems? Mostly based on a model of artificial wet chemical neurons, different unconventional programming approaches drawing on evolutionary algorithms, information theory, self-organization and self-assembly are explored. Particular emphasis is given to the problem of symbol encodings: often there are multiple or even an unlimited number of possibilities to encode information in the phase space of a dynamical system, e.g. spike frequencies or population coding in neural networks. Different encodings will probably prove differently useful, depending on the system's properties, the information-processing task and the desired connectivity to other systems. Hence, methods are investigated that can evaluate and analyse symbol encoding schemes as well as identify suitable new ones.
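    The thesis's evaluation methods are only named in the abstract. One standard information-theoretic criterion for comparing symbol encodings is the mutual information between the input symbol and the observed system response; the Python sketch below estimates it for an invented noisy spike-count code (both the encoding and the noise model are purely illustrative).

        import math
        import random
        from collections import Counter

        def mutual_information(pairs):
            """Plug-in estimate of I(X;Y) in bits from (symbol, response) samples."""
            n = len(pairs)
            pxy = Counter(pairs)
            px = Counter(x for x, _ in pairs)
            py = Counter(y for _, y in pairs)
            return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                       for (x, y), c in pxy.items())

        # Invented encoding: symbol k in {0, 1, 2, 3} is read out as a noisy
        # spike count near 2k, so neighbouring symbols partially overlap.
        rng = random.Random(0)
        samples = [(k, 2 * k + rng.choice((-1, 0, 0, 1)))
                   for _ in range(5000) for k in (0, 1, 2, 3)]
        print(f"I(symbol; response) = {mutual_information(samples):.3f} bits")

    Comparing such scores across candidate encodings is one way to rank them; the ceiling here is log2 of the number of symbols, i.e. 2 bits.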

    Evolving Graphs by Graph Programming

    Get PDF
    Graphs are a ubiquitous data structure in computer science and can be used to represent solutions to difficult problems in many distinct domains. This motivates the use of Evolutionary Algorithms to search over graphs and efficiently find approximate solutions. However, existing techniques often represent and manipulate graphs in an ad hoc manner. In contrast, rule-based graph programming offers a formal mechanism for describing relations over graphs. This thesis proposes the use of rule-based graph programming for representing and implementing genetic operators over graphs. We present the Evolutionary Algorithm Evolving Graphs by Graph Programming, together with a number of its extensions, which are capable of learning stateful and stateless digital circuits, symbolic expressions and Artificial Neural Networks. We demonstrate that rule-based graph programming may be used to implement new and effective constraint-respecting mutation operators, and show that these operators may strictly generalise others found in the literature. Through our proposal of Semantic Neutral Drift, we accelerate the search process by building plateaus into the fitness landscape using domain knowledge of equivalence. We also present Horizontal Gene Transfer, a mechanism whereby graphs may be passively recombined without disrupting their fitness. Through rigorous evaluation and analysis of over 20,000 independent executions of Evolutionary Algorithms, we establish numerous benefits of our approach. We find that on many problems, Evolving Graphs by Graph Programming and its variants may significantly outperform other approaches from the literature. Additionally, our empirical results provide further evidence that neutral drift aids the efficiency of evolutionary search.
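    The rule-based operators themselves are not shown in the abstract. As a generic illustration of a constraint-respecting mutation over a circuit-like graph, the Python sketch below redirects one input edge of a randomly chosen node while preserving acyclicity, the kind of invariant (here, feedforwardness) such operators are designed to maintain; the representation is illustrative rather than the thesis's graph-programming notation.

        import random

        def ancestors(graph, node):
            """All nodes reachable backwards from `node` along input edges."""
            seen, stack = set(), [node]
            while stack:
                for u in graph[stack.pop()]:
                    if u not in seen:
                        seen.add(u)
                        stack.append(u)
            return seen

        def mutate_edge(graph, rng):
            """Redirect one input of a random gate, keeping the graph acyclic."""
            v = rng.choice([g for g in graph if graph[g]])  # node with inputs
            k = rng.randrange(len(graph[v]))                # which input edge
            # A new source is legal iff it does not depend on v (no cycle).
            legal = [u for u in graph if u != v and v not in ancestors(graph, u)]
            graph[v][k] = rng.choice(legal)
            return graph

        # graph[v] lists the input sources of v; 'a' and 'b' are primary inputs.
        circuit = {"a": [], "b": [], "g1": ["a", "b"],
                   "g2": ["g1", "b"], "out": ["g1", "g2"]}
        print(mutate_edge(circuit, random.Random(3)))

    Rejecting cycle-creating rewirings up front, rather than repairing offspring afterwards, is the design choice that makes such operators constraint-respecting by construction.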

    Design of Discrete-time Chaos-Based Systems for Hardware Security Applications

    Get PDF
    The security of electronic systems has become a major concern with the advance of technology. Researchers propose new security solutions every day in order to meet the area, power and performance specifications of such systems, yet the additional circuitry required for security purposes can consume significant area and power. This work proposes a solution which utilizes discrete-time chaos-based logic gates to build a system that addresses multiple hardware security issues. The nonlinear dynamics of chaotic maps is leveraged to build a system that mitigates IC counterfeiting, IP piracy and overbuilding, disables hardware Trojan insertion, and enables authentication of connecting devices (such as IoT and mobile devices). Chaos-based systems are also used to generate pseudo-random numbers for cryptographic applications.

    The chaotic map is the building block for the design of the discrete-time chaos-based oscillator. The analog output of the oscillator is converted to a digital value using a comparator in order to build logic gates. The logic gate is reconfigurable, since different parameters in the circuit topology can be altered to implement multiple Boolean functions using the same system. The tuning parameters are the control input, the bifurcation parameter, the iteration number and the threshold voltage of the comparator. The proposed system is a hybrid between standard CMOS logic gates and reconfigurable chaos-based logic gates, in which selected original gates are replaced by chaos-based gates. The system works in two modes: logic locking and authentication. In logic locking mode, the goal is to ensure that the system achieves logic obfuscation in order to mitigate IC counterfeiting. The secret key for logic locking is made up of the tuning parameters of the chaotic oscillator. Each gate has a 10-bit key, which ensures a large key space and exponentially increases the computational complexity of any attack. In authentication mode, the aim of the system is to provide authentication of devices so that adversaries cannot connect to devices to learn confidential information. The chaos-based computing system is susceptible to process variation, which can be leveraged to build a chaos-based Physical Unclonable Function (PUF). The proposed system demonstrates near-ideal PUF characteristics, which means that systems with a large number of primary outputs can be used for authenticating devices.
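    The oscillator circuit is not detailed in the abstract. As a hedged sketch of the underlying chaos-computing idea, the Python below encodes the two logic inputs into the initial state of a chaotic map (here the logistic map, standing in for the thesis's discrete-time oscillator), iterates it, and thresholds the result with a comparator; a grid search over tuning parameters analogous to those named above (offset, coupling step, iteration number, threshold) finds settings that realise different Boolean functions from the same dynamics. All numeric ranges are invented for illustration.

        import itertools

        def logistic(x, r=4.0):
            """Chaotic map standing in for the discrete-time oscillator."""
            return r * x * (1 - x)

        def gate_output(p, q, x0, delta, n, threshold):
            """Encode inputs into the initial state, iterate, then compare."""
            x = x0 + (p + q) * delta           # control input encodes p, q
            for _ in range(n):                 # iteration number
                x = logistic(x)
            return 1 if x > threshold else 0   # comparator

        def find_parameters(truth_table):
            """Grid-search tuning parameters realising a Boolean function."""
            for x0, delta, n, th in itertools.product(
                    [i / 50 for i in range(1, 30)],     # initial offset
                    [i / 100 for i in range(1, 20)],    # input coupling step
                    range(1, 6),                        # iterations
                    [i / 100 for i in range(1, 100)]):  # threshold
                if x0 + 2 * delta >= 1:
                    continue                            # stay inside [0, 1]
                if all(gate_output(p, q, x0, delta, n, th) == out
                       for (p, q), out in truth_table.items()):
                    return x0, delta, n, th
            return None

        NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
        XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
        print("NAND parameters:", find_parameters(NAND))
        print("XOR parameters:", find_parameters(XOR))

    The same dynamics realises NAND or XOR depending only on the chosen parameters; it is this reconfigurability that lets the parameter settings double as a secret key for logic locking.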

    Computation with photochromic memory

    Get PDF
    Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation and data storage. This includes attempts to embed computation into biological systems, which could allow the observation and modification of living processes. This thesis explores the storage and computational capabilities of a biocompatible, light-sensitive (photochromic) molecular switch (NitroBIPS) that has the potential to be embedded into both natural and synthetic biological systems. To achieve this, NitroBIPS was embedded in a polydimethylsiloxane (PDMS) polymer matrix, and an optomechanical setup was built in order to expose the sample to optical stimulation and record fluorescent emission. NitroBIPS has two stable forms, one fluorescent and one non-fluorescent, and can be switched between the two via illumination with ultraviolet or visible light. By exposing NitroBIPS samples to specific stimulus pulse sequences and recording the intensity of fluorescence emission, data could be stored in registers, and logic gates and circuits could be implemented. In addition, by moving the area of illumination, sub-regions of the sample could be addressed. This enabled parallel registers, Turing machine tapes and elementary cellular automata to be implemented. It has been demonstrated, therefore, that photochromic molecular memory can be used to implement conventional universal computation in an unconventional manner. Furthermore, because the registers, Turing machine tapes, logic gates, logic circuits and elementary cellular automata all utilise the same samples and the same hardware, it has been shown that photochromic computational devices can be dynamically repurposed. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes, including inhibiting protein binding, perturbing lipid membranes and binding to DNA in a manner that depends on the molecule's form. The implementation of universal computation demonstrated in this thesis could therefore be used in combination with these biological manipulations, either as a key component within synthetic biology systems or in order to monitor and control natural biological processes.
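    As an idealised model of the storage mechanism described above, the Python sketch below treats each addressable sub-region as a two-state photochromic element: a UV pulse switches it to the fluorescent form (write 1), a visible pulse switches it back (write 0), and a read returns the fluorescence observed. Real samples exhibit partial conversion and photo-fatigue, which this toy model ignores.

        class PhotochromicRegister:
            """Idealised photochromic memory: each addressable sub-region is a
            two-state switch read out through its fluorescence."""

            def __init__(self, n_regions):
                self.fluorescent = [False] * n_regions

            def pulse(self, region, wavelength):
                if wavelength == "UV":        # UV drives the fluorescent form
                    self.fluorescent[region] = True
                elif wavelength == "VIS":     # visible light switches it back
                    self.fluorescent[region] = False

            def read(self, region):
                return 1 if self.fluorescent[region] else 0

        reg = PhotochromicRegister(4)   # parallel registers via addressing
        reg.pulse(0, "UV")
        reg.pulse(2, "UV")
        reg.pulse(2, "VIS")
        print([reg.read(i) for i in range(4)])   # -> [1, 0, 0, 0]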

    Processes and diagrams: an integrated and multidisciplinary approach for the education of quantum information science

    Get PDF
    The background to this thesis is παιδεία: education. To educate is a dialectical process that moves from an abstract line of thought, through scientifically designed techniques, into concrete action, and vice versa. We believe that educating today means enabling teachers first, and their students second, to read and interpret the complexity of phenomena; to teach them a model for observing this complexity, describing it, analyzing it and, finally, making it their own. In this thesis, we attempt to meet these needs by describing an integrated and multidisciplinary pathway whose diagrammatic language pushes towards the search for a universal approach to science. An initial educational contribution is thus made to the understanding of the dialectic between disciplines: theoretical physics, experimental physics, computer science, mathematics and mathematical logic are presented in their mutual influence, in an attempt to clarify the informational viewpoint on modern physics. The search for this dialectic for educational purposes is, in our opinion, the most significant contribution of the present work. To address this issue, we sought to build a community of practice around the topics of the second quantum revolution. Guided by the Model of Educational Reconstruction (MER), we built a first course for teacher professional development intended to introduce teachers to quantum computation and quantum communication. The emergence and development of quantum technologies provides the impetus for a deep conceptual change: “a paradigm shift from quantum theory as a theory of microscopic matter to quantum theory as a framework for technological applications and information processing”. This shift is supported, theoretically, by the informational interpretation of the postulates of quantum mechanics: preparation, transformation and measurement are reinterpreted computationally as the encoding, processing and decoding of information, and vice versa. In this interpretation, what changes between classical and quantum theory? From a logical point of view, the transition from bit to qubit; from a physical point of view, the laws of composition of systems. We therefore present monoidal categories as a natural theoretical framework for the description of physical systems and processes for quantum and non-quantum computation and communication, demonstrating how this language is suitable for an integrated and multidisciplinary approach. The cultural impact of the proposal, the fruitful interaction between researchers in physics education and those in theoretical research, and the passion of some teachers made it possible to start a collaboration to build an educational sequence for students. The result of this collaboration is a teaching-learning sequence on quantum technologies for students, led by the MER and based on inquiry-based learning and modelling-based teaching. Supported by these methodological frameworks, we produced lessons and worksheets along the way that had the dual task of supporting teachers' work and students' learning. They also made it possible to experimentally verify the positive and critical effects of the proposal. The instructional materials constructed, the data analysis and the constant monitoring with the teachers involved led to the development of a second course for teacher professional development, inspired by the first and based entirely on research.
    We hope that this attempt at an integrated and multidisciplinary approach to the education of quantum information science, based on the concept of compositionality and the diagrammatic model, can be built upon and provide inspiration for future educational paths in other disciplines as well.
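    The change in the law of composition mentioned above can be made concrete in a few lines: quantum systems compose in parallel by the tensor product (so state-space dimension multiplies), processes compose sequentially by matrix multiplication, and composite states need not factor into states of the parts. A minimal numpy sketch, assuming nothing from the thesis beyond standard linear algebra:

        import numpy as np

        # Single-qubit computational basis states.
        ket0 = np.array([1.0, 0.0])
        ket1 = np.array([0.0, 1.0])

        # Parallel composition of systems = tensor product (dimensions multiply):
        # two qubits live in a 4-dimensional space, not a 2+2-dimensional one.
        ket00 = np.kron(ket0, ket0)

        # Sequential composition of processes = matrix product.
        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard
        CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                         [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

        # A composite state that factors into no tensor product of parts:
        # the Bell state, the signature of specifically quantum composition.
        bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
        state = CNOT @ np.kron(H @ ket0, ket0)
        print(np.allclose(state, bell))   # True

    Both composition operations, kron for parallel and matrix product for sequential, are exactly the two compositions a monoidal category axiomatises, which is why that language fits quantum and classical processes alike.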