11 research outputs found

    Simulation and implementation of novel deep learning hardware architectures for resource constrained devices

    Get PDF
    Corey Lammie designed mixed-signal memristive-complementary metal–oxide–semiconductor (CMOS) and field-programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of Deep Learning (DL) systems during both inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems.

    Organic electrochemical networks for biocompatible and implantable machine learning: Organic bioelectronic beyond sensing

    Get PDF
    How can the brain be such a good computer? Part of the answer lies in the astonishing number of neurons and synapses that process electrical impulses in parallel. Part of it must be found in the ability of the nervous system to evolve in response to external stimuli and to grow, sharpen, and depress synaptic connections. However, we are far from understanding even the basic mechanisms that allow us to think, be aware, recognize patterns, and imagine. The brain can do all this while consuming only around 20 watts, out-competing any human-made processor in terms of energy efficiency. This question is of particular interest in a historical era and technological stage where phrases like machine learning and artificial intelligence are ever more widespread, thanks to recent advances in the field of computer science. However, brain-inspired computation today still relies on algorithms that run on traditional silicon-based digital processors. By contrast, building brain-like hardware, in which the substrate itself performs the computation and can dynamically update its electrical pathways, remains challenging. In this work, I employed organic semiconductors that operate in electrolytic solutions, called organic mixed ionic-electronic conductors (OMIECs), to build hardware capable of computation. Moreover, by exploiting an electropolymerization technique, I could form conducting connections in response to electrical spikes, in analogy to how synapses evolve when a neuron fires. After demonstrating artificial synapses as a potential building block for neuromorphic chips, I shifted my attention to the implementation of such synapses in fully operational networks. In doing so, I borrowed the mathematical framework of a machine learning approach known as reservoir computing, which allows computation with random (neural) networks.
    I capitalized on this work by demonstrating that such networks can be used in vivo for the recognition and classification of dangerous and healthy heartbeats. This is the first demonstration of machine learning carried out in a biological environment with a biocompatible substrate. The implications of this technology are straightforward: constant monitoring of biological signals and fluids, accompanied by active recognition of malignant patterns, may lead to a timely, targeted, and early diagnosis of potentially fatal conditions. Finally, in attempting to simulate the random neural networks, I faced difficulties in modeling the devices with state-of-the-art approaches. I therefore explored a new way to describe OMIECs and OMIEC-based devices, starting from thermodynamic axioms. The results of this model shed light on the mechanism behind the operation of organic electrochemical transistors, revealing the importance of the entropy of mixing and suggesting new pathways for device optimization for targeted applications.
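The reservoir-computing framework mentioned above trains only a linear readout on top of a fixed random network. A minimal software sketch of the idea (an echo state network in NumPy; the sizes, input signal, and task here are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res, n_steps = 1, 100, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

u = np.sin(np.linspace(0, 8 * np.pi, n_steps)).reshape(-1, 1)  # input signal
target = np.roll(u, -1, axis=0)                # task: predict the next sample

# Drive the reservoir; its weights are never trained.
x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Only the linear readout is trained, here by ridge regression.
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ target)
pred = states @ W_out
```

The appeal for unconventional hardware is that the random recurrent part never needs to be programmed precisely, so a physical network with uncontrolled but rich dynamics can play that role.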

    Nanoparticle devices for brain-inspired computing.

    Get PDF
    The race towards smarter and more efficient computers is at the core of our technology industry and is driven by the rise of ever more complex computational tasks. However, due to limitations such as increasing costs and the inability to keep shrinking conventional computer chips indefinitely, novel hardware architectures are needed. Brain-inspired, or neuromorphic, hardware has attracted great interest over the last decades. The human brain can easily carry out a multitude of tasks such as pattern recognition, classification, abstraction, and motor control with high efficiency and extremely low power consumption. Therefore, it seems logical to take inspiration from the brain to develop new systems and hardware that can perform interesting computational tasks faster and more efficiently. Devices based on percolating nanoparticle networks (PNNs) have shown many features that are promising for the creation of low-power neuromorphic systems. PNN devices exhibit many emergent brain-like properties and complex electrical activity under stimulation. However, so far PNNs have been studied using simple two-contact devices and relatively slow measuring systems. This limits the capabilities of PNNs for computing applications, and questions such as whether the brain-like properties persist at faster timescales, or what the operational limits of PNN devices are, remain unanswered. This thesis explores the design, fabrication, and testing of the first successful multi-contact PNN devices. A novel and simple fabrication technique for the creation of working electrical contacts to nanoparticle networks is presented. Extensive testing of the multi-contact PNN devices demonstrated that electrical stimulation of multiple input contacts leads to complex switching activity, with different patterns of switching behaviour in which events occur on all contacts, on a few contacts, or only on a single contact.
    The device behaviour is investigated for the first time at microsecond timescales, and it is found that PNNs exhibit stochastic spiking behaviour that originates in single tunnel gaps and is strikingly similar to that observed in biological neurons. The stochastic spiking behaviour of PNNs is then used for the generation of high-quality random numbers, which are fundamental for encryption and security. Together, the results presented in this thesis pave the way for the use of PNNs for brain-inspired computing and secure information processing.
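The last step, turning stochastic spiking into random bits, can be illustrated in software. A hedged sketch assuming Poisson-like spike statistics (the actual PNN bit-extraction scheme may differ): compare consecutive inter-spike intervals to get raw bits, then apply von Neumann debiasing to remove any residual bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated inter-spike intervals; exponential, as for a Poisson process.
intervals = rng.exponential(scale=1.0, size=20000)

# Raw bit per pair of intervals: 1 if the first is longer, else 0.
raw = (intervals[0::2] > intervals[1::2]).astype(int)

# Von Neumann debiasing: keep 01 -> 0 and 10 -> 1, discard 00 and 11.
pairs = raw[: len(raw) // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]
bits = pairs[keep, 0]
```

Von Neumann's trick yields unbiased bits from any source of independent, identically biased bits, at the cost of discarding at least half of the raw stream.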

    Algorithm and Hardware Co-design for Learning On-a-chip

    Get PDF
    Machine learning technology has achieved many incredible results in recent years. It has rivalled or exceeded human performance in many intellectual tasks including image recognition, face detection, and the game of Go. Many machine learning algorithms require huge amounts of computation, such as the multiplication of large matrices. As silicon technology has scaled into the sub-14nm regime, simply scaling down the device can no longer provide sufficient speed-up. New device technologies and system architectures are needed to improve computing capacity, and designing specific hardware for machine learning is in high demand. Efforts need to be made on a joint design and optimization of both hardware and algorithms. For machine learning acceleration, traditional SRAM- and DRAM-based systems suffer from low capacity, high latency, and high standby power. Instead, emerging memories, such as Phase Change Random Access Memory (PRAM), Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM), and Resistive Random Access Memory (RRAM), are promising candidates providing low standby power, high data density, fast access, and excellent scalability. This dissertation proposes a hierarchical memory modeling framework and models PRAM and STT-MRAM at four different levels of abstraction. With the proposed models, various simulations are conducted to investigate performance, optimization, variability, reliability, and scalability. Emerging memory devices such as RRAM can work as a 2-D crosspoint array to speed up the multiplication and accumulation in machine learning algorithms. This dissertation proposes a new parallel programming scheme to achieve in-memory learning with an RRAM crosspoint array. The programming circuitry is designed and simulated in TSMC 65nm technology, showing a 900X speedup for the dictionary learning task compared to CPU performance.
    From the algorithm perspective, inspired by the high accuracy and low power of the brain, this dissertation proposes a bio-plausible feedforward-inhibition spiking neural network with a Spike-Rate-Dependent Plasticity (SRDP) learning rule. It achieves more than 95% accuracy on the MNIST dataset, which is comparable to the sparse coding algorithm but requires far fewer computations. The role of inhibition in this network is systematically studied and shown to improve the hardware efficiency of learning.
    Doctoral Dissertation, Electrical Engineering, 201
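The speed-up from the crosspoint array comes from physics performing the multiply-accumulate: Ohm's law gives each cell's current I = G·V, and Kirchhoff's current law sums the currents along each column line. A toy numerical sketch of that analog matrix-vector product (the conductance and voltage values are illustrative, not from the dissertation):

```python
import numpy as np

# Cell conductances of a 2-row x 3-column crosspoint array (siemens).
G = np.array([[1e-6, 5e-6, 2e-6],
              [4e-6, 1e-6, 3e-6]])

# Voltages applied to the row lines (volts).
V = np.array([0.2, 0.5])

# Each column current is the sum of G_ij * V_i over its rows:
# one full matrix-vector product, computed in a single physical step.
I_col = V @ G
```

In the physical array all column currents settle simultaneously, which is why the analog approach can beat a CPU that must perform the multiplications sequentially.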

    The Fuzziness in Molecular, Supramolecular, and Systems Chemistry

    Get PDF
    Fuzzy logic is a good model for the human ability to compute with words. It is based on the theory of fuzzy sets. A fuzzy set is different from a classical set because it breaks the Law of the Excluded Middle: an item may belong to a fuzzy set and its complement at the same time, with the same or different degrees of membership. The degree of membership of an item in a fuzzy set can be any real number between 0 and 1. This property enables us to deal with all those statements whose truth is a matter of degree. Fuzzy logic plays a relevant role in the field of Artificial Intelligence because it enables decision-making in complex situations where many intertwined variables are involved. Traditionally, fuzzy logic is implemented through software on a computer or, even better, through analog electronic circuits. Recently, the idea of using molecules and chemical reactions to process fuzzy logic has been promoted. In fact, the molecular world is fuzzy in its essence. The overlapping of quantum states, on the one hand, and the conformational heterogeneity of large molecules, on the other, enable context-specific functions to emerge in response to changing environmental conditions. Moreover, analog input–output relationships, involving not only electrical but also other physical and chemical variables, can be exploited to build fuzzy logic systems. The development of "fuzzy chemical systems" is tracing a new path in the field of artificial intelligence. This new path shows that artificially intelligent systems can be implemented not only through software and electronic circuits but also through solutions of properly chosen chemical compounds. The design of chemical artificially intelligent systems and chemical robots promises to have a significant impact on science, medicine, economy, security, and wellbeing.
    Therefore, it is my great pleasure to announce a Special Issue of Molecules entitled "The Fuzziness in Molecular, Supramolecular, and Systems Chemistry." All researchers who experience the fuzziness of the molecular world or use fuzzy logic to understand chemical complex systems will be interested in this book.
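The defining property described above, an element belonging to both a fuzzy set and its complement with nonzero degrees, is easy to make concrete. A minimal sketch with an arbitrary triangular membership function (the "warm temperature" example is illustrative, not from the article):

```python
def mu_warm(temp_c):
    """Degree to which a temperature counts as 'warm' (triangular, 15-30 degrees C)."""
    if temp_c <= 15 or temp_c >= 30:
        return 0.0
    if temp_c <= 22.5:
        return (temp_c - 15) / 7.5   # rising edge of the triangle
    return (30 - temp_c) / 7.5       # falling edge

t = 20.0
degree = mu_warm(t)          # membership of t in 'warm'
complement = 1.0 - degree    # standard fuzzy complement: membership in 'not warm'
# In classical logic exactly one of these would equal 1;
# here both are strictly positive, breaking the Law of the Excluded Middle.
```

The choice of membership function is part of the modelling; triangular and trapezoidal shapes are common simply because they are cheap to evaluate, whether in software, analog circuits, or, as this Special Issue proposes, chemistry.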

    Neuromorphic Architecture With 1M Memristive Synapses for Detection of Weakly Correlated Inputs

    No full text

    Computational Design of Nanomaterials

    Get PDF
    The development of materials with tailored functionalities and with continuously shrinking linear dimensions towards (and below) the nanoscale is going to revolutionize not only state-of-the-art fabrication technologies, but also the computational methodologies used to model material properties. Specifically, atomistic methodologies are becoming increasingly relevant in the field of materials science as a fundamental tool for gaining understanding of, as well as pre-designing (in silico material design), the behavior of nanoscale materials in response to external stimuli. The major long-term goal of atomistic modelling is to obtain structure-function relationships at the nanoscale, i.e. to correlate a definite response of a given physical system with its specific atomic conformation and, ultimately, with its chemical composition and electronic structure. This clearly has its counterpart in the development of bottom-up fabrication technologies, which also require detailed control and fine tuning of physical and chemical properties at sub-nanometer and nanometer length scales. The current work provides an overview of different applications of atomistic approaches to the study of nanoscale materials. We illustrate how the use of first-principles electronic structure methodologies, quantum-mechanical molecular dynamics, and appropriate methods to model the electrical and thermal response of nanoscale materials provides a solid starting point to shed light on the way such systems can be manipulated to control their electrical, mechanical, or thermal behavior. Typical topics addressed here include the interplay between mechanical and electronic degrees of freedom in carbon-based nanoscale materials with potential relevance for designing nanoscale switches, thermoelectric properties at the single-molecule level and their control via specific chemical functionalization, and electrical and spin-dependent properties in biomaterials.
    We further show how phenomenological models can be efficiently applied to gain first insight into the behavior of complex nanoscale systems for which first-principles electronic structure calculations become computationally expensive. This will become especially clear in the case of biomolecular systems and organic semiconductors.
