6 research outputs found

    Delay dynamics of neuromorphic optoelectronic nanoscale resonators: Perspectives and applications

    With the recent exponential growth of applications using artificial intelligence (AI), the development of efficient and ultrafast brain-like (neuromorphic) systems is crucial for future information and communication technologies. While the implementation of AI systems using computer algorithms of neural networks is emerging rapidly, scientists are just taking the very first steps in the development of the hardware elements of an artificial brain, specifically neuromorphic microchips. In this review article, we present the current state of the art of neuromorphic photonic circuits based on solid-state optoelectronic oscillators formed by nanoscale double-barrier quantum well resonant tunneling diodes. We address, both experimentally and theoretically, the key dynamic properties of recently developed artificial solid-state neuron microchips with delayed perturbations and describe their role in the study of neural activity and regenerative memory. This review covers our recent research work on the excitable and delay dynamic characteristics of both single and autaptic (delayed) artificial neurons, including all-or-none response, spike-based data encoding, storage, signal regeneration, and signal healing. Furthermore, the neural responses of these neuromorphic microchips display all the signatures of extended spatio-temporal localized structures (LSs) of light, which are reviewed here in detail. By taking advantage of the dissipative nature of LSs, we demonstrate potential applications in optical data reconfiguration and in clock and timing at high speeds with short transients. The results reviewed in this article are a key enabler for the development of high-performance optoelectronic devices in future high-speed brain-inspired optical memories and neuromorphic computing.
(C) 2017 Author(s). Funding: Fundação para a Ciência e a Tecnologia (FCT) [UID/Multi/00631/2013]; European Structural and Investment Funds (FEEI) through the Competitiveness and Internationalization Operational Program COMPETE 2020; National Funds through FCT [ALG-01-0145-FEDER-016432/POCI-01-0145-FEDER-016432]; European Commission project iBROW [645369]; project COMBINA [TEC2015-65212-C3-3-P AEI/FEDER UE]; Ramón y Cajal fellowship.
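The all-or-none response mentioned above is the defining signature of an excitable system: perturbations below a threshold decay, while those above it trigger a full-amplitude spike. A minimal sketch of this behaviour, using the generic FitzHugh-Nagumo excitable model rather than the review's resonant-tunneling-diode equations (all parameter values here are illustrative assumptions):

```python
# Illustrative sketch: all-or-none (excitable) response in the
# FitzHugh-Nagumo model. This is a generic excitable system, NOT the
# RTD-based neuron of the review; parameters are standard textbook values.

def spike_peak(kick, a=0.7, b=0.8, eps=0.08, dt=0.05, steps=2000):
    """Euler-integrate FHN from rest after an instantaneous kick to v;
    return the peak membrane variable v reached along the trajectory."""
    v, w = -1.1994, -0.6243   # resting fixed point for these parameters
    v += kick                  # perturbation (e.g. an incoming pulse)
    peak = v
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w          # fast (membrane) variable
        dw = eps * (v + a - b * w)       # slow (recovery) variable
        v, w = v + dt * dv, w + dt * dw
        peak = max(peak, v)
    return peak

# A small kick decays without firing; a large kick produces a spike of
# essentially fixed amplitude -- the all-or-none signature.
subthreshold = spike_peak(0.2)
suprathreshold = spike_peak(1.0)
```

The spike amplitude is set by the system's nullclines, not by the input strength, which is why such oscillators can regenerate degraded pulse streams.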

    MOCAST 2021

    The 10th International Conference on Modern Circuits and Systems Technologies on Electronics and Communications (MOCAST 2021) will take place in Thessaloniki, Greece, from July 5th to July 7th, 2021. The MOCAST technical program covers all aspects of circuit and system technologies, from modeling to design, verification, implementation, and application. This Special Issue presents extended versions of top-ranking papers from the conference. The topics of MOCAST include: analog/RF and mixed-signal circuits; digital circuits and systems design; nonlinear circuits and systems; device and circuit modeling; high-performance embedded systems; systems and applications; sensors and systems; machine learning and AI applications; communication and network systems; power management; imagers, MEMS, medical, and displays; radiation front ends (nuclear and space applications); and education in circuits, systems, and communications.

    Classification using Dopant Network Processing Units


    On-Chip Learning and Inference Acceleration of Sparse Representations

    The past decade has seen a tremendous surge in running machine learning (ML) functions on mobile devices, from mere novelty applications to now indispensable features for the next generation of devices. While mobile platform capabilities range widely, long battery life and reliability are common design concerns that are crucial to remaining competitive. Consequently, state-of-the-art mobile platforms have become highly heterogeneous, combining powerful CPUs with GPUs to accelerate the computation of deep neural networks (DNNs), which are the most common structures used to perform ML operations. But traditional von Neumann architectures are not optimized for the high memory bandwidth and massively parallel computation that DNNs demand, propelling research into non-von Neumann architectures. Re-imagining computer architectures for efficient DNN computation requires focusing on the prohibitive demands presented by DNNs and alleviating them. The two central challenges for efficient computation are (1) large memory storage and movement due to the weights of the DNN and (2) massively parallel multiplications to compute the DNN output. Introducing sparsity into DNNs, where a certain percentage of either the weights or the outputs are zero, greatly helps with both challenges. This, along with algorithm-hardware co-design to compress DNNs, has been demonstrated to greatly reduce the power consumption of hardware that computes DNNs. Additionally, exploring emerging technologies such as non-volatile memories and 3-D stacking of silicon in conjunction with algorithm-hardware co-design will pave the way for the next generation of mobile devices.
Towards the objectives stated above, our specific contributions include (a) an architecture based on a resistive crosspoint array that can update all stored values and compute a matrix-vector multiplication in parallel within a single cycle, (b) a framework for training DNNs with block-wise sparsity to drastically reduce the memory storage and total number of computations required to compute the output of DNNs, (c) an exploration of hardware implementations of sparse DNNs and architectural guidelines to reduce power consumption in monolithic 3-D integrated circuits, and (d) a prototype accelerator chip in 65 nm CMOS for long short-term memory (LSTM) networks trained with the proposed block-wise sparsity scheme.
Doctoral Dissertation, Electrical Engineering, 201
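The idea behind block-wise sparsity is that zeros arranged in aligned tiles, rather than scattered positions, let hardware skip whole blocks of storage and multiplication. A minimal magnitude-based sketch (the function name, block scoring, and keep-ratio parameter are illustrative assumptions, not the dissertation's exact training scheme):

```python
# Hedged sketch of block-wise weight pruning: zero out entire tiles of
# a weight matrix, keeping only the tiles with the largest L2 norms.
# This illustrates the structure of block sparsity, not the dissertation's
# exact training procedure.

def block_sparsify(W, block, keep_ratio):
    """Prune W (list of lists) in place, tile by tile; keep the fraction
    `keep_ratio` of `block x block` tiles with the largest squared norms."""
    rows, cols = len(W), len(W[0])
    tiles = []
    for bi in range(0, rows, block):
        for bj in range(0, cols, block):
            norm = sum(W[i][j] ** 2
                       for i in range(bi, min(bi + block, rows))
                       for j in range(bj, min(bj + block, cols)))
            tiles.append((norm, bi, bj))
    tiles.sort(reverse=True)                      # largest tiles first
    n_keep = max(1, int(len(tiles) * keep_ratio))
    for _, bi, bj in tiles[n_keep:]:              # zero the rest
        for i in range(bi, min(bi + block, rows)):
            for j in range(bj, min(bj + block, cols)):
                W[i][j] = 0.0
    return W
```

Because the surviving weights come in aligned tiles, the index metadata shrinks and an accelerator can skip entire block multiplies, which is where the storage and computation savings claimed above come from.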

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks (a state-of-the-art deep recurrent architecture) in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
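The core of the Delay Network is a small linear state space whose state optimally summarizes a sliding window of input history. A minimal sketch, assuming the Padé/Legendre construction associated with the Delay Network (the function name is illustrative; `d` is the state dimension and `theta` the window length, and this is a simplified reading rather than the thesis code):

```python
# Hedged sketch of the Delay Network's linear state space: m'(t) = A m + B u.
# The (A, B) entries below follow the Pade/Legendre construction attributed
# to the Delay Network / Legendre Memory Unit; treat it as an assumption.

def delay_network_matrices(d, theta):
    """Return (A, B) for a d-dimensional state m(t) that compresses the
    input history u(t - theta') over the window 0 <= theta' <= theta."""
    A = [[(2 * i + 1) / theta * (-1.0 if i < j else (-1.0) ** (i - j + 1))
          for j in range(d)]
         for i in range(d)]
    B = [(2 * i + 1) * (-1.0) ** i / theta for i in range(d)]
    return A, B
```

A quick sanity check: for a constant input, the state should settle onto the first Legendre coefficient, m = (u, 0, ..., 0), since a delayed constant is the same constant at every delay inside the window.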

    The Fuzziness in Molecular, Supramolecular, and Systems Chemistry

    Fuzzy logic is a good model for the human ability to compute with words. It is based on the theory of fuzzy sets. A fuzzy set differs from a classical set in that it breaks the Law of the Excluded Middle: an item may belong to a fuzzy set and to its complement at the same time, with the same or different degrees of membership. The degree of membership of an item in a fuzzy set can be any real number between 0 and 1. This property enables us to deal with all those statements whose truth is a matter of degree. Fuzzy logic plays a relevant role in the field of Artificial Intelligence because it enables decision-making in complex situations, where many intertwined variables are involved. Traditionally, fuzzy logic is implemented through software on a computer or, even better, through analog electronic circuits. Recently, the idea of using molecules and chemical reactions to process fuzzy logic has been promoted. In fact, the molecular world is fuzzy in its essence. The overlapping of quantum states, on the one hand, and the conformational heterogeneity of large molecules, on the other, enable context-specific functions to emerge in response to changing environmental conditions. Moreover, analog input–output relationships, involving not only electrical but also other physical and chemical variables, can be exploited to build fuzzy logic systems. The development of “fuzzy chemical systems” is tracing a new path in the field of artificial intelligence. This new path shows that artificially intelligent systems can be implemented not only through software and electronic circuits but also through solutions of properly chosen chemical compounds. The design of artificially intelligent chemical systems and chemical robots promises to have a significant impact on science, medicine, the economy, security, and well-being.
Therefore, it is my great pleasure to announce a Special Issue of Molecules entitled “The Fuzziness in Molecular, Supramolecular, and Systems Chemistry.” All researchers who experience the fuzziness of the molecular world or use fuzzy logic to understand complex chemical systems will be interested in this book.
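The violation of the Law of the Excluded Middle described above is easy to make concrete. The sketch below uses the standard min/max fuzzy operators; the triangular "warm" membership function is a made-up illustration, not taken from the Special Issue:

```python
# Hedged sketch: an item belonging to a fuzzy set AND its complement with
# nonzero degree. The "warm" membership function is an invented example;
# min/max are the standard fuzzy intersection/union operators.

def mu_warm(t_celsius):
    """Triangular membership: fully 'warm' at 25 C, not at all warm
    at 15 C or 35 C, with degrees between 0 and 1 in between."""
    return max(0.0, 1.0 - abs(t_celsius - 25.0) / 10.0)

def mu_not_warm(t_celsius):
    """Fuzzy complement: degree of membership in 'not warm'."""
    return 1.0 - mu_warm(t_celsius)

# At 20 C the temperature is 'warm' to degree 0.5 and 'not warm' to
# degree 0.5; their intersection is nonzero, which classical logic forbids.
t = 20.0
both = min(mu_warm(t), mu_not_warm(t))
```

In a classical set the intersection of a set with its complement is always empty (degree 0); here it is not, which is exactly the graded behaviour that analog chemical input-output relationships can realize physically.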