7 research outputs found

    Machine Learning-powered Compact Modeling of Stochastic Electronic Devices using Mixture Density Networks

    Full text link
    The relentless pursuit of miniaturization and performance enhancement in electronic devices has led to a fundamental challenge in the field of circuit design and simulation: how to accurately account for the inherent stochastic nature of certain devices. While conventional deterministic models have served as indispensable tools for circuit designers, they fall short when it comes to capturing the subtle yet critical variability exhibited by many electronic components. In this paper, we present an innovative approach that transcends the limitations of traditional modeling techniques by harnessing the power of machine learning, specifically Mixture Density Networks (MDNs), to faithfully represent and simulate the stochastic behavior of electronic devices. We demonstrate our approach by modeling heater cryotrons, where the model captures the stochastic switching dynamics observed in experiment, achieving a 0.82% mean absolute error for switching probability. This paper marks a significant step forward in the quest for accurate and versatile compact models, poised to drive innovation in the realm of electronic circuits.
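
    As a rough illustration of the modeling idea, the sketch below shows a minimal Mixture Density Network in PyTorch that maps device operating conditions to a Gaussian mixture over a scalar output; the layer sizes, mixture count, and input features are assumptions chosen for illustration, not the authors' published architecture.

    import torch
    import torch.nn as nn

    class MDN(nn.Module):
        """Minimal MDN sketch: predicts a Gaussian mixture over a scalar output."""
        def __init__(self, n_inputs=2, n_hidden=32, n_components=5):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
            self.pi = nn.Linear(n_hidden, n_components)         # mixture weights (logits)
            self.mu = nn.Linear(n_hidden, n_components)         # component means
            self.log_sigma = nn.Linear(n_hidden, n_components)  # component log std-devs

        def forward(self, x):
            h = self.body(x)
            return self.pi(h), self.mu(h), self.log_sigma(h)

    def mdn_nll(pi_logits, mu, log_sigma, y):
        """Negative log-likelihood of targets y under the predicted mixture."""
        log_pi = torch.log_softmax(pi_logits, dim=-1)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_prob = comp.log_prob(y.unsqueeze(-1))    # per-component log-likelihoods
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

    Training minimizes mdn_nll; at inference time, sampling from the predicted mixture reproduces cycle-to-cycle variability instead of returning a single deterministic output.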

    Stochastic computing system hardware design for convolutional neural networks optimized for accuracy, area and energy efficiency

    Get PDF
    Stochastic computing (SC) is an alternative computing paradigm that can lead to designs with lower area and power consumption than conventional binary-encoded (BE) deterministic computing. In SC, numbers are encoded as bit-streams of '0's and '1's, and SC computation elements (or functions) operate on one or more bit-streams. To obtain accurate results, some functions require the bit-streams to be correlated, while others require uncorrelated bit-streams or a combination of both. The relationship between SC function accuracy and correlation is not well studied in previous works, so managing correlation across the SC system is a key challenge in the effort to achieve optimum accuracy. In addition, to perform SC computation, the input values must be converted from the BE domain to SC and, on completion of the computation, back to BE to obtain the results. These conversion processes require circuitry that typically consumes over 80% of the overall SC system area, which is another key challenge. To address these challenges, this thesis proposes a framework for end-to-end system design optimized for accuracy and area. The framework provides guidelines for designing an effective SC function or system that exploits correlation, and is applied in designing the SC functional units and the complete SC system for a convolutional neural network (CNN), the dominant approach in the implementation of recognition systems. This thesis shows that although a CNN is a compute-intensive and resource-demanding algorithm, the proposed SC design framework makes it possible to implement a CNN in an embedded system with a limited area and power budget. Several novel SC-based functions are proposed that outperform previous works and obtain significant area savings and high accuracy as replacements for the equivalent BE functions; these include the inner product, max pooling, the ReLU activation function, and average pooling. Training considerations are then specified to enable low error rates for the SC-based CNN. Experimental results show that the SC-based CNN suffers no or only minor accuracy degradation compared to its BE counterpart: it achieves 99.6% and 96.25% classification accuracy on the MNIST digit classification and AT&T face recognition datasets, respectively, and an SC-based ResNet-20 model achieves 86.5% classification accuracy on the CIFAR-10 object dataset. To rapidly map an SC system onto an FPGA, a generic design strategy for high-level synthesis of SC computation engines is proposed. The SC-based CNN hardware on FPGA obtains the lowest resource utilization compared to previous FPGA-based CNN accelerators, and the proposed hardware architecture achieves 277.46 GOP/s/W energy efficiency, outperforming previous works.
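
    The correlation effect described above can be seen with a short, self-contained sketch (a toy model of unipolar SC, not the thesis's hardware): ANDing two independently generated bit-streams approximates multiplication, while ANDing two streams generated from a shared random source approximates the minimum.

    import random

    def encode(value, length, rng):
        """Unipolar SC encoding: bit-stream with P(bit = 1) = value, value in [0, 1]."""
        return [1 if rng.random() < value else 0 for _ in range(length)]

    def decode(stream):
        """Recover the encoded value as the fraction of 1s in the stream."""
        return sum(stream) / len(stream)

    N = 4096
    a, b = 0.5, 0.75

    # Uncorrelated streams (independent sources): AND approximates a * b = 0.375.
    sa = encode(a, N, random.Random(1))
    sb = encode(b, N, random.Random(2))
    print(decode([x & y for x, y in zip(sa, sb)]))

    # Correlated streams (shared random draws): AND approximates min(a, b) = 0.5.
    rng = random.Random(3)
    draws = [rng.random() for _ in range(N)]
    ca = [1 if r < a else 0 for r in draws]
    cb = [1 if r < b else 0 for r in draws]
    print(decode([x & y for x, y in zip(ca, cb)]))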

    2022 roadmap on neuromorphic computing and engineering

    Full text link
    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously, and this data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in its major areas, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community give their own view of the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.

    High-Density Neurochemical Microelectrode Array to Monitor Neurotransmitter Secretion

    Get PDF
    Neuronal exocytosis facilitates the propagation of information through the nervous system pertaining to bodily function, memory, and emotions. Using amperometry, an electrochemical technique that directly detects electroactive molecules, the sub-millisecond dynamics of exocytosis can be revealed and the modulation of neurotransmitter secretion due to neurodegenerative diseases or pharmacological treatments can be studied. Amperometric detection relies on the exchange of electrons in a redox reaction at an electrochemically sensitive electrode: as electroactive molecules such as dopamine undergo oxidation, electrons are transferred from the molecule to the electrode, and the resulting oxidation current is recorded. Despite its significance, traditional single-cell amperometry is a costly, labor-intensive, and low-throughput procedure. The focus of this dissertation is the development of a monolithic CMOS-based neurochemical sensing system that provides high throughput of up to 1024 single-cell recordings in a single experiment, significantly reducing the number of experiments required for studying the effects of neurodegenerative diseases or new pharmacological treatments on the exocytosis process. The neurochemical detection system detailed in this dissertation is based on a CMOS amplifier array that contains 1024 independent electrode-amplifier units, each of which contains a transimpedance amplifier with noise performance comparable to that of the high-quality electrophysiology amplifiers used for traditional single-cell amperometry. Using this novel technology, single exocytosis events are monitored simultaneously from numerous single cells to reveal the secretion characteristics of groups of cells before and after pharmacological treatments that target the modulation of neurotransmitters in the brain, such as drugs for depression or Parkinson's disease.
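
    As a back-of-the-envelope companion to the amperometry description above, the sketch below integrates a baseline-subtracted current spike and converts the charge to a molecule count via Faraday's law, N = Q / (n e), with n = 2 electrons per oxidized dopamine molecule; the sampling rate and example trace are hypothetical values chosen for illustration, not data from this dissertation.

    import numpy as np

    E_CHARGE = 1.602176634e-19   # elementary charge, C
    N_ELECTRONS = 2              # electrons transferred per dopamine oxidation

    def molecules_released(current_a, sample_rate_hz):
        """Estimate molecules released from a baseline-subtracted current trace (A)."""
        charge_c = np.trapz(current_a, dx=1.0 / sample_rate_hz)   # Q = integral of i dt
        return charge_c / (N_ELECTRONS * E_CHARGE)

    # Synthetic ~1 ms spike peaking near 50 pA, sampled at 100 kHz (illustrative only).
    t = np.arange(0.0, 1e-3, 1e-5)
    spike = 50e-12 * np.exp(-((t - 3e-4) ** 2) / (2 * (5e-5) ** 2))
    print(f"~{molecules_released(spike, 1e5):.2e} molecules")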

    Survey of Stochastic-Based Computation Paradigms

    No full text