
    Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing

    As the number of compute nodes explodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing this bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for onboard networking and wireless chip-to-chip links at a 200-GHz carrier frequency connecting neighboring boards in a rack. The control of the novel rate-adaptive optical and mm-wave transceivers requires tight interlinking with the system software for runtime resource management.
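
    The abstract couples rate-adaptive transceivers to runtime resource management. As a rough illustration of that interplay (not the project's actual control software), the Python sketch below picks the lowest link rate that covers the offered traffic with some headroom; the rate steps and per-rate power figures are invented assumptions.

```python
# Hypothetical sketch of a runtime resource manager scaling a rate-adaptive
# optical/mm-wave link to the offered traffic. All numbers are invented.
RATE_STEPS_GBPS = [25, 50, 100, 200]                 # supported link rates (assumption)
POWER_W = {25: 0.8, 50: 1.3, 100: 2.1, 200: 3.6}     # per-rate transceiver power (assumption)

def pick_rate(offered_gbps, headroom=1.2):
    """Choose the lowest rate that still covers demand with some headroom."""
    for rate in RATE_STEPS_GBPS:
        if rate >= offered_gbps * headroom:
            return rate
    return RATE_STEPS_GBPS[-1]

for demand in (10, 40, 90, 180):
    rate = pick_rate(demand)
    print(f"demand {demand:>3} Gb/s -> link rate {rate} Gb/s, power {POWER_W[rate]} W")
```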

    Toward an Energy Efficient Language and Compiler for (Partially) Reversible Algorithms

    We introduce a new programming language for expressing reversibility, Energy-Efficient Language (Eel), geared toward algorithm design and implementation. Eel is the first language to take advantage of a partially reversible computation model, in which programs can be composed of both reversible and irreversible operations. In this model, irreversible operations cost energy for every bit of information created or destroyed. To handle programs of varying degrees of reversibility, Eel supports a log stack to automatically trade energy costs for space costs, and introduces several powerful control-logic operators, including protected conditionals, general conditionals, protected loops, and general loops. In this paper, we present the design and compiler for the three language levels of Eel, along with an interpreter to simulate and annotate the energy costs incurred by a program.

    Comment: 17 pages, 0 additional figures; pre-print to be published in The 8th Conference on Reversible Computing (RC2016)
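
    To make the energy/space trade-off concrete, here is a minimal Python sketch (not Eel itself, whose syntax the abstract does not give): irreversible erasures pay the Landauer bound per bit, while a hypothetical log stack records bits instead of erasing them, converting energy cost into space cost.

```python
# Illustrative model of a partially reversible computation's energy accounting.
from math import log

K_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0                       # temperature, K (assumption)
LANDAUER_J = K_B * T * log(2)   # minimum energy to erase one bit

class PartiallyReversibleMachine:
    def __init__(self):
        self.energy_j = 0.0     # energy spent on irreversible operations
        self.log_stack = []     # bits saved instead of erased (space cost)

    def erase_bits(self, bits, reversible=False):
        """Destroy `bits` bits of information.

        If `reversible`, push the bits onto the log stack (costs space, no energy);
        otherwise pay the Landauer cost for each erased bit.
        """
        if reversible:
            self.log_stack.append(bits)
        else:
            self.energy_j += bits * LANDAUER_J

m = PartiallyReversibleMachine()
m.erase_bits(64)                    # irreversible: pays 64 * kT ln 2
m.erase_bits(64, reversible=True)   # logged instead: space, not energy
print(f"energy spent: {m.energy_j:.3e} J, log stack entries: {len(m.log_stack)}")
```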

    Filament‐Free Bulk Resistive Memory Enables Deterministic Analogue Switching

    Digital computing is nearing its physical limits as computing needs and energy consumption rapidly increase. Analogue‐memory‐based neuromorphic computing can be orders of magnitude more energy efficient at data‐intensive tasks like deep neural networks, but has been limited by the inaccurate and unpredictable switching of analogue resistive memory. Filamentary resistive random access memory (RRAM) suffers from stochastic switching due to the random kinetic motion of discrete defects in the nanometer‐sized filament. In this work, this stochasticity is overcome by incorporating a solid electrolyte interlayer, in this case yttria‐stabilized zirconia (YSZ), toward eliminating filaments. Filament‐free, bulk‐RRAM cells instead store analogue states using the bulk point defect concentration, yielding predictable switching because the statistical ensemble behavior of oxygen vacancy defects is deterministic even when individual defects are stochastic. Both experiments and modeling show that bulk‐RRAM devices using TiO2‐X switching layers and YSZ electrolytes yield deterministic and linear analogue switching for efficient inference and training. Bulk‐RRAM solves many outstanding issues with memristor unpredictability that have inhibited commercialization and can therefore enable unprecedented new applications for energy‐efficient neuromorphic computing. Beyond RRAM, this work shows how harnessing bulk point defects in ionic materials can be used to engineer deterministic nanoelectronic materials and devices.

    A resistive memory cell based on the electrochemical migration of oxygen vacancies for in‐memory neuromorphic computing is presented. By using the average statistical behavior of all oxygen vacancies to store analogue information states, this cell overcomes the stochastic and unpredictable switching plaguing filament‐forming memristors and instead achieves linear, predictable, and deterministic switching.
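
    The core claim, that the ensemble behavior of many stochastic defects is deterministic, is essentially the law of large numbers. The toy Python simulation below (not the paper's device model; all parameters invented) shows the trial-to-trial spread of a stored state shrinking as the defect count grows from filament-like to bulk-like numbers.

```python
# Why many stochastic defects yield a deterministic stored state:
# the relative fluctuation scales roughly as 1/sqrt(N).
import random

def stored_state(n_defects, p_move=0.5, trials=20):
    """Fraction of defects that hop under one programming pulse, over several trials."""
    samples = []
    for _ in range(trials):
        moved = sum(1 for _ in range(n_defects) if random.random() < p_move)
        samples.append(moved / n_defects)
    mean = sum(samples) / trials
    spread = max(samples) - min(samples)
    return mean, spread

for n in (10, 1000, 100000):   # filament-like (few defects) vs. bulk (many defects)
    mean, spread = stored_state(n)
    print(f"N={n:>6}: mean state {mean:.3f}, trial-to-trial spread {spread:.3f}")
```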

    Hardware-algorithm collaborative computing with photonic spiking neuron chip based on integrated Fabry-Pérot laser with saturable absorber

    Photonic neuromorphic computing has emerged as a promising avenue toward building low-latency and energy-efficient non-von-Neumann computing systems. Photonic spiking neural networks (PSNNs) exploit brain-like spatiotemporal processing to realize high-performance neuromorphic computing. However, the nonlinear computation of PSNNs remains a significant challenge. Here, we propose and fabricate, for the first time, a photonic spiking neuron chip based on an integrated Fabry-Pérot laser with a saturable absorber (FP-SA). The nonlinear neuron-like dynamics, including temporal integration, threshold and spike generation, refractory period, and cascadability, were experimentally demonstrated, providing an indispensable building block for constructing PSNN hardware. Furthermore, we propose time-multiplexed spike encoding to realize functional PSNNs far beyond the hardware integration scale limit. PSNNs with single and cascaded photonic spiking neurons were experimentally demonstrated to realize hardware-algorithm collaborative computing, showing the capability to perform classification tasks with a supervised learning algorithm, which paves the way for multi-layer PSNNs for solving complex tasks.

    Comment: 10 pages, 8 figures
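
    For readers unfamiliar with the neuron-like dynamics listed above, the following Python sketch emulates them in software: leaky temporal integration, a firing threshold, spike generation, and a refractory period. It is a behavioral illustration only, not a model of the FP-SA laser physics, and all constants are arbitrary.

```python
# Behavioral leaky integrate-and-fire dynamics: integrate, threshold, spike, refractory.
def lif_neuron(inputs, leak=0.9, threshold=1.0, refractory_steps=3):
    potential, refractory, spikes = 0.0, 0, []
    for x in inputs:
        if refractory > 0:                  # ignore inputs during the refractory period
            refractory -= 1
            spikes.append(0)
            continue
        potential = leak * potential + x    # leaky temporal integration
        if potential >= threshold:          # threshold crossing -> spike generation
            spikes.append(1)
            potential = 0.0
            refractory = refractory_steps
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9, 0.9, 0.1, 0.0]))
```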

    In materia implementation strategies of physical reservoir computing with memristive nanonetworks

    Physical reservoir computing (RC) is a computational framework that exploits the information-processing capabilities of programmable matter, allowing the realization of energy-efficient neuromorphic hardware with fast learning and low training cost. Although self-organized memristive networks have been demonstrated as physical reservoirs able to extract relevant features from spatiotemporal input signals, multiterminal nanonetworks open the possibility of novel computing-implementation strategies. In this work, we report on implementation strategies of in materia RC with self-assembled memristive networks. Besides showing the spatiotemporal information-processing capabilities of self-organized nanowire networks, we show through simulations that the emergent collective dynamics allows unconventional implementations of RC in which the same electrodes can be used as both reservoir inputs and outputs. By comparing different implementation strategies on a digit recognition task, simulations show that the unconventional implementation reduces hardware complexity without limiting computing capabilities, thus providing new insights for taking full advantage of in materia computing toward a rational design of neuromorphic systems.
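
    As a reminder of what "low training cost" means in RC, the sketch below trains only a linear readout on top of a fixed, randomly generated software reservoir (ridge regression on collected states). It is a generic RC illustration with made-up dimensions and task, not the memristive nanowire model used in the paper.

```python
# Generic reservoir computing: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_res = 200, 1, 50

W_in = rng.normal(size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res)) * 0.1     # fixed, untrained recurrent weights

u = rng.uniform(-1, 1, size=(T, n_in))            # input signal
y_target = np.roll(u[:, 0], 3)                    # toy task: recall the input 3 steps back

# Collect reservoir states x(t) = tanh(W_in u(t) + W_res x(t-1))
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W_res @ x)
    X[t] = x

# Train only the linear readout with ridge regression
ridge = 1e-3
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_target)
print("readout training error:", float(np.mean((X @ W_out - y_target) ** 2)))
```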

    High-Performance and Energy-Efficient Leaky Integrate-and-Fire Neuron and Spike Timing-Dependent Plasticity Circuits in 7nm FinFET Technology

    In designing neuromorphic circuits and systems, developing compact and energy-efficient neuron and synapse circuits is essential for high-performance on-chip neural architectures. Toward that end, this work utilizes the advanced low-power and compact 7nm FinFET technology to design leaky integrate-and-fire (LIF) neuron and spike-timing-dependent plasticity (STDP) circuits. In the proposed STDP circuit, only six FinFETs and three small capacitors (two of 10 fF and one of 20 fF) are used to realize STDP learning. Moreover, 12 transistors and two capacitors (20 fF) are employed for the LIF neuron circuit. The evaluation results demonstrate that, besides a 60% area saving, the proposed STDP circuit achieves a 68% improvement in total average power consumption and 43% lower energy dissipation compared to previous works. The proposed LIF neuron circuit demonstrates 34% area, 46% power, and 40% energy savings compared to its counterparts. The neuron can also tune its firing frequency within 5 MHz-330 MHz using an external control voltage. These results emphasize the potential of the proposed neuron and STDP learning circuits for compact and energy-efficient neuromorphic computing systems.
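
    The circuit realizes STDP in analogue hardware; as a reference for the learning rule itself, here is a standard pair-based STDP window in Python with an exponential timing dependence. The amplitudes and time constant are illustrative assumptions, not values from the paper.

```python
# Standard pair-based STDP rule: weight change depends on pre/post spike timing.
from math import exp

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20e-9):
    """Weight change as a function of pre/post spike times (seconds); parameters are assumptions."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        return a_plus * exp(-dt / tau)
    elif dt < 0:    # post before pre -> depression
        return -a_minus * exp(dt / tau)
    return 0.0

print(stdp_delta_w(0.0, 5e-9))   # potentiation
print(stdp_delta_w(5e-9, 0.0))   # depression
```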

    Graphene-PLA (GPLA): A compact and ultra-low power logic array architecture

    The key characteristics of the next generation of ICs for wearable applications include high integration density, small area, low power consumption, high energy efficiency, reliability, and enhanced mechanical properties such as stretchability and transparency. The proper mix of new materials and novel integration strategies is the enabling factor to achieve those design specifications. Moving toward this goal, we introduce a graphene-based regular logic-array structure for energy-efficient digital computing. It consists of graphene p-n junctions arranged into a regular mesh. The resulting structure resembles that of Programmable Logic Arrays (PLAs), hence the name Graphene-PLAs (GPLAs); the high expressive power of graphene p-n junctions and their resistive nature enable the implementation of ultra-low-power adiabatic logic circuits.
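
    Since the GPLA borrows the PLA organization, the short Python sketch below evaluates logic in the usual two-plane form: an AND plane builds product terms and an OR plane sums them per output. It illustrates the logic structure only; the graphene p-n junction implementation and adiabatic operation are not modeled, and the example functions are made up.

```python
# PLA-style evaluation: AND plane of product terms, OR plane of sums.
def pla_eval(inputs, and_plane, or_plane):
    """inputs: dict name -> bool; and_plane: list of literal lists like ['a', '!b'];
    or_plane: one list of product-term indices per output."""
    def literal(lit):
        return not inputs[lit[1:]] if lit.startswith('!') else inputs[lit]
    products = [all(literal(l) for l in term) for term in and_plane]
    return [any(products[i] for i in terms) for terms in or_plane]

# Example: f0 = a·b + !a·c, f1 = b·c
and_plane = [['a', 'b'], ['!a', 'c'], ['b', 'c']]
or_plane = [[0, 1], [2]]
print(pla_eval({'a': True, 'b': True, 'c': False}, and_plane, or_plane))
```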