
    Flexible Neural Electrode Array Based on Porous Graphene for Cortical Microstimulation and Sensing.

    Neural sensing and stimulation have been the backbone of neuroscience research, brain-machine interfaces, and clinical neuromodulation therapies for decades. To date, most neural stimulation systems have relied on sharp metal microelectrodes with poor electrochemical properties that induce extensive tissue damage and significantly degrade the long-term stability of implantable systems. Here, we demonstrate a flexible cortical microelectrode array based on porous graphene, which is capable of efficient electrophysiological sensing and stimulation from the brain surface, without penetrating the tissue. Porous graphene electrodes show superior impedance and charge injection characteristics, making them ideal for high-efficiency cortical sensing and stimulation. They exhibit no physical delamination or degradation even after 1 million biphasic stimulation cycles, confirming high endurance. In in vivo experiments in rodents, the same array is used to sense brain activity patterns with high spatio-temporal resolution and to control leg muscles with high-precision electrical stimulation from the cortical surface. The flexible porous graphene array offers a minimally invasive but highly efficient neuromodulation scheme with potential applications in cortical mapping, brain-computer interfaces, and the treatment of neurological disorders, where high-resolution, simultaneous recording and stimulation of neural activity are crucial.
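    The charge a stimulation electrode can deliver per pulse phase is commonly normalized by its geometric area. As a quick illustration of that figure of merit, the sketch below computes the charge density of one phase of a biphasic pulse; the pulse amplitude, phase width, and electrode diameter are assumed example values, not parameters taken from the paper.

```python
import math

# Charge injected per phase of a biphasic current pulse, normalized by the
# electrode's geometric area. All numbers below are illustrative assumptions.
def charge_per_phase_uc_cm2(current_ua, phase_width_us, diameter_um):
    """Return charge density in uC/cm^2 for one pulse phase."""
    charge_uc = (current_ua * 1e-6) * (phase_width_us * 1e-6) * 1e6  # C -> uC
    radius_cm = (diameter_um * 1e-4) / 2.0                           # um -> cm
    return charge_uc / (math.pi * radius_cm ** 2)

# Example: a 100 uA, 200 us phase on a 200-um-diameter site (~64 uC/cm^2).
print(f"{charge_per_phase_uc_cm2(100, 200, 200):.1f} uC/cm^2")
```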

    Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays.

    Resistive RAM crossbar arrays offer an attractive solution for minimizing off-chip data transfer and parallelizing on-chip computations for neural networks. Here, we report a hardware/software co-design approach based on low-energy subquantum conductive bridging RAM (CBRAM®) devices and a network pruning technique to reduce network-level energy consumption. First, we demonstrate low-energy subquantum CBRAM devices exhibiting the gradual switching characteristics important for implementing weight updates in hardware during unsupervised learning. Then, we develop a network pruning algorithm that can be employed during training, unlike previous network pruning approaches, which were applied only for inference. Using a 512 kbit subquantum CBRAM array, we experimentally demonstrate high recognition accuracy on the MNIST dataset for a digital implementation of unsupervised learning. Our hardware/software co-design approach can pave the way toward resistive-memory-based neuro-inspired systems that can autonomously learn and process information in power-limited settings.
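    To make the training-time pruning idea concrete, here is a minimal sketch of an unsupervised winner-take-all layer in which weak synapses are masked out while learning proceeds, so they receive no further updates. The update rule, threshold, and array sizes are generic illustrative choices, not the co-design procedure or device parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(784, 100))  # synaptic weights (conductances)
mask = np.ones_like(W, dtype=bool)          # False = pruned: no reads, no updates

def train_step(x, lr=0.01, prune_thresh=0.05):
    """One unsupervised update with pruning applied during training."""
    global W, mask
    y = x @ (W * mask)                      # forward pass over active synapses
    winner = int(np.argmax(y))              # winner-take-all output neuron
    # Hebbian-style update, only on the winning column's unpruned synapses
    W[:, winner] += lr * (x - W[:, winner]) * mask[:, winner]
    # Prune weak synapses as training proceeds; they stay frozen afterwards
    mask &= (W > prune_thresh)

train_step(rng.random(784))                 # example call on one input vector
```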

    A memristive nanoparticle/organic hybrid synapstor for neuro-inspired computing

    A large effort is devoted to research on new computing paradigms associated with innovative nanotechnologies that could complement and/or provide alternatives to the classical von Neumann/CMOS paradigm. Among various proposals, the Spiking Neural Network (SNN) appears to be a strong candidate. (i) In terms of function, SNNs that use relative spike timing for information coding are deemed the most effective at taking inspiration from the brain, allowing fast and efficient processing of information for complex recognition or classification tasks. (ii) In terms of technology, SNNs may benefit the most from nanodevices, because SNN architectures are intrinsically tolerant of defective devices and performance variability. Here we demonstrate Spike-Timing-Dependent Plasticity (STDP), a basic and fundamental learning function of the brain, with a new class of synapstor (synapse-transistor) called the Nanoparticle Organic Memory Field Effect Transistor (NOMFET). We show that this learning function is obtained with a simple hybrid material made by the self-assembly of gold nanoparticles and organic semiconductor thin films. Beyond mimicking biological synapses, we also demonstrate how the shape of the applied spikes can tailor the STDP learning function. Moreover, the experiments and modeling show that this synapstor is a memristive device. Finally, these synapstors are successfully coupled with a CMOS platform emulating the pre- and post-synaptic neurons, and a behavioral macro-model is developed in a standard device simulator. (Published in Adv. Funct. Mater., online Dec. 13, 2011.)
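    For readers unfamiliar with STDP, the sketch below shows the textbook form of the rule: the sign and magnitude of the weight change depend on the relative timing of the pre- and post-synaptic spikes. The exponential kernel and constants are generic choices for illustration, not the NOMFET's measured learning window (which, per the abstract, can be tailored by the spike shape).

```python
import math

# Textbook STDP kernel: dt = t_post - t_pre (ms). Constants are illustrative.
A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant of the learning window (ms)

def stdp_dw(dt_ms):
    """Weight change for a pre/post spike pair separated by dt_ms."""
    if dt_ms > 0:               # pre fires before post: potentiate
        return A_PLUS * math.exp(-dt_ms / TAU)
    return -A_MINUS * math.exp(dt_ms / TAU)  # post before pre: depress

print(stdp_dw(5.0), stdp_dw(-5.0))  # small positive dw, small negative dw
```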

    First-Principles Study for Evidence of Low Interface Defect Density at Ge/GeO2 Interfaces

    We present evidence of the low defect density at Ge/GeO2 interfaces based on first-principles total-energy calculations. The energy advantages of atom emission from the Ge/GeO2 interface to release the stress due to lattice mismatch are compared with those from the Si/SiO2 interface. The energy advantages for Ge/GeO2 are found to be smaller than those for Si/SiO2 because of the high flexibility of the bonding networks in GeO2. Thus, the suppression of Ge-atom emission during the oxidation process leads to the improved electrical properties of Ge/GeO2 interfaces.

    Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array

    Recent advances in neuroscience, together with nanoscale electronic device technology, have generated great interest in realizing brain-like computing hardware using emerging nanoscale memory devices as synaptic elements. Although experimental work has demonstrated the operation of nanoscale synaptic elements at the single-device level, network-level studies have been limited to simulations. In this work, we experimentally demonstrate array-level associative learning using phase change synaptic devices connected in a grid-like configuration similar to the organization of the biological brain. Implementing Hebbian learning with phase change memory cells, the synaptic grid was able to store presented patterns and recall missing patterns in an associative, brain-like fashion. We found that the system is robust to device variations, and that large variations in cell resistance states can be accommodated by increasing the number of training epochs. We illustrate the tradeoff between the variation tolerance of the network and its overall energy consumption, and find that energy consumption decreases significantly for lower variation tolerance. (Original article: http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00205/abstrac)
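    The store-and-recall behavior described here follows the classic Hebbian outer-product scheme. Below is a small Hopfield-style toy model of that scheme, in which a corrupted pattern converges back to a stored one; it is a software illustration of the principle only, with made-up patterns, not the phase change array experiment itself.

```python
import numpy as np

# Hebbian outer-product storage of +/-1 patterns, then associative recall.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])

W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)                 # no self-connections

# Present a corrupted version of the first pattern (one element flipped)
probe = np.array([1, -1, 1, -1, -1, -1, 1, -1])
for _ in range(5):                       # iterate until the state settles
    probe = np.sign(W @ probe)
print(probe)                             # recovers the first stored pattern
```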

    Multistate resistive switching memory for synaptic memory applications

    Reproducible low-bias bipolar resistive switching memory in HfZnOx-based memristors is reported. Modifying the concentration of oxygen vacancies in the ternary oxide film, facilitated by adding ZnO to HfO2, results in improved memory operation compared with the single binary oxides. Controlled multistate memory operation is achieved by controlling the current compliance and the RESET stop voltage. High DC cycling stability of the multistate memory performance is observed for up to 400 cycles. Conventional synaptic operations, namely potentiation, depression plasticity, and the Ebbinghaus forgetting process, are also studied. The memory mechanism is shown to originate from the migration of oxygen vacancies and the modulation of the interfacial layers.
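    The synaptic behaviors named here have simple phenomenological forms: potentiation and depression are typically modeled as saturating conductance updates, and Ebbinghaus forgetting as an exponential retention decay. The sketch below illustrates those standard models; the conductance bounds, step size, and time constant are placeholders, not fitted HfZnOx device data.

```python
import math

G_MIN, G_MAX = 1.0, 10.0             # conductance bounds in uS (assumed values)

def potentiate(g, alpha=0.1):
    return g + alpha * (G_MAX - g)   # saturating increase toward G_MAX

def depress(g, alpha=0.1):
    return g - alpha * (g - G_MIN)   # saturating decrease toward G_MIN

def retention(t_s, tau_s=25.0):
    """Ebbinghaus-style forgetting curve R(t) = exp(-t / tau)."""
    return math.exp(-t_s / tau_s)

g = G_MIN
for _ in range(10):                  # ten potentiating pulses
    g = potentiate(g)
print(f"g after 10 pulses: {g:.2f} uS, retention at 50 s: {retention(50):.2f}")
```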

    A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications

    Inspired by the computational efficiency of the biological brain, spiking neural networks (SNNs) emulate biological neural networks, neural codes, dynamics, and circuitry. SNNs show great potential for implementing unsupervised learning using in-memory computing. Here, we report an algorithmic optimization that improves the energy efficiency of online learning with SNNs on emerging non-volatile memory (eNVM) devices. We develop a pruning method for SNNs that exploits the output firing characteristics of neurons. Our pruning method can be applied during network training, unlike previous approaches in the literature that prune already-trained networks. This approach prevents unnecessary updates of network parameters during training. The algorithmic optimization complements the energy efficiency of eNVM technology, which offers a unique in-memory computing platform for parallelizing neural network operations. Our SNN maintains ~90% classification accuracy on the MNIST dataset with up to ~75% pruning, significantly reducing the number of weight updates. The SNN and pruning scheme developed in this work can pave the way toward eNVM-based neuro-inspired systems for energy-efficient online learning in low-power applications.
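    Since the abstract specifies that pruning is driven by output firing characteristics, the sketch below shows one plausible realization: output neurons whose firing rate stays below a threshold are frozen, so their incoming synapses skip all further weight updates. The threshold, warm-up length, and layer sizes are assumptions for illustration, not the paper's soft-pruning criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 784, 400
W = rng.random((n_in, n_out))            # input-to-output synaptic weights
spike_counts = np.zeros(n_out)           # accumulated output spikes
samples_seen = 0
active = np.ones(n_out, dtype=bool)      # False = output column soft-pruned

def observe_and_prune(out_spikes, min_rate=0.01, warmup=500):
    """out_spikes: (batch, n_out) 0/1 spike matrix from the current batch."""
    global spike_counts, samples_seen, active
    spike_counts += out_spikes.sum(axis=0)
    samples_seen += out_spikes.shape[0]
    if samples_seen >= warmup:           # only prune after a warm-up period
        active &= (spike_counts / samples_seen) >= min_rate

def apply_update(dW):
    W[:, active] += dW[:, active]        # pruned columns receive no updates

observe_and_prune((rng.random((32, n_out)) < 0.02).astype(float))
```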