
    Combinatorial optimization solving by coherent Ising machines based on spiking neural networks

    Full text link
    Spiking neural networks are a form of neuromorphic computing believed to raise the level of machine intelligence and to offer advantages for quantum computing. In this work, we address this issue by designing an optical spiking neural network and show that it can accelerate computation, especially on combinatorial optimization problems. Here the spiking neural network is constructed from antisymmetrically coupled degenerate optical parametric oscillator pulses and dissipative pulses. A nonlinear transfer function is chosen to mitigate amplitude inhomogeneity and to destabilize the resulting local minima, in line with the dynamical behavior of spiking neurons. Numerical results show that these spiking neural network coherent Ising machines perform excellently on combinatorial optimization problems, which is expected to open new applications for neural computing and optical computing. Comment: 10 pages, 5 figures, accepted by Quantu
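The Ising-machine dynamics described above can be illustrated with a conventional mean-field CIM simulation (not the paper's spiking variant): each DOPO pulse amplitude x_i evolves under pump gain, saturation, and Ising coupling, and spins are read out as sign(x_i). All parameter values below are illustrative assumptions.

```python
import numpy as np

def cim_solve(J, pump=1.5, coupling=0.1, dt=0.02, steps=2000, seed=0):
    """Mean-field coherent Ising machine: heuristically minimize E(s) = -1/2 s^T J s."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.01, J.shape[0])  # DOPO pulse amplitudes, near-zero start
    for _ in range(steps):
        # linear gain minus saturation, plus Ising coupling injection
        dx = (pump - 1.0 - x**2) * x + coupling * (J @ x)
        x += dt * dx
    return np.sign(x).astype(int)

def ising_energy(J, s):
    return -0.5 * s @ J @ s

# 4-spin ferromagnet: ground states are all-up / all-down with energy -6
J = np.ones((4, 4)) - np.eye(4)
s = cim_solve(J)
```

On this small ferromagnetic instance the collective in-phase mode has the largest growth rate, so all amplitudes acquire the same sign and the readout lands in a ground state.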

    GenNet: A Platform for Hybrid Network Experiments

    Get PDF
    We describe General Network (GenNet), a software plugin for the Real-Time eXperiment Interface (RTXI) dynamic clamp system that allows straightforward and flexible implementation of hybrid network experiments. This extension to RTXI supports hybrid networks containing an arbitrary number of simulated and real neurons, significantly improving upon previous solutions, which were limited in particular by the number of cells supported. The benefits of this system include the ability to rapidly and easily set up and perform scalable hybrid network experiments and the ability to scan through ranges of parameters. We present instructions for installing, running, and using GenNet for hybrid network experiments and provide several example uses of the system.
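The core of any such hybrid network experiment is a real-time loop in which a simulated neuron's spikes drive a computed synaptic current into a biological cell. The sketch below is a conceptual illustration, not GenNet's actual API: a simulated leaky integrate-and-fire neuron excites a "real" cell through a conductance-based synapse; in a true dynamic-clamp setup, v_real would be read from the amplifier each tick and i_syn written back, and all parameters here are illustrative.

```python
# Hypothetical hybrid-network tick loop (illustrative parameters).
dt = 0.1                         # ms, period of the real-time loop
tau_m, v_th = 10.0, 1.0          # LIF membrane time constant (ms), spike threshold
tau_syn, g_max, e_syn = 5.0, 0.5, 3.0  # synaptic decay, peak conductance, reversal

v_sim, g, v_real = 0.0, 0.0, 0.0
spikes = 0
for step in range(5000):         # 500 ms of simulated time
    # virtual neuron: leaky integration of a constant suprathreshold drive
    v_sim += dt / tau_m * (-v_sim + 1.5)
    if v_sim >= v_th:            # spike: reset and increment synaptic conductance
        v_sim = 0.0
        g += g_max
        spikes += 1
    g -= dt / tau_syn * g        # exponential synaptic decay
    i_syn = g * (e_syn - v_real) # current that would be injected into the real cell
    v_real += dt / tau_m * (-v_real + i_syn)  # stand-in for the biological neuron
```

Scaling this pattern to many simulated cells, or scanning g_max across trials, is exactly the kind of parameter sweep the abstract describes.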

    When channels cooperate or capacitance varies

    Get PDF
    Electrical signaling in neurons is shaped by their specialized excitable cell membranes. Commonly, it is assumed that the ion channels embedded in the membrane gate independently and that the electrical capacitance of neurons is constant. However, not all excitable membranes appear to adhere to these assumptions. On the contrary, ion channels are observed to gate cooperatively in several circumstances, and the notion of one fixed value for the specific membrane capacitance (per unit area) across neuronal membranes has recently been challenged. How these deviations from the original form of conductance-based neuron models affect neurons' electrical properties has not been extensively explored and is the focus of this cumulative thesis. In the first project, strongly cooperative voltage-gated ion channels are proposed to provide a membrane potential-based mechanism for cellular short-term memory. Based on a mathematical model of cooperative gating, it is shown that coupled channels assembled into small clusters act as an ensemble of bistable conductances. The correspondingly large memory capacity of such an ensemble yields an alternative explanation for graded forms of cell-autonomous persistent firing, an observed firing mode implicated in working memory. In the second project, a novel dynamic clamp protocol, the capacitance clamp, is developed to artificially modify capacitance in biological neurons. Experimental means to systematically investigate capacitance, a basic parameter shared by all excitable cells, had previously been missing. The technique, thoroughly tested in simulations and experiments, is used to monitor how capacitance affects temporal integration and energetic costs of spiking in dentate gyrus granule cells. Combined, the projects identify computationally relevant consequences of these often neglected facets of neuronal membranes and extend the modeling and experimental techniques to further study them.
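The capacitance clamp admits a compact numerical sketch. The underlying idea, under the simplifying assumptions and normalized, illustrative parameters below, is that injecting a current proportional to the membrane voltage derivative, I_clamp = (C_cell - C_target) * dV/dt, makes a cell with capacitance C_cell charge as if it had capacitance C_target; real implementations estimate dV/dt from sampled voltage and filter it.

```python
# Euler simulation of a passive membrane (C dV/dt = -V/R + I) with a
# capacitance-clamp current that emulates a doubled capacitance.
# All parameters are normalized and illustrative (not from the thesis).
C_cell, C_target = 1.0, 2.0   # actual vs. emulated capacitance
R, I_ext = 1.0, 1.0           # membrane resistance, step current
dt, T = 0.001, 2.0            # time step and total duration

v, dvdt_prev = 0.0, 0.0
for _ in range(int(T / dt)):
    i_leak = -v / R
    # clamp current computed from the previous derivative estimate
    # (a real-time implementation would low-pass filter this estimate
    # to avoid amplifying measurement noise)
    i_clamp = (C_cell - C_target) * dvdt_prev
    dvdt = (I_ext + i_leak + i_clamp) / C_cell
    v += dt * dvdt
    dvdt_prev = dvdt

# With tau_target = R * C_target = 2, the charging curve should follow
# v(t) = I_ext * R * (1 - exp(-t / tau_target)), so v(T=2) is near 1 - exp(-1).
```

Substituting the clamp current into C_cell dV/dt = I + I_clamp collapses algebraically to C_target dV/dt = I, which is why the emulation works despite the cell's physical capacitance being unchanged.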

    Detecting partial synchrony in a complex oscillatory network using pseudo-vortices

    Full text link
    Partial synchronization is a characteristic form of phase dynamics of coupled oscillators on various natural and artificial networks, and it can remain undetected due to the complexity of the systems. Drawing an analogy between pairwise asynchrony of oscillators and topological defects, i.e., vortices, in the two-dimensional XY spin model, we propose a robust, data-driven method to identify partial synchronization on complex networks. The proposed method is based on an integer matrix whose elements are pseudo-vorticities that discretely quantify the asynchronous phase dynamics of every pair of oscillators, yielding graphical and entropic representations of partial synchrony. As a first trial, we apply our method to 200 FitzHugh-Nagumo neurons on a complex small-world network. Partially synchronized chimera states are revealed by discriminating them from synchronized states even in the presence of phase lags; such phase lags also appear in partial synchronization within chimera states. Our topological, graphical, and entropic method is implemented solely with measurable phase-dynamics data, which should allow a straightforward application to general oscillatory networks, including neural networks in the brain. Comment: 9 pages, 5 figure
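One way to make the pairwise-asynchrony idea concrete (a simplified reading, not necessarily the paper's exact definition) is to count, for each pair of oscillators, the net number of 2π slips of their unwrapped phase difference: locked pairs score 0 even with a constant phase lag, while drifting pairs score a nonzero integer.

```python
import numpy as np

def pseudo_vorticity(theta_i, theta_j):
    """Net number of 2*pi phase slips between two wrapped phase time series."""
    diff = np.unwrap(theta_i - theta_j)  # remove artificial 2*pi wrap jumps
    return int(np.round((diff[-1] - diff[0]) / (2 * np.pi)))

# three oscillators sampled over 4 s (wrapped phases, as measured in practice)
t = np.linspace(0.0, 4.0, 4001)
phase_a = np.mod(2 * np.pi * 1.0 * t, 2 * np.pi)        # 1.0 Hz
phase_b = np.mod(2 * np.pi * 1.0 * t + 0.3, 2 * np.pi)  # locked, constant lag
phase_c = np.mod(2 * np.pi * 1.5 * t, 2 * np.pi)        # detuned by 0.5 Hz
```

Assembling these integers for every oscillator pair gives exactly the kind of integer matrix the abstract describes, from which graphical and entropic summaries can be computed.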

    Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing

    Get PDF
    Human society now faces grand challenges in satisfying the growing demand for computing power while keeping energy consumption sustainable. With CMOS technology scaling approaching its end, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computation in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the way towards a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics, and are therefore either incompatible with RRAM synapses or require extensive peripheral circuitry to modulate a synapse, leaving them deficient in learning capability. As a result, they forfeit most of the density advantages gained by adopting nanoscale devices and fail to realize a functional computing system. This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental and significant elements for brain-inspired computing. Versatile CMOS spiking neurons that combine integrate-and-fire behavior, drive capability for dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning in compact integrated circuit modules are presented. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations.
A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurements successfully proved the idea. The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone towards next-generation energy-efficient brain-inspired cognitive computing systems.
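The in situ STDP mentioned above can be illustrated in software with the standard pair-based exponential rule (an idealized model, not the chip's circuit): a synapse potentiates when the presynaptic spike precedes the postsynaptic one and depresses otherwise, with a magnitude that decays with the spike-time gap. Parameter values are illustrative.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation (LTP)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post before pre -> depression (LTD)
        return -a_minus * math.exp(dt / tau)
    return 0.0    # coincident spikes: no change in this idealization
```

Sweeping t_post - t_pre from negative to positive values traces out the classic asymmetric exponential STDP window; on-chip, the RRAM conductance itself plays the role of the weight being updated.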