Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions
A fundamental aspect of learning in biological neural networks is the
plasticity property which allows them to modify their configurations during
their lifetime. Hebbian learning is a biologically plausible mechanism for
modeling the plasticity property in artificial neural networks (ANNs), based on
the local interactions of neurons. However, the emergence of a coherent global
learning behavior from local Hebbian plasticity rules is not very well
understood. The goal of this work is to discover interpretable local Hebbian
learning rules that can provide autonomous global learning. To achieve this, we
use a discrete representation to encode the learning rules in a finite search
space. These rules are then used to perform synaptic changes, based on the
local interactions of the neurons. We employ genetic algorithms to optimize
these rules to allow learning on two separate tasks (a foraging and a
prey-predator scenario) in online lifetime learning settings. The resulting
evolved rules converged into a set of well-defined interpretable types, that
are thoroughly discussed. Notably, the performance of these rules, while
adapting the ANNs during the learning tasks, is comparable to that of offline
learning methods such as hill climbing.

Comment: Evolutionary Computation Journal
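The abstract describes encoding local Hebbian plasticity rules in a discrete, finite search space and applying them as synaptic updates based on local neuron interactions. A minimal sketch of that idea (not the authors' code; the rule names, Oja variant, and learning rate are illustrative assumptions):

```python
import numpy as np

def hebbian_update(w, pre, post, rule, eta=0.01):
    """Apply one discretely encoded local plasticity rule to a weight matrix.

    w    : (n_post, n_pre) weight matrix
    pre  : (n_pre,)  presynaptic activations
    post : (n_post,) postsynaptic activations
    rule : one of a small discrete set of rule types (illustrative names)
    """
    if rule == "hebb":            # strengthen co-active pre/post pairs
        dw = np.outer(post, pre)
    elif rule == "anti_hebb":     # weaken co-active pre/post pairs
        dw = -np.outer(post, pre)
    elif rule == "oja":           # Hebbian term with a decay that bounds weights
        dw = np.outer(post, pre) - (post ** 2)[:, None] * w
    else:                         # "none": leave the synapses unchanged
        dw = np.zeros_like(w)
    return w + eta * dw

# One update step on a 2x3 weight matrix with a plain Hebbian rule.
w = np.zeros((2, 3))
w = hebbian_update(w, pre=np.array([1.0, 0.0, 1.0]),
                   post=np.array([1.0, 0.5]), rule="hebb")
```

A genetic algorithm, as in the paper, would then search over the discrete rule assignments (one per synapse or neuron type) using lifetime task performance as fitness.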
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize results, and
point to relevant references in the literature.
NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors
© 2016 Cheung, Schultz and Luk. NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation, for example by tuning the degree of parallelism, to deliver optimized performance. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve real-time performance for 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
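The abstract names the integrate-and-fire model as one of the neuronal models NeuroFlow simulates. A minimal sketch of leaky integrate-and-fire dynamics in plain Python (the parameter values are illustrative defaults, not NeuroFlow's):

```python
def lif_simulate(I, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Simulate one leaky integrate-and-fire neuron.

    I : per-timestep input drive (arbitrary units added to dv/dt)
    Returns the list of timesteps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_t in enumerate(I):
        # Leaky integration toward rest, plus the input drive.
        v += dt * ((v_rest - v) / tau + i_t)
        if v >= v_thresh:        # threshold crossing: emit spike, reset
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold drive produces regular spiking.
spikes = lif_simulate([2.0] * 100)
```

A hardware platform like NeuroFlow evaluates this same update rule for hundreds of thousands of neurons in parallel per timestep, which is where the FPGA reconfigurability pays off.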