Evolution and Analysis of Embodied Spiking Neural Networks Reveals Task-Specific Clusters of Effective Networks
Elucidating principles that underlie computation in neural networks is
currently a major research topic of interest in neuroscience. Transfer Entropy
(TE) is increasingly used as a tool to bridge the gap between network
structure, function, and behavior in fMRI studies. Computational models allow
us to bridge the gap even further by directly associating individual neuron
activity with behavior. However, most computational models that have analyzed
embodied behaviors have employed non-spiking neurons. On the other hand,
computational models that employ spiking neural networks tend to be restricted
to disembodied tasks. We demonstrate, for the first time, the artificial
evolution and TE analysis of embodied spiking neural networks performing a
cognitively interesting behavior. Specifically, we evolved an agent controlled
by an Izhikevich neural network to perform a visual categorization task. The
smallest networks capable of performing the task were found by repeating
evolutionary runs with different network sizes. Informational analysis of the
best solution revealed task-specific TE-network clusters, suggesting that
within-task homogeneity and across-task heterogeneity were key to behavioral
success. Moreover, analysis of the ensemble of solutions revealed that
task-specificity of TE-network clusters correlated with fitness. This provides
an empirically testable hypothesis that links network structure to behavior.
Comment: Camera-ready version of the paper accepted for GECCO'1
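The agent in this work is controlled by an Izhikevich neural network. A minimal sketch of a single Izhikevich neuron under constant input current is shown below; the parameters a, b, c, d are the standard regular-spiking values from the Izhikevich model, not values taken from this paper, and the function name is illustrative:

```python
def izhikevich(I=10.0, T=500, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron for T ms under constant input I.

    Dynamics: v' = 0.04*v^2 + 5*v + 140 - u + I,  u' = a*(b*v - u),
    with reset v <- c, u <- u + d when v reaches the spike peak (30 mV).
    Returns the list of spike times in ms.
    """
    v, u = -65.0, b * -65.0  # start at resting potential
    spike_times = []
    for t in range(T):
        for _ in range(2):  # two 0.5 ms Euler substeps per ms for stability
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
            u += 0.5 * a * (b * v - u)
        if v >= 30:  # spike peak reached: record and reset
            spike_times.append(t)
            v, u = c, u + d
    return spike_times

# With sufficient drive the neuron spikes; with no input it stays at rest.
spikes = izhikevich(I=10.0)
```

In an evolved controller, many such neurons would be coupled through evolved synaptic weights; the spike trains they produce are the time series on which Transfer Entropy is then estimated.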
Evolutionary and Computational Advantages of Neuromodulated Plasticity
The integration of modulatory neurons into evolutionary artificial neural networks is proposed here. A model of modulatory neurons was devised to describe a plasticity mechanism at the low level of synapses and neurons. No initial assumptions were made on the network structures or on the system-level dynamics. This thesis studied the high-level system dynamics that emerged from the low-level mechanism of neuromodulated plasticity. Fully fledged control networks were designed by simulated evolution: an evolutionary algorithm could evolve networks of arbitrary size and topology using standard and modulatory neurons as building blocks. A set of dynamic, reward-based environments was implemented with the purpose of eliciting the emergence of learning and memory in networks. The evolutionary time and the performance of solutions were compared for networks that could or could not use modulatory neurons. The experimental results demonstrated that modulatory neurons provide an evolutionary advantage that increases with the complexity of the control problem. Networks with modulatory neurons were also observed to evolve alternative neural control structures with respect to networks without neuromodulation. Different network topologies were observed to lead to a computational advantage, such as faster input-output signal processing. The evolutionary and computational advantages induced by modulatory neurons strongly suggest the important role of neuromodulated plasticity for the evolution of networks that require temporal neural dynamics, adaptivity and memory functions.
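The low-level mechanism described above can be illustrated by a modulated Hebbian rule, in which a modulatory signal gates whether (and how strongly) a synapse updates. This is a generic sketch of neuromodulated plasticity, not the exact model devised in the thesis; the function name and learning rate are illustrative:

```python
def modulated_hebb(w, pre, post, m, eta=0.1):
    """Hebbian weight update gated by a modulatory signal m.

    w    : current synaptic weight
    pre  : presynaptic activity
    post : postsynaptic activity
    m    : modulatory signal (m = 0 freezes learning; m < 0 reverses it)
    eta  : learning rate
    """
    return w + m * eta * pre * post

# With the modulatory signal off, the weight is unchanged; with it on,
# correlated pre/post activity strengthens the synapse.
w_off = modulated_hebb(0.5, pre=1.0, post=1.0, m=0.0)
w_on = modulated_hebb(0.5, pre=1.0, post=1.0, m=1.0)
```

The gating term is what lets evolution place a few modulatory neurons that switch learning on only in rewarding contexts, which is one way the reward-based environments could elicit memory-like behavior.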
Non-Direct Encoding Method Based on Cellular Automata to Design Neural Network Architectures
Architecture design is a fundamental step in the successful application of feedforward neural networks. In most cases a large number of neural network architectures suitable for solving a problem exist, and architecture design is, unfortunately, still a human expert's job. It depends heavily on the expert and on a tedious trial-and-error process. In recent years, many works have focused on the automatic design of neural network architectures. Most of the methods are based on evolutionary computation paradigms. Some of the proposed methods are based on direct representations of the parameters of the network. These representations do not scale; representing large architectures requires very large structures. More interesting alternatives are indirect schemes, which codify a compact representation of the neural network. In this work, an indirect constructive encoding scheme is proposed. This scheme is based on cellular automata representations and is inspired by the idea that only a few seeds in the initial configuration of a cellular automaton can produce a wide variety of feedforward neural network architectures. The cellular approach is experimentally validated in different domains and compared with a direct codification scheme.
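As an illustration of how a few seeds can grow a large connectivity pattern, the sketch below runs a one-dimensional elementary cellular automaton from a single seed cell and reads each generated row as a unit mask for one layer of a feedforward architecture. Rule 90 and the row-as-layer-mask interpretation are arbitrary choices for the illustration, not the encoding scheme proposed in the paper:

```python
def eca_step(state, rule=90):
    """One step of an elementary cellular automaton with wrap-around neighbours.

    Each cell's next value is the bit of `rule` indexed by the 3-cell
    neighbourhood (left, centre, right) read as a binary number.
    """
    n = len(state)
    return [(rule >> ((state[(i - 1) % n] << 2)
                      | (state[i] << 1)
                      | state[(i + 1) % n])) & 1
            for i in range(n)]

def grow_masks(width, depth, seed_pos, rule=90):
    """Grow `depth` rows from a single seed; row k masks the units of layer k."""
    state = [0] * width
    state[seed_pos] = 1  # a single seed in the initial configuration
    rows = [state]
    for _ in range(depth - 1):
        state = eca_step(state, rule)
        rows.append(state)
    return rows

# A single seed unfolds into a distinct mask per layer; evolving only the
# seed positions (a compact genome) indirectly specifies the whole network.
masks = grow_masks(width=9, depth=5, seed_pos=4)
```

The point of such an indirect encoding is that the genome length is tied to the number of seeds, not to the number of weights, so the representation scales to large architectures where a direct encoding would not.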