Session 5: Development, Neuroscience and Evolutionary Psychology
Proceedings of the Pittsburgh Workshop in History and Philosophy of Biology, Center for Philosophy of Science, University of Pittsburgh, March 23-24, 2001. Session 5: Development, Neuroscience and Evolutionary Psychology.
Modular Networks: Learning to Decompose Neural Computation
Scaling model capacity has been vital in the success of deep learning. For a typical network, necessary compute resources and training time grow dramatically with model size. Conditional computation is a promising way to increase the number of parameters with a relatively small increase in resources. We propose a training algorithm that flexibly chooses neural modules based on the data to be processed. Both the decomposition and modules are learned end-to-end. In contrast to existing approaches, training does not rely on regularization to enforce diversity in module use. We apply modular networks to both image recognition and language modeling tasks, where we achieve superior performance compared to several baselines. Introspection reveals that modules specialize in interpretable contexts. Comment: NIPS 2018
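A minimal sketch of the conditional-computation idea described above, assuming a PyTorch-style setup; the module pool, controller, sizes, and the Gumbel-softmax selection are illustrative stand-ins, not the paper's actual training algorithm:

```python
# Illustrative sketch only: a layer that routes each example to one of several
# candidate modules via a learned controller. The Gumbel-softmax selection is a
# stand-in for end-to-end training; module/controller sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModularLayer(nn.Module):
    def __init__(self, dim, num_modules=4):
        super().__init__()
        # Pool of candidate modules; each is a small feed-forward block.
        self.module_pool = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_modules)]
        )
        # Controller scores each module for the current input.
        self.controller = nn.Linear(dim, num_modules)

    def forward(self, x, hard=True):
        logits = self.controller(x)                        # (batch, num_modules)
        # Differentiable (straight-through) module selection.
        weights = F.gumbel_softmax(logits, tau=1.0, hard=hard)
        outputs = torch.stack([m(x) for m in self.module_pool], dim=1)  # (batch, K, dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)

layer = ModularLayer(dim=32)
y = layer(torch.randn(8, 32))   # each example is (mostly) handled by one module
```

With hard selection, only the chosen module's output contributes per example, which is what makes the parameter count grow faster than the per-example compute.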
Evolution of Neural Networks for Helicopter Control: Why Modularity Matters
The problem of the automatic development of controllers for vehicles whose exact characteristics are not known is considered in the context of miniature helicopter flocking. A methodology is proposed in which neural-network-based controllers are evolved in a simulation using a dynamic model qualitatively similar to the physical helicopter. Several network architectures and evolutionary sequences are investigated, and two approaches are found that can evolve very competitive controllers. The division of the neural network into modules and of the task into incremental steps seems to be a precondition for success, and we analyse why this might be so.
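A rough sketch of the kind of evolutionary loop such a methodology implies, with the genome split across two hypothetical controller modules; the simulator, fitness function, and module sizes are placeholders, not the authors' setup:

```python
# Sketch of evolving weights for a two-module controller (e.g. one module for
# attitude stabilisation, one for position/flocking). The fitness function is a
# placeholder standing in for a run of the controller in a helicopter simulator.
import numpy as np

rng = np.random.default_rng(0)
POP, GENS, SIGMA = 50, 100, 0.1
N_ATTITUDE, N_POSITION = 40, 60           # weights per module (hypothetical sizes)

def fitness(genome):
    attitude_w = genome[:N_ATTITUDE]      # module 1: stabilisation
    position_w = genome[N_ATTITUDE:]      # module 2: navigation / flocking
    # Placeholder: evaluate the modular controller in simulation and return,
    # e.g., negative tracking error accumulated over the flight.
    return -np.sum(attitude_w**2) - np.sum(position_w**2)

population = rng.normal(size=(POP, N_ATTITUDE + N_POSITION))
for gen in range(GENS):
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-POP // 2:]]      # truncation selection
    children = parents + rng.normal(scale=SIGMA, size=parents.shape)
    population = np.concatenate([parents, children])

best = population[np.argmax([fitness(g) for g in population])]
```

Incremental task decomposition, as described in the abstract, would correspond to swapping in progressively harder fitness functions (hover, then waypoint tracking, then flocking) across stages of this loop.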
Guiding Neuroevolution with Structural Objectives
The structure and performance of neural networks are intimately connected, and by use of evolutionary algorithms, neural network structures optimally adapted to a given task can be explored. Guiding such neuroevolution with additional objectives related to network structure has been shown to improve performance in some cases, especially when modular neural networks are beneficial. However, apart from objectives aiming to make networks more modular, such structural objectives have not been widely explored. We propose two new structural objectives and test their ability to guide evolving neural networks on two problems which can benefit from decomposition into subtasks. The first structural objective guides evolution to align neural networks with a user-recommended decomposition pattern. Intuitively, this should be a powerful guiding target for problems where human users can easily identify a structure. The second structural objective guides evolution towards a population with a high diversity in decomposition patterns. This results in exploration of many different ways to decompose a problem, allowing evolution to find good decompositions faster. Tests on our target problems reveal that both methods perform well on a problem with a very clear and decomposable structure. However, on a problem where the optimal decomposition is less obvious, the structural diversity objective is found to outcompete other structural objectives, and this technique can even increase performance on problems without any decomposable structure at all.
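A small sketch of how the two structural objectives could be scored, assuming each network's decomposition is summarised as a module-assignment vector over its neurons; the metric definitions and names here are assumptions, not the paper's:

```python
# Hypothetical scoring of the two structural objectives described above,
# given a per-neuron module-assignment vector for each evolved network.
import numpy as np

def alignment_objective(assignment, recommended):
    """Fraction of neurons whose module matches a user-recommended pattern."""
    return float(np.mean(np.asarray(assignment) == np.asarray(recommended)))

def diversity_objective(assignment, population_assignments):
    """Mean (normalised) Hamming distance to the rest of the population,
    rewarding individuals whose decomposition differs from everyone else's."""
    others = np.asarray(population_assignments)
    return float(np.mean(others != np.asarray(assignment)))
```

In a multi-objective setting (e.g. an NSGA-II-style selection), each individual would be scored on both task fitness and one of these structural objectives, with selection keeping the non-dominated front.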