3 research outputs found

    Distributed representations accelerate evolution of adaptive behaviours

    Animals with rudimentary innate abilities require substantial learning to transform those abilities into useful skills, where a skill can be considered a set of sensory-motor associations. Using linear neural network models, it is proved that if skills are stored as distributed representations, then within-lifetime learning of part of a skill can induce automatic learning of the remaining parts of that skill. More importantly, it is shown that this "free-lunch" learning (FLL) is responsible for accelerated evolution of skills, when compared with networks which either (1) cannot benefit from FLL or (2) cannot learn. Specifically, it is shown that FLL accelerates the appearance of adaptive behaviour, both in its innate form and as FLL-induced behaviour, and that FLL can accelerate the rate at which learned behaviours become innate.
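    The partial-learning effect described in this abstract can be illustrated with a minimal NumPy sketch (the network, inputs, and targets below are invented for illustration and are not taken from the paper): a weight vector storing two input-output associations as one distributed representation is perturbed, then retrained exactly on only the first association, and the error on the untrained second association also falls.

    ```python
    import numpy as np

    # A linear "network": scalar output y = w . x.
    # Two sensory-motor associations (one "skill") stored in the same
    # weights -- a distributed representation, since both share every weight.
    x1, y1 = np.array([1.0, 1.0]), 0.0
    x2, y2 = np.array([1.0, 0.0]), 0.0
    w = np.array([0.0, 0.0])            # solves both associations exactly

    w = w + np.array([1.0, 1.0])        # perturb the stored skill

    def err(w, x, y):
        return abs(w @ x - y)

    before = err(w, x2, y2)             # error on the part we will NOT retrain

    # Relearn association 1 only, with the minimum-norm (least-squares) update:
    w = w + (y1 - w @ x1) * x1 / (x1 @ x1)

    after = err(w, x2, y2)
    print(before, after)                # 1.0 0.0 -- part 2 recovered "for free"
    ```

    The effect depends on the two associations sharing weights: if x1 and x2 were orthogonal, retraining on the first would leave the error on the second unchanged.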

    Relearning and Evolution in Neural Networks.

    of neural networks that evolve (to get fitter at one task) at the population level and may also learn (a different task) at the individual level. One result stated was that average fitness at the evolutionary task is improved when lifetime learning at the different task is introduced. A different explanation will be proposed here for much of the data there presented: that the main results are an artefact of the unconventional evolutionary algorithm used, and can be interpreted rather differently as a form of relearning. Asexual evolution (mutation only) was used on a population of 100 individuals or animats: the connection weights were genetically specified for a feedforward network for each individual, which transformed sensory inputs of the animat into movements over a grid-like environment on which food had to be found. Mutation of offspring of selected parents perturbed the values of 5 of their weights chosen at random. The selective pressure used was exceptionally strong. Whereas in population genetics selective differences are typically of the order of 1%, and with conventional genetic algorithms selective pressures are kept low to avoid premature convergence, here the fittest members have 500% more offspring than the average: the top 20 out of 100 each have 5 offspring. In the absence of mutation such selection results in the elite taking over the whole population in just 3 generations (from 1% to 5% to 25% to 100%)
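    The takeover arithmetic quoted above (1% to 5% to 25% to 100% in three generations) can be checked with a small simulation of the selection scheme as described, minus mutation; the genotype encoding below is a placeholder, not the paper's representation.

    ```python
    # Selection scheme as described: each generation, the top 20 of 100
    # individuals each leave 5 offspring; no mutation. Track how fast a
    # single fitter "elite" genotype takes over the population.
    pop = [1] + [0] * 99                 # 1 = elite genotype, 0 = the rest
    history = [sum(pop)]                 # number of elite copies per generation

    for _ in range(3):
        ranked = sorted(pop, reverse=True)            # elite ranks highest
        parents = ranked[:20]                         # top 20 selected
        pop = [g for g in parents for _ in range(5)]  # 5 offspring each
        history.append(sum(pop))

    print(history)   # [1, 5, 25, 100]: fixation in 3 generations
    ```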

    Relearning and Evolution in Neural Networks

    this paper (Parisi, Nolfi, & Cecconi, 1992). The performance of the elite did not improve when lifetime learning of the second task was introduced, whereas average performance did improve. It seems clear that the effect of lifetime learning was merely to go some way towards restoring performance of networks which had had their weights perturbed (by mutation) away from trained (through evolution) values --- a form of relearning. The extreme convergence of the population around the clustered elite members of the previous generation should be borne in mind when reading from (Nolfi et al., 1994), p. 22: "The offspring of a reproducing individual occupy initial positions in weight space that are deviations (due to mutations) from the position occupied by their parent at birth (i.e., prior to learning)." One form of relearning in networks was analysed in (Hinton & Plaut, 1987). In that case a network is first trained by some learning algorithm on a set of input/output pairs; the weights are then perturbed. After retraining on a subset of the original training set, it is found that performance improves also on the balance of the original training set. The present case differs from this in that the lifetime learning is on a fresh task, rather than on a subset of the original task. Recently just such an effect was predicted and observed in networks (Harvey & Stone, 1995). When good performance on one task is degraded by random perturbations of the weights, then in general training on any unrelated second task can be expected to improve, at least initially, the performance on the first task. [Figure 1: A two-dimensional sketch of weight space.] To briefly summarise the reasons for this, consider the diagram, which represents the weight space of a network in just 2 ..
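    The geometric argument sketched by the weight-space figure can be reproduced with a toy calculation (the tasks and numbers below are invented for illustration): weights solving task A are perturbed, then trained to an exact solution of an unrelated task B by projection onto B's solution set; because the two task constraints are not orthogonal, this projection also reduces the error on A.

    ```python
    import numpy as np

    # Toy 2-D weight space, as in Figure 1. Each "task" is one linear
    # constraint w . a = 0; their shared solution is the origin.
    a = np.array([1.0, 0.0])                 # task A: w[0] = 0
    b = np.array([1.0, 1.0]) / np.sqrt(2)    # task B: w[0] + w[1] = 0
                                             # (unrelated, but not orthogonal to A)

    w = np.array([1.0, 1.0])                 # solution perturbed off the origin

    err_A_before = abs(w @ a)                # 1.0

    # Train to perfection on task B only: project w onto B's solution line.
    w = w - (w @ b) * b

    err_A_after = abs(w @ a)
    print(err_A_before, err_A_after)         # task-A error falls from 1.0 to ~0
    ```

    The improvement is only "in general" and initial: a perturbation lying along B's solution line (e.g. w = (1, -1)) would be untouched by training on B, leaving the task-A error unchanged.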