    Genetic Assimilation and Canalisation in the Baldwin Effect

    The Baldwin Effect indicates that individually learned behaviours acquired during an organism’s lifetime can influence the evolutionary path taken by a population, without any direct Lamarckian transfer of traits from phenotype to genotype. Several computational studies modelling this effect have included complications that restrict its applicability. Here we present a simplified model that reveals the essential mechanisms and highlights several conceptual issues that have not been clearly defined in prior literature. In particular, we suggest that canalisation and genetic assimilation, often conflated in previous studies, are separate concepts, and that the former is not actually required for non-heritable phenotypic variation to guide genetic variation. Additionally, learning, often considered essential for the Baldwin Effect, can be replaced with a more general phenotypic plasticity model. These simplifications potentially permit the Baldwin Effect to operate in much more general circumstances.
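    The dynamics the abstract describes can be illustrated with a minimal Hinton–Nowlan-style sketch, assuming a toy setup: a bit-string target phenotype, genotypes whose alleles are fixed (0/1) or plastic (None), and random lifetime variation standing in for learning. The target, population size, and trial counts below are illustrative assumptions, not the paper's actual model.

    ```python
    import random

    random.seed(0)

    TARGET = [1] * 10          # hypothetical target phenotype
    POP_SIZE = 50
    TRIALS = 20                # lifetime plasticity trials per individual

    def fitness(genotype):
        """Lifetime fitness: credit if random phenotypic variation
        (standing in for learning) ever hits the target."""
        for trial in range(TRIALS):
            # plastic alleles (None) are filled in randomly each trial
            phenotype = [g if g is not None else random.randint(0, 1)
                         for g in genotype]
            if phenotype == TARGET:
                # earlier success = higher fitness
                return 1 + (TRIALS - trial)
        return 1

    def evolve(generations=30):
        # genotype alleles: 0, 1, or None (plastic / undetermined)
        pop = [[random.choice([0, 1, None]) for _ in range(len(TARGET))]
               for _ in range(POP_SIZE)]
        for _ in range(generations):
            scores = [fitness(ind) for ind in pop]
            # fitness-proportional selection with per-allele mutation
            pop = [[a if random.random() > 0.02 else random.choice([0, 1, None])
                    for a in random.choices(pop, weights=scores)[0]]
                   for _ in range(POP_SIZE)]
        return pop

    pop = evolve()
    fixed = sum(a == 1 for ind in pop for a in ind) / (POP_SIZE * len(TARGET))
    print(f"fraction of correct fixed alleles: {fixed:.2f}")
    ```

    Over generations, selection tends to replace plastic alleles with genetically fixed correct ones even though nothing learned is ever written back to the genotype — the assimilation effect the abstract argues does not require canalisation.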

    Purposes and practices of text retrieval


    Language Independent Answer Prediction from the Web

    This work presents a strategy that aims to extract and rank predicted answers from the web based on the eigenvalues of a specially designed matrix. This matrix models the strength of the syntactic relations between words by means of the frequency of their relative positions in sentences extracted from web snippets. We assess the ranking of predicted answers by extracting answer candidates for three different kinds of questions. Owing to its low dependence on any particular language, we also apply our strategy to questions in four different languages: English, German, Spanish, and Portuguese.
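    A minimal sketch of the general idea: build a word-relation matrix from relative positions of words in snippet sentences, then score candidates with the principal eigenvector of that matrix. The toy snippets, the 1/distance weighting, and the symmetrisation step are illustrative assumptions, not the paper's exact matrix design.

    ```python
    import numpy as np

    # Toy "web snippets" (illustrative, not the paper's corpus)
    snippets = [
        "einstein developed the theory of relativity",
        "the theory of relativity was developed by einstein",
        "einstein won the nobel prize",
    ]
    candidates = ["einstein", "relativity", "nobel"]

    vocab = sorted({w for s in snippets for w in s.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)

    # Relation strength approximated by frequency of relative positions:
    # word pairs that occur closer together contribute more weight.
    M = np.zeros((n, n))
    for s in snippets:
        words = s.split()
        for i, wi in enumerate(words):
            for j, wj in enumerate(words):
                if i != j:
                    M[idx[wi], idx[wj]] += 1.0 / abs(i - j)

    # Principal eigenvector of the symmetrised matrix scores each word.
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    scores = np.abs(vecs[:, -1])      # eigenvector of the largest eigenvalue

    ranked = sorted(candidates, key=lambda w: -scores[idx[w]])
    print(ranked)
    ```

    Because the matrix is built purely from word positions, nothing in this pipeline is tied to a specific language, which is the property the abstract exploits to handle English, German, Spanish, and Portuguese questions with one method.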

    Evolution and Learning in Neural Networks: Dynamic Correlation, Relearning and Thresholding

    This contribution revisits an earlier observation that the average performance of a population of neural networks evolved to solve one task is improved by lifetime learning on a different task. Two extant, and very different, explanations of this phenomenon are examined: dynamic correlation and relearning. Experimental results are presented which suggest that neither of these hypotheses can fully explain the phenomenon. A new explanation of the effect is proposed and empirically justified. This explanation is based on the fact that in these, and many other related studies, real-valued neural network outputs are thresholded to provide discrete actions. Such thresholding produces a particular type of fitness landscape in which lifetime learning can reduce the deleterious effects of mutation, and therefore increase mean population fitness. © 2000, Sage Publications. All rights reserved.
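    The thresholding argument can be demonstrated with a toy single-unit "network", assuming a simple delta-rule relearning step and Gaussian weight mutations (all parameters below are illustrative, not taken from the paper). Mutations that leave the pre-threshold output on the correct side of zero never change the discrete action, producing a flat region of the landscape; mutants pushed across the threshold can be pulled back by lifetime learning.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    x = np.array([1.0, 1.0])           # fixed input
    w = np.array([0.3, 0.2])           # weights comfortably above threshold
    target_action = 1

    def action(weights):
        # real-valued output thresholded to a discrete action
        return 1 if float(weights @ x) > 0.0 else 0

    def relearn(weights, steps=10, lr=0.1):
        # lifetime learning: delta-rule nudge of the output toward 1
        w = weights.copy()
        for _ in range(steps):
            out = float(w @ x)
            w += lr * (1.0 - out) * x
        return w

    # Mutations that stay on the same side of the threshold leave the
    # action unchanged (the flat part of the landscape) ...
    mutants = [w + rng.normal(0, 0.4, size=2) for _ in range(1000)]
    flat = sum(action(m) == target_action for m in mutants)

    # ... and mutants that crossed the threshold can be rescued by learning.
    bad = [m for m in mutants if action(m) != target_action]
    rescued = sum(action(relearn(m)) == target_action for m in bad)

    print(f"{flat}/1000 mutants unaffected, "
          f"{rescued}/{len(bad)} harmful mutants rescued by relearning")
    ```

    Relearning rescues essentially every harmful mutant here, so a learning population sees a much gentler effective mutation load than a non-learning one — the mechanism the abstract proposes for the increase in mean population fitness.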