488 research outputs found

    How to shift bias: Lessons from the Baldwin effect

    An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses, so factors other than the data must determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis. Algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias-shifting algorithms. The Baldwin effect was proposed in 1896 to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, constructed explicitly to illustrate the lessons the Baldwin effect holds for research on bias-shifting algorithms. The main lesson is that a good strategy for shifting bias in a learning algorithm appears to be to begin with a weak bias and gradually shift to a strong bias.
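
    A minimal sketch of the Hinton-Nowlan setup the abstract builds on may help make this concrete. The paper's own variation is not specified here, so this follows the 1987 original's needle-in-a-haystack scheme (20 loci, 1,000 learning trials per lifetime); the population size and generation count are scaled down here to keep the run quick.

```python
import random

# Hinton-Nowlan-style model: 20 loci, needle-in-a-haystack fitness.
LOCI, POP, GENS, TRIALS = 20, 300, 40, 1000

def random_genome():
    # each locus is fixed-correct (1), fixed-wrong (0), or plastic ('?')
    return [random.choice([0, 1, '?']) for _ in range(LOCI)]

def fitness(genome):
    # a fixed-wrong allele can never be corrected by learning
    if 0 in genome:
        return 1.0
    plastic = genome.count('?')
    # lifetime learning: random guesses for the plastic loci;
    # finding the target early earns a larger fitness bonus
    for trial in range(TRIALS):
        if all(random.random() < 0.5 for _ in range(plastic)):
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

pop = [random_genome() for _ in range(POP)]
for gen in range(GENS):
    scores = [fitness(g) for g in pop]
    children = []
    for _ in range(POP):
        # fitness-proportional selection and one-point crossover
        a, b = random.choices(pop, weights=scores, k=2)
        cut = random.randrange(1, LOCI)
        children.append(a[:cut] + b[cut:])
    pop = children
    plasticity = sum(g.count('?') for g in pop) / (LOCI * POP)
    print(f"gen {gen:2d}  mean fraction of plastic loci {plasticity:.2f}")
```

    The expected dynamics mirror the paper's lesson: plastic loci (a weak bias) are favored early because they make the needle findable by learning, then get squeezed out as correct fixed alleles (a strong bias) spread through the population.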

    Impact of alife simulation of Darwinian and Lamarckian evolutionary theories

    Dissertation presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. To this day, the scientific community has firmly rejected the Theory of Inheritance of Acquired Characteristics, a theory mostly associated with the name of Jean-Baptiste Lamarck (1744-1829). Though largely dismissed when applied to biological organisms, this theory has found a place in the young discipline of Artificial Life. Based on two abstract models of Darwinian and Lamarckian evolution built using neural networks and genetic algorithms, this research aims to gauge the potential impact of implementing Lamarckian knowledge inheritance across disciplines. To obtain our results, we conducted a focus group discussion among experts in biology, computer science, and philosophy, and used their opinions as qualitative data in our research. From this procedure, we identified implications of such an implementation in each of these disciplines. In synthetic biology, it means that we could engineer organisms precisely to our specific needs; at the moment, we can think of better drugs, greener fuels, and dramatic changes in the chemical industry. In computer science, Lamarckian evolutionary algorithms have been used successfully for many years, but their application to strong ALife can only be extrapolated from the existing roadmaps of futurists. In philosophy, creating artificial life seems consistent with nature, and even with God, if there is one. At the same time, this implementation may contradict the concept of free will, defined as the capacity of an agent to make choices whose outcomes are not determined by past events. This study has certain limitations; a larger focus group and better-prepared participants would yield more precise results.
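
    The distinction the dissertation's two models rest on can be illustrated with a hypothetical sketch. This is not the dissertation's actual setup: a toy weight vector stands in for its neural networks, and the only difference between the two regimes is whether the results of lifetime learning are written back into the inherited genome.

```python
import random

DIM, POP, GENS, STEPS = 8, 30, 40, 25

def fitness(w):
    # toy stand-in for a trained network's performance:
    # closeness to a fixed target vector (0.5 at every position)
    return -sum((wi - 0.5) ** 2 for wi in w)

def learn(w):
    # lifetime learning: simple hill-climbing from the inherited weights
    best = list(w)
    for _ in range(STEPS):
        cand = [wi + random.gauss(0, 0.05) for wi in best]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def generation(pop, lamarckian):
    scored = []
    for genome in pop:
        phenotype = learn(genome)
        # selection acts on post-learning fitness in both regimes;
        # only what gets inherited differs (Lamarck writes learning back)
        heritable = phenotype if lamarckian else genome
        scored.append((fitness(phenotype), heritable))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    parents = [g for _, g in scored[: POP // 2]]
    # truncation selection plus Gaussian mutation
    return [[wi + random.gauss(0, 0.02) for wi in random.choice(parents)]
            for _ in range(POP)]

for lamarckian in (False, True):
    random.seed(0)  # identical starting conditions for both regimes
    pop = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(GENS):
        pop = generation(pop, lamarckian)
    best = max(fitness(learn(g)) for g in pop)
    print("Lamarckian" if lamarckian else "Darwinian ", f"best {best:.4f}")
```

    On a static task like this one, the Lamarckian regime typically converges faster, which is consistent with the abstract's remark that Lamarckian evolutionary algorithms have been used successfully in computer science for years.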

    A study of the Lamarckian evolution of recurrent neural networks


    Evolutionary Algorithms for Reinforcement Learning

    There are two distinct approaches to solving reinforcement learning problems, namely, searching in value function space and searching in policy space. Temporal difference methods and evolutionary algorithms are well-known examples of these approaches. Kaelbling, Littman and Moore recently provided an informative survey of temporal difference methods. This article focuses on the application of evolutionary algorithms to the reinforcement learning problem, emphasizing alternative policy representations, credit assignment methods, and problem-specific genetic operators. Strengths and weaknesses of the evolutionary approach to reinforcement learning are presented, along with a survey of representative applications.
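
    To make the policy-space approach concrete, here is a minimal, self-contained sketch on an invented corridor task (not from the article): a tabular policy, one of the representations the survey discusses, is evolved directly by elitist selection and mutation, with no value function anywhere.

```python
import random

# Toy episodic task: an agent on a line of positions 0..N must reach
# position N within T steps; small cost per step, +1 for the goal.
N, T = 10, 20

def episode(policy):
    # tabular policy: one action (+1 right / -1 left) per position
    pos, total = 0, 0.0
    for _ in range(T):
        pos = max(0, min(N, pos + policy[pos]))
        if pos == N:
            return total + 1.0
        total -= 0.01
    return total

def mutate(policy, rate=0.1):
    # flip each action with a small probability
    return [(-a if random.random() < rate else a) for a in policy]

POP, GENS = 20, 30
pop = [[random.choice([-1, 1]) for _ in range(N + 1)] for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(pop, key=episode, reverse=True)
    elite = ranked[: POP // 4]
    # keep the elite, refill the population with mutated copies
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]
    print(f"gen {gen:2d}  best return {episode(ranked[0]):+.2f}")
```

    Credit assignment here is the simplest possible choice: the whole-episode return is credited to the entire policy. The article surveys finer-grained alternatives, along with problem-specific genetic operators.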