How to shift bias: Lessons from the Baldwin effect

By Peter D. Turney

Abstract

An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses; therefore, there must be factors other than the data that determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis. Algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias shifting algorithms. The Baldwin effect was proposed in 1896 to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, which we constructed explicitly to illustrate the lessons that the Baldwin effect has for research in bias shifting algorithms. The main lesson is that a good strategy for shifting bias in a learning algorithm appears to be to begin with a weak bias and gradually shift to a strong bias.
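
Hinton and Nowlan's 1987 model, mentioned above, can be summarised in a short simulation. The Python sketch below is a minimal illustration in that spirit, not the variation studied in this paper: the genome length, population size, learning-trial budget, allele probabilities, and fitness formula are all illustrative assumptions. Each genome mixes fixed alleles (0 or 1) with plastic alleles ('?') that lifetime learning must fill in by random guessing; individuals that learn the target configuration sooner reproduce more.

import random

# Illustrative parameters, loosely in the spirit of Hinton and Nowlan (1987);
# they are assumptions for this sketch, not values taken from the paper above.
GENOME_LEN = 20          # length of the target configuration
POP_SIZE = 1000          # individuals per generation
LEARNING_TRIALS = 1000   # random guesses allowed per lifetime
GENERATIONS = 20
TARGET = [1] * GENOME_LEN

def random_genome():
    # Alleles are 0 or 1 (fixed) or '?' (plastic, settled by lifetime learning);
    # plastic alleles are made twice as likely as each fixed value.
    return random.choices(['?', 0, 1], weights=[2, 1, 1], k=GENOME_LEN)

def fitness(genome):
    # Correct fixed alleles act as a strong bias; plastic alleles are a weak
    # bias that learning (random guessing) must resolve during the lifetime.
    if any(a != '?' and a != t for a, t in zip(genome, TARGET)):
        return 1.0                      # a wrong fixed allele: learning cannot help
    plastic = [i for i, a in enumerate(genome) if a == '?']
    if not plastic:
        return 20.0                     # target already innate: maximal fitness
    want = sum(TARGET[i] << j for j, i in enumerate(plastic))
    for trial in range(LEARNING_TRIALS):
        if random.getrandbits(len(plastic)) == want:   # one random guess
            # Earlier success leaves more of the lifetime for reproduction.
            return 1.0 + 19.0 * (LEARNING_TRIALS - trial) / LEARNING_TRIALS
    return 1.0

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scores = [fitness(g) for g in population]
    parents = random.choices(population, weights=scores, k=2 * POP_SIZE)
    population = [crossover(parents[2 * i], parents[2 * i + 1])
                  for i in range(POP_SIZE)]
    alleles = POP_SIZE * GENOME_LEN
    plastic = sum(g.count('?') for g in population) / alleles
    correct = sum(a == t for g in population for a, t in zip(g, TARGET)) / alleles
    print(f"generation {gen:2d}  mean fitness {sum(scores) / POP_SIZE:5.2f}  "
          f"plastic {plastic:.2f}  fixed correct {correct:.2f}")

In a typical run the fraction of correct fixed alleles rises while the fraction of plastic alleles falls: traits first acquired by learning become innate without any Lamarckian inheritance, and the population as a whole moves from a weak bias toward a strong bias, which is the strategy the abstract identifies as the main lesson for bias shifting learners.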

Topics: Evolution, Machine Learning, Statistical Models
Year: 1996
OAI identifier: oai:cogprints.org:1818

Citations

  1. (1994). A conservation law for generalization performance.
  2. (1986). A general framework for induction and a study of selective induction.
  3. (1896). A new factor in evolution.
  4. (1993). Adding learning to the cellular development of neural networks: Evolution and the Baldwin effect.
  5. (1989). Building robust learning systems by combining induction and optimization.
  6. (1995). Cost-sensitive classification: Empirical evaluation of a hybrid genetic decision tree induction algorithm.
  7. (1995). Evolutionary design of neural architectures: A preliminary taxonomy and guide to literature.
  8. (1995). Hybrid learning using genetic algorithms and decision trees for pattern classification.
  9. (1995). Inductive policy: The pragmatics of bias selection.
  10. (1991). Interactions between learning and evolution.
  11. (1994). Lamarckian evolution, the Baldwin effect and function optimization.
  12. (1994). Learning and evolution in neural networks.
  13. (1995). Learning and evolution: A quantitative genetics approach.
  14. (1994). Off-training set error and a priori distinctions between learning algorithms.
  15. (1992). On the connection between in-sample testing and generalization error.
  16. (1996). Ontogenic and phylogenic variation.
  17. (1993). Selecting a classification method by cross-validation.
  18. (1992). Simplifying neural networks by soft weight-sharing.
  19. (1994). The Language Instinct: How the Mind Creates Language.
