
Models of Cognition: Neurological possibility does not indicate neurological plausibility

By Peter R. Krebs


Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence, and the study of artificial neural nets in particular, is seen as a major contributor in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Cognitive functions, like learning to speak or discovering syntactical structures in language, have been modeled, and these models are the basis for many claims about human cognitive capacities. Artificial neural nets (ANNs) have had some successes in the field of Artificial Intelligence, but the results from experiments with simple ANNs may have little value in explaining cognitive functions. The problem seems to lie in relating cognitive concepts that belong to the `top-down' approach to models grounded in the `bottom-up' connectionist methodology. Merging the two fundamentally different paradigms within a single model can obfuscate what is really modeled. When the tools (simple artificial neural networks) are mismatched with the problems (explaining aspects of higher cognitive functions), the resulting models have little value in terms of explaining functions of the human mind. The ability to learn functions from data-points makes ANNs very attractive analytical tools. These tools can be developed into valuable models if the data is adequate and a meaningful interpretation of the data is possible. The problem is that, with appropriate data and labels that fit the desired level of description, almost any function can be modeled. It is my argument that small networks offer a universal framework for modeling any conceivable cognitive theory, so that neurological possibility can be demonstrated easily with relatively simple models. However, a model demonstrating that a cognitive function can possibly be implemented using a distributed methodology does not necessarily add support to any claim or assumption that the cognitive function in question is neurologically plausible.
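The abstract's central observation — that even a very small ANN can be trained to fit almost any labelled function — can be illustrated with a minimal sketch. The network below (2 inputs, 8 sigmoid hidden units, 1 output, trained by plain gradient descent) learns the XOR function, a classic example of a mapping no single-layer network can represent. All sizes, learning rates, and names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data-points: four input patterns with their target labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random initial weights for a 2-input, 8-hidden, 1-output network.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass (squared-error loss, sigmoid derivatives).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int)
print(preds.ravel())  # once training converges, this matches the XOR targets
```

The point of the sketch mirrors the abstract's caution: that such a network fits the data shows only that a distributed implementation is *possible*; it says nothing about whether the brain computes the function this way.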

Topics: Neural Modelling, Philosophy of Science
Publisher: Lawrence Erlbaum
Year: 2005

