
On the emergence of rules in neural networks

By Stephen Jose Hanson and Michiro Negishi

Abstract

A simple associationist neural network learns to factor abstract rules (i.e., representations that accommodate unseen symbol sets as well as unseen but similar grammars). The network is shown to have the ability to transfer grammatical knowledge to both new symbol vocabularies and new grammars: it learns generalized abstract structures of the input rather than simply memorizing the input strings. These representations are context sensitive, hierarchical, and based on the state variable of the finite-state machines that the neural network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the network's internal representations, which allows new symbol sets to be encoded close to already learned symbol sets in the network's hidden-unit space. The results counter the argument that learning algorithms based on weight adaptation after each exemplar presentation (such as the long-term potentiation found in the mammalian nervous system) cannot in principle extract symbolic knowledge from positive examples, as prescribed by prevailing human linguistic theory and evolutionary psychology.
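The setup described in the abstract can be illustrated with a minimal sketch: a small Elman-style recurrent network trained by per-exemplar weight updates to predict the next symbol in strings generated by a finite-state grammar. The grammar, symbol set, network sizes, and truncated one-step gradient scheme below are all illustrative assumptions for the sketch, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy finite-state grammar (hypothetical, for illustration only):
# state -> list of (symbol, next_state); next_state None means "accept".
GRAMMAR = {
    0: [("A", 1), ("B", 2)],
    1: [("C", 2), ("A", 1)],
    2: [("D", None), ("B", 2)],
}
SYMBOLS = ["A", "B", "C", "D", "#"]  # '#' marks end of string
IDX = {s: i for i, s in enumerate(SYMBOLS)}

def sample_string():
    """Walk the FSM from state 0, picking transitions at random."""
    out, state = [], 0
    while state is not None:
        sym, state = GRAMMAR[state][rng.integers(len(GRAMMAR[state]))]
        out.append(sym)
    return out + ["#"]

def one_hot(sym):
    v = np.zeros(len(SYMBOLS))
    v[IDX[sym]] = 1.0
    return v

# Elman-style recurrent network: hidden state carries grammar context.
H = 8
Wxh = rng.normal(0, 0.5, (H, len(SYMBOLS)))   # input -> hidden
Whh = rng.normal(0, 0.5, (H, H))              # hidden -> hidden (recurrence)
Why = rng.normal(0, 0.5, (len(SYMBOLS), H))   # hidden -> next-symbol logits

def run_epoch(n_strings=200, lr=0.1, train=True):
    """Return mean next-symbol cross-entropy; update weights after each step."""
    global Wxh, Whh, Why
    total, count = 0.0, 0
    for _ in range(n_strings):
        s = sample_string()
        h = np.zeros(H)
        for t in range(len(s) - 1):
            x, target = one_hot(s[t]), IDX[s[t + 1]]
            h_new = np.tanh(Wxh @ x + Whh @ h)
            logits = Why @ h_new
            p = np.exp(logits - logits.max())
            p /= p.sum()
            total += -np.log(p[target])
            count += 1
            if train:  # one-step (truncated) gradient update per exemplar
                dlogits = p.copy()
                dlogits[target] -= 1.0
                dh = Why.T @ dlogits * (1 - h_new**2)
                Why -= lr * np.outer(dlogits, h_new)
                Wxh -= lr * np.outer(dh, x)
                Whh -= lr * np.outer(dh, h)
            h = h_new
    return total / count

before = run_epoch(train=False)   # loss with untrained weights
for _ in range(5):
    run_epoch()                   # per-exemplar weight adaptation
after = run_epoch(train=False)    # loss after training
```

After training, the hidden vectors `h` cluster by grammar state rather than by surface symbol, which is the kind of spatial, state-based internal representation the abstract credits with generalization to new symbol sets.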

Year: 2002
DOI identifier: 10.1162/089976602320264079
OAI identifier: oai:CiteSeerX.psu:10.1.1.131.9618
Provided by: CiteSeerX
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://www.rumba.rutgers.edu/p... (external link)

