A simplified drive-reinforcement model for unsupervised learning in artificial neural networks

Abstract

Partly in response to the apparent limitations of the explicit symbol processing used by traditional artificial intelligence research, there has been, within the last decade, a growing interest in artificial neural networks. This thesis focuses on the development and testing of a model for describing certain kinds of biological phenomena. The many artificial neural networks available may be classified into three types: (1) self-organizing networks, which have input but no feedback; (2) unsupervised networks, requiring minimal feedback (perhaps a signal indicating success or failure); and (3) supervised models, which employ far more extensive (and, I think, biologically implausible) feedback mechanisms. In this thesis I examine only models of the second type. The Rescorla-Wagner trial-level model gives a quantitative description of what happens as a result of a conditioning trial. But that model, along with more detailed temporal (i.e., intratrial) models such as a traditional Hebbian model and the Sutton-Barto model, makes predictions that are at odds with empirical data. Klopf's drive-reinforcement model is a much more robust account, from which I develop a simplified drive-reinforcement (SDR) model. I prepare a number of experiments to test my SDR model's correspondence with empirical data derived from animal learning experiments; I demonstrate that the model is capable of describing a wide variety of classical conditioning phenomena; and I show how the model may form the basis for instrumental conditioning as well. Finally, I add a simple motivating principle (or "drive") and show that such an addition seems to enhance the learning capabilities of the model.
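
The abstract contrasts two families of update rules: trial-level rules such as Rescorla-Wagner, and temporal (intratrial) rules such as Klopf's drive-reinforcement model. The Python sketch below is an illustration of those standard published rules, not of the thesis's SDR model itself; the function names, parameter values, and the simple acquisition loop are assumptions made for the example.

    # Two update rules named in the abstract, sketched for comparison.
    # This is NOT the thesis's simplified drive-reinforcement (SDR) model;
    # it illustrates the standard Rescorla-Wagner trial-level rule and a
    # Klopf-style temporal rule. All names and parameters are illustrative.

    def rescorla_wagner_trial(V, present, alpha, beta, lam):
        """One conditioning trial: each present CS moves toward the
        asymptote lam in proportion to the shared prediction error
        (lam - total associative strength of all present CSs)."""
        error = lam - sum(V[s] for s in present)
        for s in present:
            V[s] += alpha[s] * beta * error
        return V

    def drive_reinforcement_step(w, dx_hist, dy, c):
        """One time step of a Klopf-style drive-reinforcement update.
        dx_hist holds the last len(c) input-change vectors (newest last);
        only positive input changes (stimulus onsets) are eligible for
        learning, and each change is gated by the weight's current
        magnitude, so weights must start nonzero for learning to occur."""
        return [wi + dy * sum(c[j] * abs(wi) * max(0.0, dx_hist[-(j + 1)][i])
                              for j in range(len(c)))
                for i, wi in enumerate(w)]

    # Acquisition of a single CS over ten reinforced trials (trial-level rule):
    V = {"light": 0.0}
    for _ in range(10):
        V = rescorla_wagner_trial(V, ["light"], {"light": 0.3}, beta=1.0, lam=1.0)
    print(round(V["light"], 3))  # climbs toward the asymptote lam = 1.0

The key difference the sketch makes visible is the one the abstract turns on: the trial-level rule updates once per trial from a single prediction error, while the temporal rule updates at every time step from changes in inputs and output, which is what lets it address intratrial phenomena.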