Emergence of Bayesian Structures from Recurrent Networks

Abstract

The problem of representational form has always limited the applicability of cognitive models: where symbolic representations have succeeded, distributed representations have failed, and vice versa. Hybrid modeling is thus a promising avenue, but it brings its share of new problems; for instance, it doubles the number of necessary assumptions. To counter this problem, we believe that one network should generate the other, so that specific assumptions are required for only one network. In the present project, we plan to use a recurrent network to generate a Bayesian network (a minimal sketch of this generation step appears at the end of this section). The former will model low-level cognition while the latter will represent higher-level cognition. Moreover, both models will be active in every task and will need to communicate in order to produce a single answer.

General Problem

In cognitive science, the problem of representational form is crucial. During the cognitive revolution, the computer metaphor was used to model human intelligence, which was thus seen as a set of symbol-manipulating syntactic processes (Turing, 1936). These processes were modeled as series of conjunctive conditions and consequent actions (known as "IF-THEN" rules). This modeling approach is referred to as the classical view (Russell & Norvig, 1995). In the late seventies, another metaphor became increasingly popular for modeling cognitive processes, namely the brain. The connectionist (or "neural") networks proposed during this period were mostly unsupervised networks, either competitive (Grossberg, 1976; Kohonen
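To make the proposal concrete, the following is a minimal, hypothetical sketch of the generation step, not the project's actual method: an Elman-style recurrent network with fixed random weights is run over input sequences, its hidden units are binarized into on/off events, and pairwise conditional probabilities over those events are estimated as the building blocks of a Bayesian network's conditional probability tables. The network sizes, the median-based binarization, and the 0.2 edge threshold are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' published method): read a
# Bayesian-network-style conditional probability structure off the
# hidden states of a small Elman recurrent network.
import numpy as np

rng = np.random.default_rng(0)

# --- A minimal Elman recurrent network with fixed random weights ---
n_in, n_hid = 4, 6                            # illustrative sizes
W_in = rng.normal(0, 1.0, (n_hid, n_in))      # input -> hidden weights
W_rec = rng.normal(0, 0.5, (n_hid, n_hid))    # hidden -> hidden (recurrent)

def run_rnn(inputs):
    """Return the hidden-state trajectory for one sequence of inputs."""
    h = np.zeros(n_hid)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

# --- Low-level network: collect hidden states over random sequences ---
sequences = rng.integers(0, 2, (200, 10, n_in)).astype(float)
states = np.vstack([run_rnn(seq) for seq in sequences])

# Binarize each hidden unit: "on" when above its median activation.
events = (states > np.median(states, axis=0)).astype(int)

# --- Higher-level network: estimate P(child on | parent) per unit pair,
# the building block of a Bayesian network's conditional probability tables.
def cpt(parent, child):
    """Return P(child on | parent on) and P(child on | parent off)."""
    on = events[:, parent] == 1
    return events[on, child].mean(), events[~on, child].mean()

for parent in range(n_hid):
    for child in range(n_hid):
        if parent == child:
            continue
        p_on, p_off = cpt(parent, child)
        # A large gap suggests a candidate directed edge parent -> child.
        if abs(p_on - p_off) > 0.2:
            print(f"h{parent} -> h{child}: "
                  f"P(on|on)={p_on:.2f}, P(on|off)={p_off:.2f}")
```

In a full system the candidate edges would still need to be pruned for acyclicity and scored with a proper structure-learning criterion; the sketch only illustrates how symbolic, probabilistic structure might be extracted from distributed activity.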
