Structured Representations and Connectionist Models
Recent descriptions of connectionist models have argued that connectionist representations are unstructured, atomic, and bounded (e.g., Fodor & Pylyshyn, 1988). This paper describes results with recurrent networks and distributed representations which contest these claims. Simulation results are described which demonstrate that connectionist networks are able to learn representations which are richly structured and open-ended. These representations make use both of the high-dimensional space described by hidden unit patterns and of trajectories through this space in time, and possess a rich structure which reflects regularities in the input. Specific proposals are advanced which address the type/token distinction, the representation of hierarchical categories in language, and the representation of grammatical structure.
A Model of Event Knowledge
We present a connectionist model of event knowledge that is trained on examples of sequences of activities that are not explicitly labeled as events. The model learns co-occurrence patterns among the components of activities as they occur in the moment (entities, actions, and contexts), and also learns to predict sequential patterns of activities. In so doing, the model displays behaviors that in humans have been characterized as exemplifying inferencing of unmentioned event components, the prediction of upcoming components (which may or may not ever happen or be mentioned), reconstructive memory, and the ability to flexibly accommodate novel variations from previously encountered experiences. All of these behaviors emerge from what the model learns.
Representing Variable Information with Simple Recurrent Networks
How might simple recurrent networks represent co-occurrence relationships such as those holding between a script setting (e.g., "clothing store") and a script item ("shirt"), or those that specify the feature match between the gender of a pronoun and its antecedent? These issues were investigated by training a simple recurrent network to predict the successive items in various instantiations of a script. The network readily learned the script in that it performed flawlessly on the non-variable items and only activated the correct type of role filler in the variable slots. However, its ability to activate the target filler depended on the recency of the last script variable. The network's representation of the script can be viewed as a trajectory through multidimensional state space. Different versions of the script are represented as variations of the trajectory. This perspective suggests a new conception of how networks might represent a long-distance binding between two items. The binding must be seen as existing not between an antecedent and a target, but between a target item and the current global state.
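The architecture described above can be sketched as an Elman-style simple recurrent network, in which the previous hidden state serves as a context layer feeding back into the hidden units. The sketch below is illustrative only: the toy "script" vocabulary, layer sizes, and weight initialization are assumptions, not details from the paper, and no training loop is shown.

```python
# Minimal sketch of an Elman-style simple recurrent network (SRN).
# The vocabulary and dimensions are hypothetical, for illustration.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["enter-store", "browse", "try-shirt", "pay", "leave"]
V, H = len(VOCAB), 8  # vocabulary size, hidden units

# Randomly initialized weights (training omitted for brevity).
W_xh = rng.normal(0, 0.1, (H, V))   # input -> hidden
W_hh = rng.normal(0, 0.1, (H, H))   # context (previous hidden) -> hidden
W_hy = rng.normal(0, 0.1, (V, H))   # hidden -> output

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def srn_forward(sequence):
    """Run the SRN over a sequence of token indices, returning the
    predicted next-item probability distribution after each step.
    The hidden state h acts as the 'current global state' carrying
    information across time steps."""
    h = np.zeros(H)  # context units start at zero
    outputs = []
    for idx in sequence:
        h = np.tanh(W_xh @ one_hot(idx) + W_hh @ h)
        outputs.append(softmax(W_hy @ h))
    return outputs

preds = srn_forward([0, 1, 2])  # "enter-store", "browse", "try-shirt"
```

After training (not shown), the distribution at each step would concentrate on plausible continuations of the script; the trajectory of hidden states `h` over time is the state-space trajectory the abstract describes.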
Prediction-Based Learning and Processing of Event Knowledge.
Knowledge of common events is central to many aspects of cognition. Intuitively, it seems as though events are linear chains of the activities of which they are composed. In line with this intuition, a number of theories of the temporal structure of event knowledge have posited mental representations (data structures) consisting of linear chains of activities. Competing theories focus on the hierarchical nature of event knowledge, with representations comprising ordered scenes, and chains of activities within those scenes. We present evidence that the temporal structure of events is typically not well defined, but is much richer and more variable both within and across events than has usually been assumed. We also present evidence that prediction-based neural network models can learn these rich and variable event structures and produce behaviors that reflect human performance. We conclude that knowledge of the temporal structure of events in the human mind emerges as a consequence of prediction-based learning.
The Wind Chilled the Spectators, but the Wine Just Chilled: Sense, Structure, and Sentence Comprehension
Anticipation plays a role in language comprehension. In this article, we explore the extent to which verb sense influences expectations about upcoming structure. We focus on change-of-state verbs like shatter, which have different senses that are expressed in either transitive or intransitive structures, depending on the sense that is used. In two experiments we influence the interpretation of verb sense by manipulating the thematic fit of the grammatical subject as cause or affected entity for the verb, and test whether readers' expectations for a transitive or intransitive structure change as a result. This sense-biasing context influenced reading times in the postverbal regions. Reading times for transitive sentences were faster following good-cause than good-theme subjects, but the opposite pattern was found for intransitive sentences. We conclude that readers use sense-contingent subcategorization preferences during on-line comprehension.
Coherence and Coreference Revisited
For more than three decades, research into the psycholinguistics of pronoun interpretation has argued that hearers use various interpretation 'preferences' or 'strategies' that are associated with specific linguistic properties of antecedent expressions. This focus is a departure from the type of approach outlined in Hobbs (1979), who argues that the mechanisms supporting pronoun interpretation are driven predominantly by semantics, world knowledge, and inference, with particular attention to how these are used to establish the coherence of a discourse. On the basis of three new experimental studies, we evaluate a coherence-driven analysis with respect to four previously proposed interpretation biases, based on grammatical role parallelism, thematic roles, implicit causality, and subjecthood, and argue that the coherence-driven analysis can explain the underlying source of the biases and predict in what contexts evidence for each will surface. The results further suggest that pronoun interpretation is incrementally influenced by probabilistic expectations that hearers have regarding what coherence relations are likely to ensue, together with their expectations about what entities will be mentioned next, which, crucially, are conditioned on those coherence relations.