On sense and reference: examining the functional neuroanatomy of referential processing
In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., "Ronald told Frank that he..."), referentially failing pronouns (e.g., "Rose told Emily that he...") or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension. (c) 2007 Elsevier Inc. All rights reserved.
Anaphoric Structure Emerges Between Neural Networks
Pragmatics is core to natural language, enabling speakers to communicate
efficiently with structures like ellipsis and anaphora that can shorten
utterances without loss of meaning. These structures require a listener to
interpret an ambiguous form - like a pronoun - and infer the speaker's intended
meaning - who that pronoun refers to. Despite potential to introduce ambiguity,
anaphora is ubiquitous across human language. In an effort to better understand
the origins of anaphoric structure in natural language, we look to see if
analogous structures can emerge between artificial neural networks trained to
solve a communicative task. We show that: first, despite the potential for
increased ambiguity, languages with anaphoric structures are learnable by
neural models. Second, anaphoric structures emerge between models 'naturally'
without need for additional constraints. Finally, introducing an explicit
efficiency pressure on the speaker increases the prevalence of these
structures. We conclude that certain pragmatic structures straightforwardly
emerge between neural networks, without explicit efficiency pressures, but that
the competing needs of speakers and listeners conditions the degree and nature
of their emergence.Comment: Published as a conference paper at the Annual Meeting of the
Cognitive Science Society 2023: 6 Pages, 3 Figures, code available at
https://github.com/hcoxec/emerg
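The pragmatic trade-off the abstract describes can be illustrated with a hand-coded toy (this is not the paper's neural setup; the referent names and the `PRO` token are invented for illustration): a speaker may replace an immediately repeated referent with an ambiguous anaphor-like token, and a listener resolves that token to the most recent referent, so messages get shorter without loss of meaning.

```python
# Toy referential game: an anaphor-like token "PRO" shortens messages
# while the listener can still recover the intended referents.
# All names here are hypothetical, not from the paper's experiments.

def speak(discourse, use_anaphora=False):
    """Map a sequence of referents to a message, one token per referent.
    With use_anaphora, an immediately repeated referent becomes 'PRO'."""
    message, prev = [], None
    for referent in discourse:
        message.append("PRO" if use_anaphora and referent == prev else referent)
        prev = referent
    return message

def listen(message):
    """Resolve each token; 'PRO' resolves to the most recent referent."""
    resolved, prev = [], None
    for token in message:
        referent = prev if token == "PRO" else token
        resolved.append(referent)
        prev = referent
    return resolved

discourse = ["ronald", "ronald", "frank"]
plain = speak(discourse)
short = speak(discourse, use_anaphora=True)
assert listen(plain) == discourse
assert listen(short) == discourse                      # meaning preserved
assert sum(map(len, short)) < sum(map(len, plain))     # message is shorter
```

In the paper this compression is not hand-coded but emerges from training; the sketch only shows why such a structure is useful under an efficiency pressure despite the ambiguity it introduces.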
Sense and preference
Abstract: Semantic networks have shown considerable utility as a knowledge representation for Natural Language Processing (NLP). This paper describes a system for automatically deriving network structures from machine-readable dictionary text. This strategy helps to solve the problem of vocabulary acquisition for large-scale parsing systems, but also introduces an extra level of difficulty in terms of word-sense ambiguity. A Preference Semantics parsing system that operates over this network is discussed, in particular as regards its mechanism for using the network for lexical selection.
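The lexical-selection idea can be sketched in miniature (this is a simplified, Lesk-style stand-in, not the paper's actual Preference Semantics mechanism; the dictionary fragment and sense labels are invented): derive each sense's defining words from a machine-readable-dictionary entry, then prefer the sense whose definition overlaps the sentence context most.

```python
# Toy dictionary-derived sense network (hypothetical fragment) and a
# simple overlap-based sense selector, illustrating how a network built
# from dictionary text can drive lexical selection.
DICTIONARY = {
    "bank": {
        "bank_1": {"money", "institution", "deposit"},   # financial sense
        "bank_2": {"river", "slope", "land"},            # riverside sense
    },
}

def select_sense(word, context):
    """Pick the sense whose defining words share the most items with context."""
    senses = DICTIONARY[word]
    return max(senses, key=lambda s: len(senses[s] & context))

assert select_sense("bank", {"deposit", "money"}) == "bank_1"
assert select_sense("bank", {"river", "fishing"}) == "bank_2"
```

The real system operates over a full network with preference restrictions rather than bare definition overlap, but the selection principle is the same: context disambiguates among the senses the dictionary supplies.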