On the role of informativeness in spatial language comprehension
People need to know where objects are located in order to be able to interact with the world, and spatial language provides the main linguistic means of facilitating this. However, the information contained in a description about objects' locations is not the only message conveyed; there is in fact evidence that people carry out inferences that go beyond the simple geometric relation specified (Coventry & Garrod, 2004; Tyler & Evans, 2003). People draw inferences about object dynamics and object interactions, and this information becomes critical for the apprehension of spatial language.
Among the inferences people draw from spatial language, the property of converseness is particularly appealing; this principle states that, given the description "A is above B", one can also infer "B is below A" (Levelt, 1984, 1996). Thus, if the speaker says "the book is above the telephone", the listener implicitly also knows that the telephone is below the book.
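As a minimal illustration, the converseness inference can be sketched as a lookup over converse preposition pairs (the pairs are those discussed in this dissertation; the function name and phrasing are illustrative, not part of the original work):

```python
# Converse pairs for projective prepositions (Levelt, 1996):
# "A is above B" licenses the inference "B is below A".
CONVERSE = {
    "above": "below", "below": "above",
    "over": "under", "under": "over",
    "in front of": "behind", "behind": "in front of",
    "to the left of": "to the right of",
    "to the right of": "to the left of",
}

def converse_description(located, preposition, reference):
    """Return the description a listener can infer by converseness,
    or None for terms not covered by this table (e.g. proximity
    terms like 'near' and 'far', discussed separately below)."""
    conv = CONVERSE.get(preposition)
    if conv is None:
        return None
    return f"the {reference} is {conv} the {located}"

print(converse_description("book", "above", "telephone"))
# -> the telephone is below the book
```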
However, this extra information does not necessarily facilitate the apprehension of spatial descriptions. While it is true that inferences increase the amount of information a description conveys (Johnson-Laird & Byrne, 1991), it is also true that this extra information can be a disadvantage: the spatial preposition used in the description can end up being ambiguous because it suits more than one interpretation, and the consequence is a reduction in informativeness (Bar-Hillel, 1964). Tyler and Evans (2003) called this inferential process Best Fit: speakers choose the spatial preposition that offers the best fit between the conceptual spatial relation and the speaker's communicative needs. This principle can be considered a logical extension of the notion of relevance (Grice, 1975; Sperber & Wilson, 1986) and an integration of the Q-Principle (Asher & Lascarides, 2003; Levinson, 2000a), according to which speakers have the duty to avoid statements that are informationally weaker than their knowledge of the world allows. This dissertation explores whether the inferences people draw from spatial representations, in particular those based on the converseness principle (Levelt, 1996), affect the process that drives the speaker to choose the most informative description, that is, the description that best fits the spatial relation and the speaker's needs (Tyler & Evans, 2003).
Experiments 1 and 2 examine whether converseness, tested by manipulating the orientation of the located object, affects the extent to which a spatial description based on the prepositions over, under, above, or below is regarded as a good description of a scene. Experiment 3 shows that the acceptability of a projective spatial preposition is affected by the orientation of both objects presented in the scene. Experiments 4 and 5 replicate the results of the previous experiments using polyoriented objects (Leek, 1998b) in order to exclude the possibility that the decrease in acceptability was due to one object being shown in a non-canonical orientation. Experiments 6, 7, and 8 provide evidence that converseness generates ambiguous descriptions also with spatial prepositions such as in front of, behind, to the left, and to the right. Finally, Experiments 9 and 10 show that for proximity terms such as near and far, informativeness is not that relevant; rather, people seem simply to use contextual information to set a scale for their judgments.
The Role of Multiple Articulatory Channels of Sign-Supported Speech Revealed by Visual Processing
Purpose
The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication.
Method
Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention toward the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying the perceptual accessibility of lip movements (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message.
Results
In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech.
Conclusions
All participants, even those with residual hearing, rely on signs when attending to SSS, either peripherally or through overt attention, depending on the perceptual conditions.
Unión Europea, Grant Agreement 31674
Investigating the Parameter Space of Cognitive Models of Spatial Language Comprehension
Kluth T, Burigo M, Knoeferle P. Investigating the Parameter Space of Cognitive Models of Spatial Language Comprehension. Presented at the 5. Interdisziplinärer Workshop Kognitive Systeme: Mensch, Teams, Systeme und Automaten. Verstehen, Beschreiben und Gestalten Kognitiver (Technischer) Systeme, Bochum.
Cognitive models are – due to their computational nature – useful for the development and improvement of artificial cognitive systems. However, if two models perform equally well on the existing data, comparing them directly does not permit us to select the more appropriate one. One way of comparing two models is to perform an in-depth analysis of their predictions. In this study, we compared the predictions of two similar cognitive models of spatial language comprehension using the Parameter Space Partitioning (PSP) algorithm proposed by Pitt, Kim, Navarro, and Myung (2006).
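The core idea behind Parameter Space Partitioning can be sketched in a few lines: partition a model's parameter space according to the qualitative data pattern each parameter setting predicts, and estimate the volume of each region. The toy model, parameter ranges, and grid sampling below are simplified stand-ins; the actual PSP algorithm of Pitt et al. (2006) uses MCMC search rather than a grid.

```python
# Illustrative sketch of Parameter Space Partitioning (PSP).
# A flexible model produces many qualitative patterns across its
# parameter space; a constrained model produces few.

def toy_model(a, b, conditions=(0.0, 0.5, 1.0)):
    """A hypothetical two-parameter model predicting a rating per condition."""
    return [a * c + b * (1 - c) for c in conditions]

def qualitative_pattern(predictions):
    """Reduce predictions to their ordinal pattern (which condition is
    rated lowest, middle, highest) -- the 'data pattern' PSP partitions by."""
    return tuple(sorted(range(len(predictions)), key=lambda i: predictions[i]))

def psp_grid(n=50):
    """Grid-sample the unit parameter square and estimate the proportion
    of parameter settings producing each qualitative pattern."""
    counts = {}
    for i in range(n):
        for j in range(n):
            a, b = i / (n - 1), j / (n - 1)
            pattern = qualitative_pattern(toy_model(a, b))
            counts[pattern] = counts.get(pattern, 0) + 1
    return {pat: c / (n * n) for pat, c in counts.items()}

volumes = psp_grid()
print(len(volumes), volumes)  # this toy model yields only two patterns
```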
Visual gender cues elicit agent expectations: different mismatches in situated language comprehension
Previous research has shown that visual cues (depicted events) can have a strong effect on language comprehension and guide attention more than stereotypical thematic role knowledge ('depicted / recent event preference'). We examined to what extent this finding generalizes to another visual cue (gender from the hands of an agent) and to what extent it is modulated by picture-sentence incongruence. Participants inspected videos of hands performing an action, and then listened to non-canonical German OVS sentences while we monitored their eye gaze to the faces of two potential subjects / agents (one male and one female). In Experiment 1, the sentential verb phrase matched (vs. mismatched) the video action, and in Experiment 2, the sentential subject matched (vs. mismatched) the gender of the agent's hands in the video. Additionally, both experiments manipulated gender stereotypicality congruence (i.e. whether the gender stereotypicality of the described actions matched or mismatched the gender of the hands in the video). Participants overall preferred to inspect the target agent face (i.e. the face whose gender matched that of the hands seen in the previous video), suggesting that the depicted event preference observed in previous studies generalizes to visual gender cues. Stereotypicality match did not seem to modulate this gaze behavior. However, when there was a mismatch between the sentence and the previous video, participants tended to look away from the target face (post-verbally for action-verb mismatches and at the final subject region for hand gender-subject gender mismatches), suggesting that outright picture-sentence incongruence can modulate the preference to inspect the face whose gender matched that of the hands seen in the previous video.
Visual constraints modulate stereotypical predictability of agents during situated language comprehension
Rodriguez A, Burigo M, Knoeferle P. Visual constraints modulate stereotypical predictability of agents during situated language comprehension. In: Proceedings of the 38th Annual Cognitive Science Society Meeting. 2016
Learning and Using Abstract Words: Evidence from Clinical Populations
Lorusso ML, Burigo M, Tavano A, et al. Learning and Using Abstract Words: Evidence from Clinical Populations. BioMed Research International. 2017;2017:1-8
Spatial Language Comprehension. A Computational Investigation of the Directionality of Attention
Kluth T, Burigo M, Knoeferle P. Spatial Language Comprehension. A Computational Investigation of the Directionality of Attention. In: Gatt A, Mitterer H, eds. AMLaP. Architectures & Mechanisms for Language Processing 2015. Valletta, Malta: University of Malta; 2015: 88
Modeling the Directionality of Attention During Spatial Language Comprehension
Kluth T, Burigo M, Knoeferle P. Modeling the Directionality of Attention During Spatial Language Comprehension. In: van den Herik J, Filipe J, eds. Agents and Artificial Intelligence. ICAART 2016. Lecture Notes in Computer Science. Vol 10162. Cham: Springer International Publishing; 2017: 283-301.
It is known that the comprehension of spatial prepositions involves the deployment of visual attention. For example, consider the sentence "The salt is to the left of the stove". Researchers [29, 30] have theorized that people must shift their attention from the stove (the reference object, RO) to the salt (the located object, LO) in order to comprehend the sentence. Such a shift was also implicitly assumed in the Attentional Vector Sum (AVS) model by [35], a cognitive model that computes an acceptability rating for a spatial preposition given a display that contains an RO and an LO. However, recent empirical findings showed that a shift from the RO to the LO is not necessary to understand a spatial preposition ([3], see also [15, 38]). In contrast, these findings suggest that people perform a shift in the reverse direction (i.e., from the LO to the RO). Thus, we propose the reversed AVS (rAVS) model, a modified version of the AVS model in which attention shifts from the LO to the RO. We assessed the AVS and the rAVS model on the data from [35] using three model simulation methods. Our simulations show that the rAVS model performs as well as the AVS model on these data while it also integrates the recent empirical findings. Moreover, the rAVS model achieves its good performance while being less flexible than the AVS model. (This article is an updated and extended version of the paper [23] presented at the 8th International Conference on Agents and Artificial Intelligence in Rome, Italy. The authors would like to thank Holger Schultheis for helpful discussions about the additional model simulation.)
Shifts of Attention During Spatial Language Comprehension: A Computational Investigation
Kluth T, Burigo M, Knoeferle P. Shifts of Attention During Spatial Language Comprehension: A Computational Investigation. In: van den Herik J, Filipe J, eds. Proceedings of the 8th International Conference on Agents and Artificial Intelligence (ICAART 2016). Vol 2. Rome, Italy: SCITEPRESS – Science and Technology Publications, Lda.; 2016: 213-222.
Regier and Carlson (2001) have investigated the processing of spatial prepositions and developed a cognitive model that formalizes how spatial prepositions are evaluated against depicted spatial relations between objects. In their Attentional Vector Sum (AVS) model, a population of vectors is weighted with visual attention, rooted at the reference object and pointing to the located object. The deviation of the vector sum from a reference direction is then used to evaluate the goodness-of-fit of the spatial preposition. Crucially, the AVS model assumes a shift of attention from the reference object to the located object. The direction of this shift has been challenged by recent psycholinguistic and neuroscientific findings. We propose a modified version of the AVS model (the rAVS model) that integrates these findings. In the rAVS model, attention shifts from the located object to the reference object, in contrast to the attentional shift from the reference object to the located object implemented in the AVS model. Our model simulations show that the rAVS model accounts for both the data that inspired the AVS model and the most recent findings.
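The vector-sum step described in this abstract can be sketched as follows. This is a schematic illustration, not the full AVS model of Regier and Carlson (2001): the point grid, the exponential attention weighting, the choice of attentional focus, and the parameter value are simplified placeholders, and the mapping from angular deviation to an acceptability rating is omitted.

```python
import math

def avs_deviation(ro_points, lo, reference_dir=(0.0, 1.0), lam=1.0):
    """Weight vectors rooted at reference-object (RO) points and pointing
    to the located object (LO) by attention, which decays with distance
    from the attentional focus, then return the angular deviation (in
    degrees) of the vector sum from the reference direction (upward
    for 'above'). Small deviation = good fit of the preposition."""
    # Focus: the RO point closest to the LO.
    focus = min(ro_points, key=lambda p: math.dist(p, lo))
    sx = sy = 0.0
    for p in ro_points:
        w = math.exp(-lam * math.dist(p, focus))  # attentional weight
        sx += w * (lo[0] - p[0])                  # vector from RO point to LO
        sy += w * (lo[1] - p[1])
    ang = math.atan2(sy, sx) - math.atan2(reference_dir[1], reference_dir[0])
    deg = abs(math.degrees(ang))
    return min(deg, 360 - deg)

# LO directly above the midpoint of a flat RO: the weighted vector sum
# points straight up, so the deviation from 'above' is (near) zero.
ro = [(x, 0.0) for x in range(5)]
print(avs_deviation(ro, (2.0, 3.0)))
```

Reversing the shift direction (the rAVS proposal) would amount to rooting the weighting at the located object instead; the geometry of the sum is what both models share.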
Distinguishing Cognitive Models of Spatial Language Understanding
Kluth T, Burigo M, Schultheis H, Knoeferle P. Distinguishing Cognitive Models of Spatial Language Understanding. In: Reitter D, Ritter FE, eds. Proceedings of the 14th International Conference on Cognitive Modeling (ICCM 2016). University Park, Pennsylvania, USA: Penn State; 2016: 230-231