
    Motion seen and understood: interactions between language comprehension and visual perception.

    Embodied theories of cognition hold that the body plays a central role in cognitive representation. On this view, semantic representations, which constitute the meaning of words and sentences, are simulations of real experience that directly engage sensory and motor systems. This predicts interactions between comprehension and perception at low levels, since both engage the same systems, but the majority of evidence comes from picture judgements or visuo-spatial attention, so it is not clear which visual processes are implicated. In addition, most of the work has concentrated on sentences rather than single words, although theories predict that the semantics of both should be grounded in simulation. This investigation sought to systematically explore these interactions, using verbs that refer to upwards or downwards motion and sentences derived from the same set of verbs. As well as examining visuo-spatial attention, we employed tasks routinely used in visual psychophysics that access low-level motion processing. In this way we were able to separate different levels of visual processing and explore whether interactions between comprehension and perception were present when low-level visual processes were assessed or manipulated. The results show that: (1) there are bilateral interactions between low-level visual processes and semantic content (lexical and sentential); (2) interactions are automatic, arising whenever linguistic and visual stimuli are presented in close temporal contiguity; (3) interactions are subject to processes within the visual system, such as perceptual learning and suppression; and (4) the precise content of semantic representations dictates which visual processes are implicated in the interactions. The data are best explained by a close connection between semantic representation and perceptual systems: when information from both is available, it is automatically integrated. However, the data do not support a direct and unmediated involvement of the visual system in the semantic representation of motion events. The results suggest a complex relationship between semantic representation and sensory-motor systems that can be explained by combining task-specific processes with either strong or weak embodiment.

    Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    Background: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings: Participants completed an object detection task in which they made an object-presence or object-absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect correlated positively with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but it eliminated the correlation with mental imagery. Conclusions/Significance: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of a heard word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception.
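    For reference, d′ ("d-prime") is the standard signal-detection measure of perceptual sensitivity, which separates discriminability from response bias. The abstract does not spell out the computation, so the following is the textbook definition rather than the authors' exact analysis: with hit rate H and false-alarm rate F,

    \[
      d' = \Phi^{-1}(H) - \Phi^{-1}(F),
    \]

    where \Phi^{-1} is the inverse of the standard normal cumulative distribution function. As a worked example, H = 0.80 and F = 0.20 give d′ ≈ 0.84 - (-0.84) = 1.68, while a cue that raises H to 0.90 at the same F raises d′ to roughly 2.12.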