Effects of syntactic context on eye movements during reading
Previous research has demonstrated that properties of a currently fixated word
and of adjacent words influence eye movement control in reading. In contrast to
such local effects, little is known about the global effects on eye movement
control, for example global adjustments caused by processing difficulty of
previous sentences. In the present study, participants read text passages in
which voice (active vs. passive) and sentence structure (embedded vs.
non-embedded) were manipulated. These passages were followed by identical target
sentences. The results revealed effects of previous sentence structure on gaze
durations in the target sentence, implying that syntactic properties of
previously read sentences may lead to a global adjustment of eye movement
control.
Do successor effects in reading reflect lexical parafoveal processing? Evidence from corpus-based and experimental eye movement data
Abstract In the past, most research on eye movements during reading involved a limited number of subjects reading sentences with specific experimental manipulations on target words. Such experiments usually only analyzed eye-movement measures on and around the target word. Recently, some researchers have started collecting larger data sets involving large and diverse groups of subjects reading large numbers of sentences, enabling them to consider a larger number of influences and study larger and more representative subject groups. In such corpus studies, most of the words in a sentence are analyzed. The complexity of the design of corpus studies and the many potentially uncontrolled influences in such studies pose new issues concerning the analysis methods and interpretability of the data. In particular, several corpus studies of reading have found an effect of successor word (n + 1) frequency on current word (n) fixation times, while studies employing experimental manipulations tend not to. The general interpretation of corpus studies suggests that readers obtain parafoveal lexical information from the upcoming word before they have finished identifying the current word, while the experimental manipulations shed doubt on this claim. In the present study, we combined a corpus analysis approach with an experimental manipulation (i.e., a parafoveal modification of the moving mask technique, Rayner & Bertera, 1979), so that either (a) word n + 1, (b) word n + 2, (c) both words, or (d) neither word was masked. We found that denying preview for either or both parafoveal words increased average fixation times. Furthermore, we found successor effects similar to those reported in the corpus studies. Importantly, these successor effects were found even when the parafoveal word was masked, suggesting that apparent successor frequency effects may be due to causes that are unrelated to lexical parafoveal preprocessing.
We discuss the implications of this finding both for parallel and serial accounts of word identification and for the interpretability of large correlational studies of word identification in reading in general.
The role of left and right hemispheres in the comprehension of idiomatic language: an electrical neuroimaging study
Abstract

Background: The specific role of the two cerebral hemispheres in processing idiomatic language is highly debated. While some studies show the involvement of the left inferior frontal gyrus (LIFG), other data support the crucial role of right-hemispheric regions, and particularly of the middle/superior temporal area. Time-course and neural bases of literal vs. idiomatic language processing were compared. Fifteen volunteers silently read 360 idiomatic and literal Italian sentences and decided whether they were semantically related or unrelated to a following target word, while their EEGs were recorded from 128 electrodes. Word length, abstractness and frequency of use, sentence comprehensibility, familiarity and cloze probability were matched across classes.

Results: Participants responded more quickly to literal than to idiomatic sentences, probably indicating a difference in task difficulty. The occipito/temporal N2 component had a greater amplitude in response to idioms between 250-300 ms. Related swLORETA source reconstruction revealed a difference in the activation of the left fusiform gyrus (FG, BA19) and medial frontal gyri for the contrast idiomatic-minus-literal. The centroparietal N400 was much larger to idiomatic than to literal phrases (360-550 ms). The intra-cortical generators of this effect included the left and right FG, the left cingulate gyrus, the right limbic area, the right MTG (BA21) and the left middle frontal gyrus (BA46). Finally, an anterior late positivity (600-800 ms) was larger to idiomatic than literal phrases.
ERPs also showed a larger right centro-parietal N400 to associated than non-associated targets (not differing as a function of sentence type), and a greater right frontal P600 to idiomatic than literal associated targets.

Conclusion: The data indicate bilateral involvement of both hemispheres in idiom comprehension, including the right MTG after 350 ms and the right medial frontal gyrus in the time windows 270-300 and 500-780 ms. In addition, the activation of left and right limbic regions (400-450 ms) suggests that they have a role in the emotional connotation of colourful idiomatic language. The data support the view that there is direct access to the idiomatic meaning of figurative language, not dependent on the suppression of its literal meaning, for which the LIFG was previously thought to be responsible.
Reading during the composition of multi-sentence texts: an eye-movement study
Writers composing multi-sentence texts have immediate access to a visual representation of what they have written. Little is known about the detail of writers’ eye movements within this text during production. We describe two experiments in which competent adult writers’ eye movements were tracked while they performed short expository writing tasks. These are contrasted with conditions in which participants read and evaluated researcher-provided texts. Writers spent a mean of around 13% of their time looking back into their text. Initiation of these look-back sequences was strongly predicted by linguistically important boundaries in their ongoing production (e.g., writers were much more likely to look back immediately prior to starting a new sentence). 36% of look-back sequences were associated with sustained reading and the remainder with less patterned forward and backward saccades between words ("hopping"). Fixation and gaze durations and the presence of word-length effects suggested lexical processing of fixated words in both reading and hopping sequences. Word frequency effects were not present when writers read their own text. Findings demonstrate the technical possibility and potential value of examining writers’ fixations within their just-written text. We suggest that these fixations do not serve solely, or even primarily, in monitoring for error, but play an important role in planning ongoing production.
Integrating Mechanisms of Visual Guidance in Naturalistic Language Production
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy), and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan-pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
Finding the locus of semantic satiation: An electrophysiological attempt
No abstract available.