
    The effect of high- and low-frequency previews and sentential fit on word skipping during reading

    In a previous gaze-contingent boundary experiment, Angele and Rayner (2013) found that readers are likely to skip a word that appears to be the definite article "the" even when syntactic constraints do not allow articles to occur in that position. In the present study, we investigated whether the word frequency of the preview of a 3-letter target word influences a reader's decision to fixate or skip that word. We found that the word frequency, rather than the felicitousness (syntactic fit), of the preview affected how often the upcoming word was skipped. These results indicate that visual information about the upcoming word trumps information from the sentence context when it comes to making a skipping decision. Skipping parafoveal instances of "the" may therefore simply be an extreme case of skipping high-frequency words.
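    The dependent measure described above, the skipping rate per preview condition, can be sketched as follows. The trial record format and condition labels here are hypothetical illustrations, not the study's actual data structures:

    ```python
    # Illustrative sketch: estimating skipping probability by preview condition.
    # The data structure and condition labels are hypothetical, not from the study.

    def skipping_rate(trials):
        """Proportion of trials on which the target word was skipped
        (i.e., received no first-pass fixation), grouped by preview condition."""
        counts = {}
        for trial in trials:
            cond = trial["preview"]            # e.g., "high_freq" or "low_freq"
            skipped, total = counts.get(cond, (0, 0))
            counts[cond] = (skipped + (1 if trial["skipped"] else 0), total + 1)
        return {cond: s / t for cond, (s, t) in counts.items()}

    trials = [
        {"preview": "high_freq", "skipped": True},
        {"preview": "high_freq", "skipped": False},
        {"preview": "low_freq",  "skipped": False},
        {"preview": "low_freq",  "skipped": False},
    ]
    rates = skipping_rate(trials)
    ```

    Comparing such per-condition rates (here, high- versus low-frequency previews) is what lets the authors separate the frequency effect from the syntactic-fit effect.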

    Driving forces in free visual search: An ethology


    How Sensitive Are Our Eyes to Text Difficulty?: Application of Schema Fixation Curves to Japanese Text

    This paper discusses the applicability of Schema Fixation Curves to the detection of changes in eye-movement behavior in accordance with the readability of text. If the eyes respond to the degree of difficulty of the given task, we may say that the eyes are an output device of our cognitive activities. Our previous research led us to the notion of Schema Fixation and Schema Fixation Curves, a technique with which to graphically analyze the cognitive load subjects bear when they read texts. The results of our experiments based on this technique show that eye movement records are a good clue to the detection of text difficulty or readability. Conventionally, computer-calculated readability indices have been used to predict text readability, but the precision of the prediction is not necessarily high, because most of these indices use surface syntactic elements of text such as average sentence length and word length. The difficulty of a text arises from a variety of factors, such as the reader's background knowledge of the passage, the range of vocabulary used in the text, syntactic and semantic ambiguities, etc. In this experiment, we used the Japanese language in order to focus on the syntactic effect on readability. Japanese allows a much freer syntactic structure than present-day English: for example, the natural, normal, and unstressed word order of English (from amongst the six logical possibilities SVO, SOV, VSO, VOS, OSV, OVS) is SVO, while various orders are both possible and natural in Japanese. We changed the syntactic order of words in sentences, presented them to the subjects, examined the recorded eye movements, and found that different orders produced different levels of readability.
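    The surface features that conventional readability indices rely on, and that the abstract argues are insufficient, can be computed with a few lines of code. This is a deliberately naive sketch: the tokenization is crude and no calibrated weighting (as in real indices such as Flesch-Kincaid) is applied:

    ```python
    # Toy extraction of the surface features most readability indices use:
    # average sentence length (in words) and average word length (in characters).
    # Tokenization is naive and for illustration only.

    def surface_features(text):
        sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".")
                     if s.strip()]
        words = text.split()
        avg_sentence_len = len(words) / len(sentences)   # words per sentence
        avg_word_len = sum(len(w.strip(".,!?")) for w in words) / len(words)
        return avg_sentence_len, avg_word_len

    text = "The cat sat. The dog barked loudly at the cat."
    s_len, w_len = surface_features(text)
    ```

    Because these features ignore background knowledge, vocabulary range, and ambiguity, two texts with identical scores can differ sharply in actual difficulty, which is exactly the gap the eye-movement-based approach aims to fill.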

    SEAM: An Integrated Activation-Coupled Model of Sentence Processing and Eye Movements in Reading

    Models of eye-movement control during reading, developed largely within psychology, usually focus on visual, attentional, lexical, and motor processes but neglect post-lexical language processing; by contrast, models of sentence comprehension, developed largely within psycholinguistics, generally focus only on post-lexical language processes. We present a model that combines these two research threads by integrating eye-movement control and sentence processing. Developing such an integrated model is extremely challenging and computationally demanding, but it is an important step toward complete mathematical models of natural language comprehension in reading. We combine the SWIFT model of eye-movement control (Seelig et al., 2020, doi:10.1016/j.jmp.2019.102313) with key components of the Lewis and Vasishth sentence processing model (Lewis & Vasishth, 2005, doi:10.1207/s15516709cog0000_25). This integration becomes possible, for the first time, due in part to recent advances in parameter identification in dynamical models, which allow us to investigate profile log-likelihoods for individual model parameters. We present a fully implemented proof-of-concept model demonstrating how such an integration can be achieved; our approach includes Bayesian model inference with Markov Chain Monte Carlo (MCMC) sampling as a key computational tool. The integrated model, SEAM, can successfully reproduce eye movement patterns that arise due to similarity-based interference in reading. To our knowledge, this is the first integration of a complete process model of eye-movement control with linguistic dependency completion processes in sentence comprehension. In future work, this proof-of-concept model will need to be evaluated against a comprehensive set of benchmark data.
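    The key computational tool named in the abstract, MCMC sampling for Bayesian parameter inference, can be illustrated with a minimal Metropolis sampler. The one-parameter Gaussian target below is a stand-in for illustration only; SEAM's actual likelihood over eye-movement data is far more complex:

    ```python
    import math
    import random

    # Minimal Metropolis sampler for a single parameter, illustrating the kind
    # of Bayesian inference used for model fitting. The Gaussian log-density is
    # a stand-in target, not SEAM's actual likelihood.

    def log_target(theta, mu=2.0, sigma=0.5):
        """Log-density of an illustrative Gaussian 'posterior'."""
        return -0.5 * ((theta - mu) / sigma) ** 2

    def metropolis(n_samples, step=0.3, seed=0):
        rng = random.Random(seed)
        theta = 0.0
        samples = []
        for _ in range(n_samples):
            proposal = theta + rng.gauss(0.0, step)
            # Accept with probability min(1, target(proposal) / target(theta)).
            if math.log(rng.random()) < log_target(proposal) - log_target(theta):
                theta = proposal
            samples.append(theta)
        return samples

    samples = metropolis(20000)
    posterior_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
    ```

    Real model fitting replaces the toy target with the model's likelihood over many parameters, but the accept/reject logic and burn-in handling are the same in principle.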

    Spontaneous eye movements during passive spoken language comprehension reflect grammatical processing

    Language is tightly connected to sensory and motor systems. Recent research using eye-tracking typically relies on constrained visual contexts in which participants view a small array of objects on a computer screen. Some critiques of embodiment ask whether people simply match their simulations to the pictures being presented. This study compared the comprehension of verbs with two different grammatical forms: the past progressive form (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli. Thus, eye movement data suggest that visual inputs are unnecessary to elicit perceptual simulations.

    Representation, space and Hollywood Squares: Looking at things that aren't there anymore

    It has been argued that the human cognitive system is capable of using spatial indexes or oculomotor coordinates to relieve working memory load (Ballard, Hayhoe, Pook & Rao, 1997), track multiple moving items through occlusion (Scholl & Pylyshyn, 1999), or link incompatible cognitive and sensorimotor codes (Bridgeman & Huemer, 1998). Here we examine the use of such spatial information in memory for semantic information. Previous research has often focused on the role of task demands and the level of automaticity in the encoding of spatial location in memory tasks. We present five experiments where location is irrelevant to the task, and participants' encoding of spatial information is measured implicitly by their looking behavior during recall. In a paradigm developed from Spivey and Geng (submitted), participants were presented with pieces of auditory, semantic information as part of an event occurring in one of four regions of a computer screen. In front of a blank grid, they were asked a question relating to one of those facts. Under certain conditions, participants made significantly more saccades during the question period to the empty region of space where the semantic information had previously been presented. Our findings are discussed in relation to previous research on memory and spatial location, the dorsal and ventral streams of the visual system, and the notion of a cognitive-perceptual system using spatial indexes to exploit the stability of the external world.

    Effects of Processing Difficulty on Eye Movements in Reading: A Review of Behavioral and Neural Observations

    In reading, text difficulties increase the duration of fixations and the frequency of refixations and regressions. The present article reviews previous attempts to quantify these effects based on the frequency of effect theory (FET), and links these effects to results from microstimulation of the primate supplementary eye fields (SEF). Observed stimulation effects on the latency and frequency of visually-guided saccades depend on the onset time of the electric current relative to target onset, and on the strength of the applied current. Resultant saccade delay was only observed for saccades made towards a highly predictive location ipsilateral to the stimulated SEF sites. These findings are interpreted in the context of reading, where the detection of processing difficulty allows a suppression signal to supersede a forward saccade signal in a time race. This in turn permits a cognitively-based refixation or regression to be initiated in place of the suppressed forward saccade.
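    The time-race account described above can be sketched as a race between a forward-saccade program and a suppression signal that starts only once processing difficulty is detected. All rates, thresholds, and detection times below are hypothetical illustration values, not estimates from the reviewed studies:

    ```python
    # Sketch of the proposed time race: a forward-saccade signal races against a
    # suppression signal triggered by detected processing difficulty. Rates,
    # thresholds, and detection times (in arbitrary ms-like units) are hypothetical.

    def race_outcome(saccade_rate, suppression_rate, difficulty_detect_time,
                     threshold=1.0):
        """Return which signal reaches threshold first. The suppression
        accumulator starts rising only once difficulty has been detected."""
        saccade_time = threshold / saccade_rate
        suppression_time = difficulty_detect_time + threshold / suppression_rate
        if saccade_time < suppression_time:
            return "forward_saccade"
        return "refixation_or_regression"

    # Easy text: difficulty detected late (if at all), so the forward saccade wins.
    easy = race_outcome(saccade_rate=0.01, suppression_rate=0.02,
                        difficulty_detect_time=200)
    # Hard text: difficulty detected early, so suppression wins and a
    # refixation/regression is launched instead.
    hard = race_outcome(saccade_rate=0.01, suppression_rate=0.02,
                        difficulty_detect_time=20)
    ```

    The sketch makes the review's central claim concrete: whether a regression occurs depends not on the suppression signal's existence but on whether difficulty is detected early enough for it to win the race.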