
    How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades

    The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors.
    Funding: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)
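    The layer-specific dissociation described above (planning and priming in layers III and VI, execution in layer V, gated by a basal ganglia GO signal) can be pictured with a small toy simulation. The sketch below is not the authors' model code; the leaky-integrator equations, time constants, and GO-signal timing are invented purely to illustrate the gating idea.

```python
# Toy illustration (not the published model): a frontal "plan" layer charges up
# as soon as the cue arrives, while the "execute" layer stays silent until a
# basal-ganglia GO signal opens the gate. All constants are assumptions.

def simulate_gated_saccade(n_steps=200, dt=1.0, cue_onset=20, go_onset=120, tau=20.0):
    plan, execute = 0.0, 0.0          # layer III/VI "plan" vs. layer V "execute" activity
    trace = []
    for t in range(n_steps):
        cue = 1.0 if t >= cue_onset else 0.0   # visual cue drives the plan layer
        go = 1.0 if t >= go_onset else 0.0     # basal-ganglia GO signal (disinhibition)
        plan += dt / tau * (-plan + cue)               # planning builds regardless of the gate
        execute += dt / tau * (-execute + go * plan)   # execution follows the plan only when gated
        trace.append((t, plan, execute))
    return trace

for t, p, e in simulate_gated_saccade()[::40]:
    print(f"t={t:3d}  plan={p:.2f}  execute={e:.2f}")
```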

    Abnormal negative feedback processing in first episode schizophrenia: evidence from an oculomotor rule switching task

    Background. Previous studies have shown that patients with schizophrenia are impaired on executive tasks in which positive and negative feedback is used to update task rules or switch attention. However, research to date using saccadic tasks has not revealed clear task-switching deficits in these patients. The present study used an oculomotor ‘rule switching’ task to investigate the use of negative feedback when switching between task rules in people with schizophrenia. Method. A total of 50 patients with first episode schizophrenia and 25 healthy controls performed a task in which the association between a centrally presented visual cue and the direction of a saccade could change from trial to trial. Rule changes were heralded by unexpected negative feedback, indicating that the cue–response mapping had reversed. Results. Schizophrenia patients were found to make more errors following a rule switch, but these were almost entirely the result of executing saccades away from the location at which the negative feedback had been presented on the preceding trial. This impairment in negative feedback processing was independent of IQ. Conclusions. The results not only confirm the existence of a basic deficit in stimulus–response rule switching in schizophrenia, but also suggest that this deficit arises from aberrant processing of response outcomes, resulting in a failure to update rules appropriately. The findings are discussed in the context of neurological and pharmacological abnormalities in the condition that may disrupt prediction error signalling in schizophrenia.
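    As a rough illustration of the task logic described in the Method (not the study's actual stimulus code), the sketch below covertly reverses a cue-to-saccade mapping and lets a simulated observer switch rules whenever negative feedback arrives; the deficit reported above corresponds to failing to make that update. The reversal probability, trial count, and seed are invented.

```python
# Hedged sketch of an oculomotor rule-switching block. The true rule reverses
# covertly; the simulated observer applies its believed rule and flips that
# belief after negative feedback (the intact strategy).

import random

MAPPINGS = [{"cue1": "left", "cue2": "right"},   # rule 0
            {"cue1": "right", "cue2": "left"}]   # rule 1 (reversed mapping)

def run_block(n_trials=20, p_reversal=0.15, seed=1):
    rng = random.Random(seed)
    true_rule, believed_rule = 0, 0
    for t in range(n_trials):
        if rng.random() < p_reversal:
            true_rule = 1 - true_rule            # covert rule reversal
        cue = rng.choice(["cue1", "cue2"])
        response = MAPPINGS[believed_rule][cue]  # respond under the believed rule
        feedback = "positive" if response == MAPPINGS[true_rule][cue] else "negative"
        if feedback == "negative":
            believed_rule = 1 - believed_rule    # update the rule on negative feedback
        print(f"trial {t:2d}: cue={cue} true_rule={true_rule} response={response} {feedback}")

run_block()
```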

    A Developmental Organization for Robot Behavior

    This paper explores how learning and development can be structured in synthetic (robot) systems. We present a developmental assembler for constructing reusable and temporally extended actions in a sequence. The discussion adopts the traditions of dynamic pattern theory, in which behavior is an artifact of coupled dynamical systems with a number of controllable degrees of freedom. In our model, the events that delineate control decisions are derived from the pattern of (dis)equilibria on a working subset of sensorimotor policies. We show how this architecture can be used to accomplish sequential knowledge gathering and representation tasks, and we provide examples of the developmental milestones that this approach has already produced in our lab.
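    The assembler is not specified in code in this abstract, so the following is only an illustrative guess at the core idea: a sequence of closed-loop policies in which the event that hands control to the next action is detected as the current policy reaching equilibrium. The toy policies, tolerance, and attractor values are assumptions.

```python
# Illustrative sketch: run each sensorimotor policy until its state stops
# changing (an equilibrium event), then hand control to the next policy.

def converge(policy, state, tol=1e-3, max_steps=1000):
    """Iterate one policy until its update is negligibly small (equilibrium)."""
    for _ in range(max_steps):
        new_state = policy(state)
        if abs(new_state - state) < tol:
            return new_state            # equilibrium event: release control
        state = new_state
    return state

# Two toy policies: move toward a target, then settle back at rest.
reach   = lambda x: x + 0.2 * (10.0 - x)   # attractor at x = 10
retract = lambda x: x + 0.2 * (0.0 - x)    # attractor at x = 0

state = 3.0
for name, policy in [("reach", reach), ("retract", retract)]:
    state = converge(policy, state)
    print(f"{name} converged at x={state:.2f}")
```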

    Word skipping: implications for theories of eye movement control in reading

    This chapter provides a meta-analysis of the factors that govern word skipping in reading. It is concluded that the primary predictor is the length of the word to be skipped. A much smaller effect is due to the ease of processing the word (e.g., its frequency and its predictability in the sentence).
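    A hedged way to picture this conclusion is a logistic skipping model in which word length dominates while frequency and predictability make smaller contributions. The coefficients below are invented for illustration and are not estimates from the meta-analysis.

```python
# Toy logistic model of word skipping: the length coefficient dominates,
# frequency and predictability contribute smaller effects. All coefficients
# are illustrative assumptions, not fitted values.

import math

def p_skip(length, log_frequency, predictability,
           b0=2.0, b_len=-0.9, b_freq=0.15, b_pred=0.8):
    z = b0 + b_len * length + b_freq * log_frequency + b_pred * predictability
    return 1.0 / (1.0 + math.exp(-z))

# A short, frequent, predictable word is skipped far more often than a long, rare one.
print(round(p_skip(length=3, log_frequency=4.0, predictability=0.6), 2))
print(round(p_skip(length=8, log_frequency=1.5, predictability=0.1), 2))
```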

    Comparing the E-Z Reader Model to Other Models of Eye Movement Control in Reading

    The E-Z Reader model provides a theoretical framework for understanding how word identification, visual processing, attention, and oculomotor control jointly determine when and where the eyes move during reading. Thus, in contrast to other reading models reviewed in this article, E-Z Reader can simultaneously account for many of the known effects of linguistic, visual, and oculomotor factors on eye movement control during reading. Furthermore, the core principles of the model have been generalized to other task domains (e.g., equation solving, visual search) and are broadly consistent with what is known about the architecture of the neural systems that support reading.
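    The model's core serial pipeline (an early familiarity check, L1, that triggers saccade programming toward the next word, followed by completion of lexical access, L2, that shifts attention) can be sketched as below. The parameter values are invented for illustration; the published model specifies the actual equations and parameters.

```python
# Hedged sketch of E-Z Reader's two-stage lexical processing. Durations here
# are invented placeholders: processing time shrinks with word frequency and
# predictability, and the two stages gate saccade programming and attention.

def stage_duration(base, freq_weight, log_freq, pred):
    """Illustrative linear speed-up with frequency and predictability (in ms)."""
    return base - freq_weight * log_freq - 30.0 * pred

def read_word(word, log_freq, pred, t=0.0):
    L1 = stage_duration(base=120.0, freq_weight=10.0, log_freq=log_freq, pred=pred)
    L2 = stage_duration(base=100.0, freq_weight=8.0, log_freq=log_freq, pred=pred)
    t_saccade_program = t + L1                    # familiarity check done: program saccade
    t_attention_shift = t_saccade_program + L2    # lexical access done: shift attention
    print(f"{word:>10s}: saccade program at {t_saccade_program:.0f} ms, "
          f"attention shift at {t_attention_shift:.0f} ms")
    return t_attention_shift

t = read_word("the", log_freq=6.0, pred=0.8, t=0.0)
t = read_word("anomalous", log_freq=1.0, pred=0.05, t=t)
```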

    The influence of semantic context on initial eye landing sites in words

    To determine the role of ongoing processing in eye guidance during reading, two studies examined the effects of semantic context on the eyes' initial landing position in words of different levels of processing difficulty. Results from both studies clearly indicate a shift of the initial fixation location towards the end of words that can be predicted from the prior semantic context. However, shifts occur only in high-frequency words and when prior fixations are close to the beginning of the target word. These results suggest that ongoing perceptual and linguistic processes can affect the decision of where to send the eyes next in reading. They are explained in terms of the ease of processing associated with the target words when located in parafoveal vision. It is concluded that two critical factors might facilitate observing effects of linguistic variables on initial landing sites: the frequency of the target word and the position from which the eyes are launched relative to the beginning of the target word. The results also provide evidence for an early locus of semantic context effects in reading.

    Event Prediction and Object Motion Estimation in the Development of Visual Attention

    A model of gaze control is described that includes mechanisms for predictive control using a forward model and event-driven expectations of target behavior. The model passes through stages roughly similar to those of human infants when the influence of the predictive systems is gradually increased.
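    The forward-model idea can be illustrated with a toy tracker that drives gaze toward where the target is predicted to be one step ahead rather than where it was last observed. This is only a sketch of the general mechanism, not the authors' implementation; the constant-velocity prediction and the gain are assumptions.

```python
# Toy predictive tracker: a crude forward model extrapolates target motion one
# step ahead, and gaze is driven toward that prediction to compensate for lag.

def track(target_positions, gain=0.8):
    gaze = target_positions[0]
    prev = target_positions[0]
    for obs in target_positions[1:]:
        velocity = obs - prev                 # assume roughly constant velocity
        predicted = obs + velocity            # expected target position next step
        gaze += gain * (predicted - gaze)     # move gaze toward the prediction
        prev = obs
        print(f"target={obs:6.2f}  predicted={predicted:6.2f}  gaze={gaze:6.2f}")

track([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # target moving at constant speed
```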

    Putting culture under the spotlight reveals universal information use for face recognition

    Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition in Western Caucasian (WC) and East Asian (EA) observers with a novel technique that parametrically restricts information outside central vision. We used ‘Spotlights’ with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers’ fixations. Strikingly, in the constrained Spotlight conditions (2°, 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available while fixating the nose (8°), EA observers, as expected, shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that such external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.
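    The ‘Spotlight’ manipulation can be approximated as a gaze-contingent Gaussian window multiplied into the stimulus, centered on the current fixation. The sketch below is illustrative only; the mapping from aperture size in degrees to Gaussian width and the pixels-per-degree value are invented rather than taken from the study.

```python
# Illustrative gaze-contingent 'Spotlight': attenuate an image with a Gaussian
# aperture centered on fixation so information falls off outside central vision.
# The aperture-to-sigma conversion and pixels_per_deg are assumptions.

import numpy as np

def spotlight(image, fixation_xy, aperture_deg, pixels_per_deg=30.0):
    """Return the image multiplied by a Gaussian window centered on fixation."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = fixation_xy
    sigma_px = aperture_deg * pixels_per_deg / 2.0   # assumed aperture -> Gaussian width
    mask = np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma_px ** 2))
    return image * mask

face = np.random.rand(240, 240)                      # stand-in for a face image
windowed = spotlight(face, fixation_xy=(120, 90), aperture_deg=2.0)
print(windowed.shape, round(float(windowed.max()), 3))
```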