Searching for the Majority: Algorithms of Voluntary Control
Voluntary control of information processing is crucial for allocating resources and prioritizing the processes that are most important in a given situation; the algorithms underlying such control, however, are often unclear. We investigated possible algorithms of control for the performance of the majority function, in which participants searched for and identified one of two alternative categories (left- or right-pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set size of 1, 3, and 5) and content (ratio of left- and right-pointing arrows within a set) of the inputs to test competing hypotheses regarding the mental operations underlying information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm compared with alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.
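To make the contrast between the candidate algorithms concrete, the toy simulation below compares the expected number of items inspected (a rough stand-in for computational load) under exhaustive, self-terminating, and grouping strategies. It is a sketch under assumed details (group size, stopping rule, load measure), not the authors' formal model.

```python
# Toy Monte Carlo contrasting three candidate search strategies for the majority task.
# This is an illustrative sketch, NOT the authors' formal model: the group size,
# stopping rules, and load measure (items inspected) are assumptions for demonstration.
import random

def exhaustive(items):
    # Inspect every item before responding.
    return len(items)

def self_terminating(items):
    # Inspect items in random order; stop as soon as one category is guaranteed
    # to be the majority.
    need = len(items) // 2 + 1
    counts = {"L": 0, "R": 0}
    for i, item in enumerate(random.sample(items, len(items)), start=1):
        counts[item] += 1
        if max(counts.values()) >= need:
            return i
    return len(items)

def grouping(items, group_size=2):
    # Draw small groups (without replacement within a group) and resample until
    # a homogeneous group is found; respond with that category.
    inspected = 0
    while True:
        group = random.sample(items, min(group_size, len(items)))
        inspected += len(group)
        if len(set(group)) == 1:
            return inspected

def mean_load(strategy, set_size, n_left, reps=5000):
    items = ["L"] * n_left + ["R"] * (set_size - n_left)
    return sum(strategy(items) for _ in range(reps)) / reps

for set_size, n_left in [(1, 1), (3, 2), (5, 3), (5, 4)]:
    print(set_size, n_left,
          round(mean_load(exhaustive, set_size, n_left), 2),
          round(mean_load(self_terminating, set_size, n_left), 2),
          round(mean_load(grouping, set_size, n_left), 2))
```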
Revealing the Functional Neuroanatomy of Intrinsic Alertness Using fMRI: Methodological Peculiarities
Clinical observations and neuroimaging data have revealed a right-hemisphere fronto-parietal-thalamic-brainstem network for intrinsic alertness, and additional left fronto-parietal activity during phasic alertness. The primary objective of this fMRI study was to map the functional neuroanatomy of intrinsic alertness as precisely as possible in healthy participants, using a novel assessment paradigm already employed in clinical settings. Both the paradigm and the experimental design were optimized to specifically assess intrinsic alertness, while at the same time controlling for sensory-motor processing. The present results suggest that the processing of intrinsic alertness is accompanied by increased activity within the brainstem, thalamus, anterior cingulate gyrus, right insula, and right parietal cortex. Additionally, we found increased activation in the left hemisphere around the middle frontal gyrus (BA 9), the insula, the supplementary motor area, and the cerebellum. Our results further suggest that even minor features of the experimental design may induce phasic alertness, which in turn might lead to additional brain activation in left-frontal areas not normally involved in intrinsic alertness. Accordingly, left BA 9 activation may reflect co-activation of the phasic alertness network, with the switch between rest and task conditions functioning as an external warning cue. Furthermore, activation of the intrinsic alertness network during fixation blocks, due to enhanced expectancy shortly before the switch to the task block, might, when subtracted from the task block, lead to diminished activation in the typical right-hemisphere intrinsic alertness network. Thus, we cautiously suggest that, as a methodological artifact, left frontal activations might appear because of phasic alertness involvement, and intrinsic alertness activations might be weakened by the contrast with fixation blocks, when assessing the functional neuroanatomy of intrinsic alertness with a block design in fMRI studies.
Competition between auditory and visual spatial cues during visual task performance
There is debate in the crossmodal cueing literature as to whether capture of visual attention by sound is a fully automatic process. Recent studies show that sound still captures attention even when visual attention is endogenously focused. The current study investigated whether there is an interaction between exogenous auditory and visual capture. Participants performed an orthogonal cueing task in which the visual target was preceded by both a peripheral visual cue and an auditory cue. When both cues were presented at chance-level validity, both visual and auditory capture were observed. However, when the validity of the visual cue was increased to 80%, only visual capture, and no auditory capture, was observed. Furthermore, a highly predictive (80% valid) auditory cue was not able to prevent visual capture. These results demonstrate that crossmodal auditory capture does not occur when a competing predictive visual event is presented and is therefore not a fully automatic process.
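As a rough illustration of the validity manipulation described above, the sketch below builds trial lists in which a cue predicts the target location either at chance or on 80% of trials; the locations, trial counts, and labels are assumed for demonstration and do not reflect the study's actual materials or software.

```python
# Illustrative sketch of the cue-validity manipulation (chance vs. 80% valid).
# Locations, trial counts, and labels are generic assumptions, not the study's
# actual materials or software.
import random

def make_trials(n_trials, validity, locations=("left", "right")):
    trials = []
    for _ in range(n_trials):
        cue = random.choice(locations)
        if random.random() < validity:
            target = cue                                           # valid trial: target at cued side
        else:
            target = next(loc for loc in locations if loc != cue)  # invalid trial
        trials.append({"cue": cue, "target": target, "valid": cue == target})
    random.shuffle(trials)
    return trials

chance_block = make_trials(200, validity=0.5)       # non-predictive cue
predictive_block = make_trials(200, validity=0.8)   # highly predictive cue
print(sum(t["valid"] for t in predictive_block) / len(predictive_block))
```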
Trajectory curvature in saccade sequences: spatiotopic influences vs. residual motor activity
Saccades curve away from locations of previous fixation. Varying stimulus timing demonstrates the effects of both (1) spatiotopic representation and (2) residual motor activity from previous saccades. The spatiotopic effect can be explained if current models are augmented with an excitatory top-down spatiotopic signal.
When decisions drive saccadic eye movements, traces of the decision process can be inferred from the movement trajectories. For example, saccades can curve away from distractor stimuli, which was thought to reflect cortical inhibition biasing activity in the superior colliculus. Recent neurophysiological work does not support this theory, and two recent models have replaced top-down inhibition with lateral interactions in the superior colliculus or neural fatigue in the brainstem saccadic burst generator. All current models operate in retinotopic coordinates and are based on single-saccade paradigms. To extend these models to sequences of saccades, we assessed whether and how saccade curvature depends on previously fixated locations and the direction of previous saccades. With a two-saccade paradigm, we first demonstrated that second saccades curved away from the initial fixation stimulus. Furthermore, by varying the time from fixation offset and the intersaccadic duration, we distinguished the extent of curvature originating from the spatiotopic representation of the previous fixation location or from residual motor activity of the previous saccade. Results suggest that both factors drive curvature, and we discuss how these effects could be implemented in current models. In particular, we propose that the collicular retinotopic maps receive an excitatory spatiotopic update from the lateral intraparietal region.
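The proposed extension hinges on the difference between retinotopic and spatiotopic coding. The short sketch below illustrates how a world-fixed location must be remapped into eye-centred coordinates after each saccade of a two-saccade sequence; the coordinates are arbitrary example values, not data or parameters from the study.

```python
# Minimal numerical sketch of the retinotopic vs. spatiotopic distinction:
# a world-fixed (spatiotopic) location must be re-expressed in eye-centred
# (retinotopic) coordinates after every saccade. The coordinates below are
# arbitrary illustration values, not data from the study.
import numpy as np

initial_fixation = np.array([0.0, 0.0])      # initial fixation stimulus, world coordinates (deg)
saccade_targets = [np.array([8.0, 0.0]),     # first saccade target
                   np.array([8.0, 6.0])]     # second saccade target

eye = initial_fixation.copy()                # current eye position in world coordinates
for i, target in enumerate(saccade_targets, start=1):
    eye = target                             # execute the saccade
    retinotopic = initial_fixation - eye     # remapped location of the old fixation
    print(f"after saccade {i}: initial fixation lies at {retinotopic} in eye-centred coordinates")

# Injecting an excitatory signal at this remapped site (the proposed spatiotopic
# update to the collicular map) is what would let existing retinotopic models
# produce curvature away from the previously fixated location.
```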
Illusory perceptions of space and time preserve cross-saccadic perceptual continuity
When voluntary saccadic eye movements are made to a silently ticking clock, observers sometimes think that the second hand takes longer than normal to move to its next position. For a short period, the clock appears to have stopped (chronostasis). Here we show that the illusion occurs because the brain extends the percept of the saccadic target backwards in time to just before the onset of the saccade. This occurs every time we move the eyes, but it is only perceived when an external time reference alerts us to the phenomenon. The illusion does not seem to depend on the shift of spatial attention that accompanies the saccade. However, if the target is moved unpredictably during the saccade, breaking perception of the target's spatial continuity, then the illusion disappears. We suggest that temporal extension of the target's percept is one of the mechanisms that 'fill in' the perceptual 'gap' during saccadic suppression. The effect is critically linked to perceptual mechanisms that identify a target's spatial stability.
Mantra: an open method for object and movement tracking
Mantra is a free and open-source software package for object tracking. It is specifically designed to be used as a tool for response collection in psychological experiments and requires only a computer and a camera (a webcam is sufficient). Mantra is compatible with widely used software for creating psychological experiments. In Experiments 1 and 2, we validated the spatial and temporal precision of Mantra in realistic experimental settings. In Experiments 3 and 4, we validated the spatial precision and accuracy of Mantra more rigorously by tracking a computer-controlled physical stimulus and stimuli presented on a computer screen.
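For readers unfamiliar with camera-based response tracking, the snippet below sketches the general idea with a simple colour-threshold tracker built on OpenCV. It is not Mantra's implementation or API; the marker colour range and camera index are assumptions made for illustration.

```python
# Generic illustration of camera-based response tracking. This is NOT Mantra's
# implementation or API: it is a minimal colour-threshold tracker in OpenCV,
# and the HSV range for the marker and the webcam index are assumptions.
import cv2
import numpy as np

LOWER = np.array([40, 80, 80])               # assumed HSV range for a green marker
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)                    # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)    # binary mask of marker-coloured pixels
    m = cv2.moments(mask)
    if m["m00"] > 0:                         # marker visible: report its centroid
        x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"marker at ({x:.1f}, {y:.1f}) px")
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```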
Decision, Sensation, and Habituation: A Multi-Layer Dynamic Field Model for Inhibition of Return
Inhibition of Return (IOR) is one of the most consistent and widely studied effects in experimental psychology. The effect refers to a delayed response to visual stimuli at a cued location after initial priming at that location. This article presents a dynamic field model for IOR. The model describes the evolution of three coupled activation fields. The decision field, inspired by the intermediate layer of the superior colliculus, receives endogenous input and input from a sensory field. The sensory field, inspired by earlier sensory processing, receives exogenous input. Habituation of the sensory field is implemented by a reciprocal coupling with a third field, the habituation field. The model generates IOR because, due to the habituation of the sensory field, the decision field receives a reduced target-induced input in cue-target-compatible situations. The model is consistent with single-unit recordings of neurons of monkeys that perform IOR tasks. Such recordings have revealed that IOR phenomena parallel the activity of neurons in the intermediate layer of the superior colliculus and that neurons in this layer receive reduced input in cue-target-compatible situations. The model is also consistent with behavioral data concerning temporal expectancy effects. In the discussion, the multi-layer dynamic field account of IOR is used to illustrate the broader view that behavior consists of a tuning of the organism to the environment that continuously and concurrently takes place at different spatiotemporal scales.
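To convey the model's structure compactly, the sketch below simulates three coupled one-dimensional fields (sensory, habituation, decision) on a cue-target-compatible versus a cue-target-incompatible trial: habituation builds up where the cue activated the sensory field, so a later target at that site drives the decision field more weakly. All parameters and couplings, and the omission of endogenous input, are simplifications assumed for illustration, not the article's published equations.

```python
# Simplified three-field sketch of the dynamic field account of IOR.
# Parameters, kernels, and couplings are assumptions for illustration only.
import numpy as np

N, dt = 101, 1.0
tau_s, tau_h, tau_d = 10.0, 150.0, 20.0      # sensory, habituation, decision time constants
x = np.arange(N)

def gauss(center, width=4.0, amp=2.0):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def f(u):                                     # sigmoid output function
    return 1.0 / (1.0 + np.exp(-3.0 * u))

def run_trial(cue_pos, target_pos, T=400):
    s = np.full(N, -1.0)                      # sensory field activation
    h = np.zeros(N)                           # habituation trace, reciprocally coupled to s
    d = np.full(N, -1.0)                      # decision field activation
    peak = -np.inf
    for t in range(T):
        exo = np.zeros(N)
        if 50 <= t < 100:                     # cue input
            exo += gauss(cue_pos)
        if 250 <= t < 300:                    # target input
            exo += gauss(target_pos)
        s += dt / tau_s * (-s - 1.0 + exo - 3.0 * h)   # habituation suppresses the sensory field
        h += dt / tau_h * (-h + f(s))                  # sensory output builds up habituation
        d += dt / tau_d * (-d - 1.0 + 2.0 * f(s))      # sensory output drives the decision field
        if t >= 250:
            peak = max(peak, float(d[target_pos]))
    return peak

# A weaker target response at the previously cued site gives the IOR pattern qualitatively.
print("cue-target compatible  :", round(run_trial(30, 30), 3))
print("cue-target incompatible:", round(run_trial(30, 70), 3))
```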
What we observe is biased by what other people tell us: beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues
For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with the displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.