
    Motivational context for response inhibition influences proactive involvement of attention

    Motoric inhibition is ingrained in human cognition and implicated in pervasive neurological diseases and disorders. The present electroencephalographic (EEG) study investigated proactive motivational adjustments in attention during response inhibition. We compared go-trial data from a stop-signal task, in which infrequently presented stop-signals required response cancellation without extrinsic incentives ("standard-stop"), to data where a monetary reward was posted on some stop-signals ("rewarded-stop"). A novel EEG analysis was used to directly model the covariation between response time and the attention-related N1 component. A positive relationship between response time and N1 amplitudes was found in the standard-stop context, but not in the rewarded-stop context. Simultaneously, average go-trial N1 amplitudes were larger in the rewarded-stop context. This suggests that down-regulation of go-signal-directed attention is dynamically adjusted in the standard-stop trials, but is overridden by a more generalized increase in attention in reward-motivated trials. Further, a diffusion process model indicated that behavior between contexts was the result of partially opposing evidence accumulation processes. Together these analyses suggest that response inhibition relies on dynamic and flexible proactive adjustments of low-level processes and that contextual changes can alter their interplay. This could prove to have ramifications for clinical disorders involving deficient response inhibition and impulsivity.
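    The diffusion-model analysis mentioned in this abstract can be illustrated with a minimal simulation sketch (function and parameter names are illustrative, not taken from the study): evidence accumulates noisily toward a response boundary, and the response time is the first-passage time plus a non-decision component.

    ```python
    import random

    def simulate_ddm_trial(drift, boundary, noise_sd=1.0, dt=0.001,
                           non_decision=0.2, rng=None):
        """Simulate one diffusion-model trial.

        Evidence x starts at 0 and drifts toward +boundary or -boundary;
        crossing the upper boundary is coded as choice 1. All parameter
        values here are hypothetical defaults, not the study's estimates.
        """
        rng = rng or random.Random()
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            # Euler step of the Wiener diffusion process
            x += drift * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
            t += dt
        # Response time = decision time + non-decision time (encoding/motor)
        return (1 if x > 0 else 0), t + non_decision
    ```

    With a strongly positive drift rate, most simulated trials terminate at the upper boundary, which is how such models translate accumulation parameters into choice proportions and RT distributions.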

    Effect of reward contingencies on multiple-target visual search

    It has long been known that human search behaviour is influenced by different mechanisms of attentional control: we can voluntarily pay attention, according to context-specific goals, or direct it involuntarily, guided by the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the way in which we voluntarily deploy our attention. This thesis focuses on the effects of reward, specifically a monetary reward: neutral stimuli are imbued with value via associative learning during a training phase. This work investigates whether these stimuli can capture attention in a subsequent foraging task. This mechanism, known as value-driven attentional capture, has so far been investigated only in classical visual search, not in a foraging context: can it influence search behaviour when there are multiple targets?

    Self-directedness, integration and higher cognition

    In this paper I discuss connections between self-directedness, integration and higher cognition. I present a model of self-directedness as a basis for approaching higher cognition from a situated cognition perspective. According to this model, increases in sensorimotor complexity create pressure for integrative higher-order control and learning processes that acquire information about the context in which action occurs. This generates complex articulated abstractive information processing, which forms the major basis for higher cognition. I present evidence that the same integrative characteristics found in lower cognitive processes such as motor adaptation are present in a range of higher cognitive processes, including conceptual learning. This account helps explain situated cognition phenomena in humans because the integrative processes by which the brain adapts to control interaction are relatively agnostic concerning the source of the structure participating in the process. Thus, from the perspective of the motor control system, using a tool is not fundamentally different from simply controlling an arm.

    SOVEREIGN: An Autonomous Neural System for Incrementally Learning Planned Action Sequences to Navigate Towards a Rewarded Goal

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. 
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
Riverside Research Institute; Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225); National Science Foundation (IRI 90-24877, SBE-0345378); Office of Naval Research (N00014-92-J-1309, N00014-91-J-4100, N00014-01-1-0624, N00014-01-1-0624); Pacific Sierra Research (PSR 91-6075-2)

    Recurrent Models of Visual Attention

    Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.
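    The fixed-compute property described in this abstract comes from processing only a small glimpse of the image at each step, so per-step cost does not grow with image resolution. A minimal sketch of glimpse extraction (hypothetical names; the paper's model additionally uses multi-resolution patches and a learned location policy):

    ```python
    import numpy as np

    def extract_glimpse(image, center, size):
        """Crop a size x size patch around `center` (row, col).

        The image is zero-padded so glimpses near an edge stay the same
        shape; the recurrent model would process only such patches, giving
        computation that is fixed per step regardless of image size.
        """
        pad = size // 2
        padded = np.pad(image, pad, mode="constant")
        # Shift center into padded coordinates, then slice the patch
        r, c = center[0] + pad, center[1] + pad
        return padded[r - size // 2: r - size // 2 + size,
                      c - size // 2: c - size // 2 + size]
    ```

    A full model would feed each glimpse (plus its location) into a recurrent core, whose state drives both the classification output and the next glimpse location, trained with a policy-gradient method such as REINFORCE.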

    Learned value and object perception: Accelerated perception or biased decisions?

    Learned value is known to bias visual search toward valued stimuli. However, some uncertainty exists regarding the stage of visual processing that is modulated by learned value. Here, we directly tested the effect of learned value on preattentive processing using temporal order judgments. Across four experiments, we imbued some stimuli with high value and some with low value, using a nonmonetary reward task. In Experiment 1, we replicated the value-driven distraction effect, validating our nonmonetary reward task. Experiment 2 showed that high-value stimuli, but not low-value stimuli, exhibit a prior-entry effect. Experiment 3, which reversed the temporal order judgment task (i.e., reporting which stimulus came second), showed no prior-entry effect, indicating that although a response bias may be present for high-value stimuli, they are still reported as appearing earlier. However, Experiment 4, using a simultaneity judgment task, showed no shift in temporal perception. Overall, our results support the conclusion that learned value biases perceptual decisions about valued stimuli without speeding preattentive stimulus processing.
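    Prior entry in a temporal order judgment is typically quantified as a shift in the point of subjective simultaneity (PSS): the stimulus onset asynchrony at which both orders are reported equally often. A minimal sketch of estimating the PSS by linear interpolation across SOAs (illustrative only; such studies usually fit a full psychometric function instead):

    ```python
    def estimate_pss(soas, p_first):
        """Estimate the point of subjective simultaneity.

        `soas` are stimulus onset asynchronies (ms, sorted ascending) and
        `p_first` the proportion of 'valued stimulus seen first' responses
        at each SOA. Returns the SOA where responses cross 50%, found by
        linear interpolation between the two bracketing SOAs; a nonzero
        PSS indicates a prior-entry shift.
        """
        points = list(zip(soas, p_first))
        for (s0, p0), (s1, p1) in zip(points, points[1:]):
            if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
                return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
        return None  # no crossing within the tested SOA range
    ```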

    Sharing a Context with Other Rewarding Events Increases the Probability that Neutral Events will be Recollected.

    Although reward is known to enhance memory for reward-predicting events, the extent to which such memory effects spread to associated (neutral) events is unclear. Using a between-subject design, we examined how sharing a background context with rewarding events influenced memory for motivationally neutral events (tested after a 5-day delay). We found that sharing a visually rich context with rewarding objects during encoding increased the probability that neutral objects would be successfully recollected during the memory test, as opposed to merely being recognized without any recall of associative detail. In contrast, such an effect was not seen when the context was not explicitly demarcated and objects were presented against a blank black background. These qualitative changes in memory were observed in the absence of any effects on overall recognition (as measured by d'). Additionally, a follow-up study failed to find any evidence that the mere presence of a context picture in the background during encoding (i.e., without the reward manipulation) produced any such qualitative changes in memory. These results suggest that reward enhances recollection for rewarding objects as well as other non-rewarding events that are representationally linked to the same context.
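    Overall recognition in such designs is commonly summarized by the signal-detection sensitivity measure d', computed from hit and false-alarm rates. A minimal sketch (the rate-clipping correction here is a common convention, not necessarily the study's exact method):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate, eps=0.01):
        """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

        Rates are clipped away from 0 and 1 so the inverse-normal
        transform stays finite for perfect or empty response cells.
        """
        clip = lambda p: min(max(p, eps), 1 - eps)
        z = NormalDist().inv_cdf  # standard normal quantile function
        return z(clip(hit_rate)) - z(clip(fa_rate))
    ```

    Equal hit and false-alarm rates give d' = 0 (no sensitivity), so equivalent d' across conditions, as reported above, means the reward/context manipulation changed the quality of memory (recollection) rather than raw discriminability.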

    Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search

    In search applications, autonomous unmanned vehicles must be able to efficiently reacquire and localize mobile targets that can remain out of view for long periods of time in large spaces. As such, all available information sources must be actively leveraged -- including imprecise but readily available semantic observations provided by humans. To this end, this work develops and validates a novel collaborative human-machine sensing solution for dynamic target search. Our approach uses continuous partially observable Markov decision process (CPOMDP) planning to generate vehicle trajectories that optimally exploit imperfect detection data from onboard sensors, as well as semantic natural language observations that can be specifically requested from human sensors. The key innovation is a scalable hierarchical Gaussian mixture model formulation for efficiently solving CPOMDPs with semantic observations in continuous dynamic state spaces. The approach is demonstrated and validated with a real human-robot team engaged in dynamic indoor target search and capture scenarios on a custom testbed.
    Comment: Final version accepted and submitted to 2018 FUSION Conference (Cambridge, UK, July 2018)
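    The core idea of fusing a Gaussian-mixture belief over target position with a Gaussian-modeled semantic observation can be sketched in one dimension (hypothetical names and a flat mixture; the paper's formulation is hierarchical and operates in continuous dynamic state spaces):

    ```python
    import math

    def fuse_semantic_observation(prior, obs_mean, obs_var):
        """Bayes-fuse a 1-D Gaussian-mixture prior with one Gaussian
        observation, e.g. a human report mapped to N(obs_mean, obs_var).

        `prior` is a list of (weight, mean, var) components. Each component
        is multiplied by the observation likelihood (product of Gaussians),
        and weights are renormalized by the evidence each component assigns
        to the observation. Returns the posterior mixture in the same form.
        """
        posterior = []
        for w, m, v in prior:
            s = v + obs_var
            # Component evidence: N(obs_mean; m, v + obs_var)
            like = math.exp(-(obs_mean - m) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)
            k = v / s  # Kalman-style gain for the product of two Gaussians
            posterior.append((w * like, m + k * (obs_mean - m), v * obs_var / s))
        total = sum(w for w, _, _ in posterior)
        return [(w / total, m, v) for w, m, v in posterior]
    ```

    A report like "the target is near the doorway" would concentrate posterior weight on mixture components consistent with that location while sharpening their variances, which is the mechanism the planner can exploit when choosing what to ask the human.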