
    Within-trial effects of stimulus-reward associations

    While a globally energizing influence of motivation has long been appreciated in psychological research, a series of more recent studies has described motivational influences on specific cognitive operations ranging from visual attention to cognitive control to memory formation. In the majority of these studies, a cue predicts the potential to win money in a subsequent task, thus allowing for modulations of proactive task preparation. Here we describe some recent studies using tasks that communicate reward availability without such cues, by directly associating specific task features with reward. Despite abolishing the cue-based preparation phase, these studies show similar performance benefits. Given the clear difference in temporal structure, a central question is how these behavioral effects are brought about, and in particular whether control processes can be enhanced rapidly and reactively. We present some evidence in favor of this notion. Although additional influences, for example sensory prioritization of reward-related features, could contribute to the reward-related performance benefits, those benefits seem to rely strongly on enhancements of control processes during task execution. Still, for a better mechanistic understanding of reward benefits in these two principal paradigms (cues vs. no cues), more work is needed that directly compares the underlying processes. We anticipate that reward benefits can be brought about in a very flexible fashion depending on the exact nature of the reward manipulation and task, and that a better understanding of these processes will be relevant not only for basic motivation research but also for educational and psychopathological contexts.

    Reactive and proactive cognitive control


    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research, 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
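    The radial displacement used in the modified task is straightforward to reproduce. Below is a minimal sketch (not from the paper) of one way to compute it, assuming rectangle centres are specified in degrees of visual angle relative to central fixation; the function name and the choice of a random shift direction are illustrative assumptions.

```python
import math
import random

def shift_along_spoke(x_deg, y_deg, shift_deg=1.0):
    """Shift a rectangle's centre radially along the imaginary spoke that runs
    from central fixation (0, 0) through the rectangle, by +/- shift_deg of
    visual angle, with the sign of the shift chosen at random."""
    eccentricity = math.hypot(x_deg, y_deg)   # distance from fixation
    angle = math.atan2(y_deg, x_deg)          # direction of the spoke
    new_ecc = eccentricity + random.choice([-1.0, 1.0]) * shift_deg
    return new_ecc * math.cos(angle), new_ecc * math.sin(angle)

# Example: eight rectangle centres on a circle of 4 degrees eccentricity.
positions = [(4.0 * math.cos(2 * math.pi * k / 8),
              4.0 * math.sin(2 * math.pi * k / 8)) for k in range(8)]
shifted = [shift_along_spoke(x, y) for x, y in positions]
```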

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and so is licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; exploit the Information commercially and non-commercially, for example by combining it with other Information, or by including it in your own product or application. Where you do any of the above you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/. This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held on 17th April 2019 at the University of Hertfordshire, Hatfield, UK. The conference is a local event that aims to bring together research students, staff, and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    A Novel Analysis of Performance Classification and Workload Prediction Using Electroencephalography (EEG) Frequency Data

    Across the DOD, each task an operator is presented with has some level of difficulty associated with it. This level of difficulty over the course of the task is known as workload: the operator faces varying levels of workload as he or she attempts to complete the task. The focus of the research presented in this thesis is to determine whether those changes in workload can be predicted, and whether individuals can be classified based on performance, in order to prevent increases in workload that would cause a decline in performance on a given task. Despite many efforts to predict workload and classify individuals with machine learning, the classification and predictive ability of electroencephalography (EEG) frequency data has not been explored at the level of individual EEG frequency bands. In a 711th HPW/RCHP Human Universal Measurement and Assessment Network (HUMAN) Lab study, 14 subjects were asked to complete two tasks over 16 scenarios while their physiological data, including EEG frequency data, were recorded to capture the physiological changes their bodies went through over the course of the experiment. The research presented in this thesis focuses on EEG frequency data and its ability to predict task performance and changes in workload. Several machine learning techniques are explored before a final technique is chosen. This thesis contributes research to the medical and machine learning fields regarding the classification and workload-prediction efficacy of EEG frequency data. Specifically, it presents a novel investigation of five EEG frequency bands and their individual abilities to predict task performance and workload. It was discovered that using the gamma EEG frequency band, and all EEG frequency bands combined, to predict task performance resulted in average classification accuracies greater than 90%.
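    As an illustration of the per-band analysis described above, the following is a minimal sketch of classifying task performance from EEG frequency-band power features. The synthetic data, the binary performance labels, and the use of scikit-learn with a random-forest classifier are assumptions for illustration only and do not reproduce the thesis's actual pipeline or its reported accuracies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per scenario/epoch, one column per
# EEG band power (delta, theta, alpha, beta, gamma) averaged over channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(224, 5))        # e.g. 14 subjects x 16 scenarios
y = rng.integers(0, 2, size=224)     # binary performance label (high/low)

bands = ["delta", "theta", "alpha", "beta", "gamma"]
clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))

# Evaluate each band on its own, then all bands combined. With random
# synthetic data the accuracies will hover near chance; with real EEG
# features this mirrors the per-band comparison described above.
for i, band in enumerate(bands):
    scores = cross_val_score(clf, X[:, [i]], y, cv=5)
    print(f"{band:>5}: mean CV accuracy = {scores.mean():.2f}")

scores = cross_val_score(clf, X, y, cv=5)
print(f"all bands: mean CV accuracy = {scores.mean():.2f}")
```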

    Investigating the Neural Basis of Audiovisual Speech Perception with Intracranial Recordings in Humans

    Speech is inherently multisensory, containing auditory information from the voice and visual information from the mouth movements of the talker. Hearing the voice is usually sufficient to understand speech; however, in noisy environments, or when audition is impaired due to aging or disabilities, seeing mouth movements greatly improves speech perception. Although behavioral studies have firmly established this perceptual benefit, it is still not clear how the brain processes visual information from mouth movements to improve speech perception. To clarify this issue, I studied neural activity recorded from the brain surfaces of human subjects using intracranial electrodes, a technique known as electrocorticography (ECoG). First, I studied responses to noisy speech in the auditory cortex, specifically in the superior temporal gyrus (STG). Previous studies identified the anterior parts of the STG as unisensory, responding only to auditory stimuli. The posterior parts of the STG, on the other hand, are known to be multisensory, responding to both auditory and visual stimuli, which makes this region a key site for audiovisual speech perception. I examined how these different parts of the STG respond to clear versus noisy speech. I found that noisy speech decreased the amplitude and increased the across-trial variability of the response in the anterior STG. However, possibly due to its multisensory composition, the posterior STG was not as sensitive to auditory noise as the anterior STG and responded similarly to clear and noisy speech. I also found that these two response patterns in the STG were separated by a sharp boundary demarcated by the posterior-most portion of Heschl's gyrus. Second, I studied responses to silent speech in the visual cortex. Previous studies demonstrated that the visual cortex shows response enhancement when the auditory component of speech is noisy or absent; however, it was not clear which regions of the visual cortex specifically show this enhancement, or whether it results from top-down modulation by a higher-order region. To test this, I first mapped the receptive fields of different regions in the visual cortex and then measured their responses to visual-only (silent) and audiovisual speech stimuli. I found that visual regions with central receptive fields show greater response enhancement to visual speech, possibly because these regions receive more visual information from mouth movements. I found similar response enhancement to visual speech in the frontal cortex, specifically in the inferior frontal gyrus and in premotor and dorsolateral prefrontal cortices, which have been implicated in speech reading in previous studies. I showed that these frontal regions display strong functional connectivity during speech perception with visual regions that have central receptive fields.
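    The anterior-STG finding above rests on two per-electrode summary measures: the mean response amplitude and the across-trial variability of the response. Below is a minimal sketch of how such measures could be computed, assuming single-trial high-gamma power time series stored in NumPy arrays; the analysis window, array shapes, and synthetic data are illustrative assumptions rather than the study's actual analysis.

```python
import numpy as np

def response_summary(trials, t, window=(0.0, 0.5)):
    """Summarise one electrode's response in a post-stimulus window.

    trials : array (n_trials, n_samples), high-gamma power per trial
    t      : array (n_samples,), time in seconds relative to speech onset
    Returns the trial-mean amplitude and the across-trial variability
    (standard deviation of the per-trial mean responses).
    """
    mask = (t >= window[0]) & (t <= window[1])
    per_trial = trials[:, mask].mean(axis=1)   # one response value per trial
    return per_trial.mean(), per_trial.std(ddof=1)

# Illustrative synthetic data for one electrode: clear vs noisy speech trials.
rng = np.random.default_rng(1)
t = np.linspace(-0.2, 1.0, 600)
clear = rng.normal(loc=2.0, scale=0.5, size=(60, t.size))
noisy = rng.normal(loc=1.2, scale=1.0, size=(60, t.size))

for label, trials in [("clear", clear), ("noisy", noisy)]:
    amp, sd = response_summary(trials, t)
    print(f"{label}: mean amplitude = {amp:.2f}, across-trial SD = {sd:.2f}")
```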