
    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic indicators of brain activity) into a command that executes an action in a BCI application (e.g., a wheelchair, a cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the individual's ongoing brain activity, which can provide insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research rests on the fact that they allow one to "think outside the lab". The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision ever more applications. Clinical and everyday uses are described with the aim of inviting readers to imagine potential further developments.
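The signal-to-command pipeline described above (brain signals → trained classifier → application command) can be sketched in a few lines. This is a minimal illustration, not the reprint's method: the band-power features, mental-state labels, and command mapping are all hypothetical, and a nearest-centroid rule stands in for whatever classifier a real BCI would train on recorded EEG.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-dimensional band-power features (e.g., alpha vs. beta
# power) for two mental states; a real BCI would extract these from EEG.
relaxed = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(50, 2))
focused = rng.normal(loc=[0.3, 0.9], scale=0.1, size=(50, 2))

# "Training": store the mean feature vector (centroid) of each state.
centroids = {"relaxed": relaxed.mean(axis=0), "focused": focused.mean(axis=0)}

# Hypothetical mapping from recognized mental state to application command.
commands = {"relaxed": "cursor_idle", "focused": "cursor_move"}

def classify(features):
    """Assign a feature vector to the nearest state centroid, return the command."""
    state = min(centroids, key=lambda s: np.linalg.norm(features - centroids[s]))
    return state, commands[state]
```

In a real-time system this classification step would run on each incoming window of brain activity, turning the recognized mental state into a command for the wheelchair, cursor, or speller.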

    Functional Organization of the Human Brain: How We See, Feel, and Decide.

    The human brain is responsible for constructing how we perceive, think, and act in the world around us. The organization of these functions is intricately distributed throughout the brain. Here, I discuss how functional magnetic resonance imaging (fMRI) was employed to address three broad questions: how do we see, feel, and decide? First, high-resolution fMRI was used to measure the polar angle representation of saccadic eye movements in the superior colliculus. We found that eye movements along the superior-inferior visual field are mapped across the medial-lateral anatomy of a subcortical midbrain structure, the superior colliculus (SC). This result is consistent with the topography in monkey SC. Second, we measured the empathic responses of the brain as people watched a hand get painfully stabbed with a needle. We found that if the hand was labeled as belonging to the same religion as the observer, the empathic neural response was heightened, creating a strong ingroup bias that could not be readily manipulated. Third, we measured brain activity in individuals as they made free decisions (i.e., choosing randomly which of two buttons to press) and found that activity within fronto-thalamic networks was significantly decreased compared to being instructed (forced) to press a particular button. I also summarize findings from several other projects, ranging from addiction therapies to decoding visual imagination to how corporations are represented as people. Together, these approaches illustrate how functional neuroimaging can be used to understand the organization of the human brain.

    The perils of automaticity

    Classical theories of skill acquisition propose that automatization (i.e., performance requiring progressively less attention as experience is acquired) is a defining characteristic of expertise in a variety of domains (e.g., Fitts & Posner, 1967). Automaticity is believed to enhance smooth and efficient skill execution by allowing performers to focus on strategic elements of performance rather than on the mechanical details that govern task implementation (Williams & Ford, 2008). By contrast, conscious processing (i.e., paying conscious attention to one's action during motor execution) has been found to disrupt skilled movement and performance proficiency (e.g., Beilock & Carr, 2001). On the basis of this evidence, researchers have tended to extol the virtues of automaticity. However, few researchers have considered the wide range of empirical evidence indicating that highly automated behaviors can, on occasion, lead to a series of errors that may prove deleterious to skilled performance. Therefore, the purpose of the current paper is to highlight the perils, rather than the virtues, of automaticity. We draw on Reason's (1990) classification scheme of everyday errors to show how an overreliance on automated procedures may lead to three specific performance errors (i.e., mistakes, slips, and lapses) in a variety of skill domains (e.g., sport, dance, music). We conclude by arguing that skilled performance requires the dynamic interplay of automatic and conscious processing in order to avoid performance errors and to meet the contextually contingent demands that characterize competitive environments in a range of skill domains.

    The neural correlates of regulating another person's emotions: an exploratory fMRI study

    Studies investigating the neurophysiological basis of intrapersonal emotion regulation (control of one's own emotional experience) report that the frontal cortex exerts a modulatory effect on limbic structures such as the amygdala and insula. However, no imaging study to date has examined the neurophysiological processes involved in interpersonal emotion regulation, where the goal is explicitly to regulate another person's emotion. Twenty healthy participants (10 males) underwent fMRI while regulating their own or another person's emotions. Intrapersonal and interpersonal emotion regulation tasks recruited an overlapping network of brain regions, including bilateral lateral frontal cortex, pre-supplementary motor area, and left temporo-parietal junction. Activations unique to the interpersonal condition suggest that both affective (emotional simulation) and cognitive (mentalizing) aspects of empathy may be involved in the process of interpersonal emotion regulation. These findings provide an initial insight into the neural correlates of regulating another person's emotions and may be relevant to understanding mental health issues that involve problems with social interaction.

    An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair Control

    Across the 34 developed and 156 developing countries, about 132 million disabled people need a wheelchair, constituting 1.86% of the world population. Moreover, millions of people suffer from diseases related to motor disabilities that cause an inability to produce controlled movement in any of the limbs or even the head. This paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, using an eye-controlled electric wheelchair. The system takes images of the user's eye as input, processes them to estimate the gaze direction, and moves the wheelchair accordingly. To accomplish this, four user-specific methods were developed, implemented, and tested, all based on a benchmark database created by the authors. The first three techniques were automatic, employed correlation, and were variants of template matching, while the last used convolutional neural networks (CNNs). Different metrics were computed to quantitatively evaluate each algorithm's accuracy and latency, and an overall comparison is presented. The CNN exhibited the best performance (99.3% classification accuracy) and was therefore chosen as the gaze estimator that commands the wheelchair motion. The system was carefully evaluated on 8 subjects, achieving 99% accuracy under changing illumination conditions, outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. With the help of an array of proximity sensors, the wheelchair controller can bypass any decision made by the gaze estimator and immediately halt motion if the measured distance falls below a well-defined safety margin. Comment: accepted for publication in Sensors; 19 figures, 3 tables.
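The correlation-based template-matching variants and the proximity-sensor override lend themselves to a short sketch. This is a minimal illustration, not the authors' implementation: the gaze directions, templates, and safety margin below are hypothetical, and plain normalized cross-correlation over a whole eye-image patch stands in for their three template-matching variants.

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Correlation coefficient between an eye-image patch and a template."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def classify_gaze(eye_image, templates):
    """Pick the gaze direction whose template correlates best with the image."""
    scores = {d: normalized_cross_correlation(eye_image, t)
              for d, t in templates.items()}
    return max(scores, key=scores.get)

def drive_command(gaze_direction, proximity_cm, safety_margin_cm=50.0):
    """Proximity sensors override the gaze estimate below the safety margin."""
    if proximity_cm < safety_margin_cm:
        return "stop"
    return gaze_direction
```

The key design point from the abstract survives even in this toy form: the safety check sits after the gaze estimator, so the sensors can halt the chair no matter what direction the estimator predicts.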

    Action perception as hypothesis testing

    We present a novel computational model that describes action perception as an active inferential process combining motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions – and the underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking of the observed movements. Our model offers a novel perspective on action observation that highlights its active nature, based on prediction dynamics and hypothesis testing.
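The "saccade to the most informative place" step can be illustrated with a simple Bayesian sketch. This is not the authors' generative model: the hypotheses, fixation locations, and likelihood tables below are made up, and expected information gain (average entropy reduction over possible fixation outcomes) stands in for the model's full active-inference machinery.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability vector, ignoring zeros."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def posterior(prior, liks):
    """Bayes update: liks[h] = P(observation | hypothesis h)."""
    post = np.asarray(prior) * np.asarray(liks, dtype=float)
    return post / post.sum()

def expected_info_gain(prior, likelihood_by_location):
    """Score each candidate fixation by expected entropy reduction.

    likelihood_by_location[loc] is a list of likelihood vectors, one per
    possible visual outcome at that location: outcome_liks[o][h].
    """
    h0 = entropy(prior)
    gains = {}
    for loc, outcome_liks in likelihood_by_location.items():
        gain = 0.0
        for liks in outcome_liks:
            liks = np.asarray(liks, dtype=float)
            p_outcome = float((np.asarray(prior) * liks).sum())
            if p_outcome > 0:
                gain += p_outcome * (h0 - entropy(posterior(prior, liks)))
        gains[loc] = gain
    best = max(gains, key=gains.get)
    return best, gains
```

Under this scheme a location whose outcomes discriminate between action hypotheses earns a high score and attracts the next saccade, while a location whose outcomes look the same under every hypothesis earns zero gain – the same logic that makes the model's gaze proactive when context supports accurate prediction.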

    Flexible recruitment of cortical networks in visual and auditory attention

    Our senses, while limited, shape our perception of the world and contribute to the functional architecture of the brain. This dissertation investigates the role of sensory modality and task demands in the cortical organization of healthy human adults using functional magnetic resonance imaging (fMRI). This research provides evidence for sensory modality bias in frontal cortical regions by directly contrasting auditory and visual sustained attention. This contrast revealed two distinct visual-biased regions in lateral frontal cortex - superior and inferior precentral sulcus (sPCS, iPCS) - anatomically interleaved with two auditory-biased regions - transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic (resting-state) functional connectivity analysis demonstrated that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Unisensory (auditory or visual) short-term memory (STM) tasks assessed the flexible recruitment of these sensory-biased cortical regions by varying information domain demands (e.g., spatial, temporal). While both modalities provide spatial and temporal information, vision has greater spatial resolution than audition, and audition has excellent temporal precision relative to vision. A visual temporal, but not a spatial, STM task flexibly recruited frontal auditory-biased regions; conversely, an auditory spatial task more strongly recruited frontal visual-biased regions compared to an auditory temporal task. This flexible recruitment extended to an auditory-biased superior temporal lobe region and to a subset of visual-biased parietal regions. A demanding auditory spatial STM task recruited anterior/superior visuotopic maps (IPS2-4, SPL1) along the intraparietal sulcus, but neither spatial nor temporal auditory tasks recruited posterior/inferior maps.
Finally, a comparison of visual spatial attention and STM under varied cognitive load demands attempted to further elucidate the organization of posterior parietal cortex. Parietal visuotopic maps were recruited for both visual spatial attention and working memory but demonstrated a graded response to task demands. Posterior/inferior maps (IPS0-1) demonstrated a linear relationship with the number of items attended to or remembered in the visual spatial tasks. Anterior/superior maps (IPS2-4, SPL1) demonstrated general recruitment in visual spatial cognitive tasks, with a stronger response for visual spatial attention compared to STM.