
    Correlated Components of Ongoing EEG Point to Emotionally Laden Attention – A Possible Marker of Engagement?

    Recent evidence from functional magnetic resonance imaging suggests that cortical hemodynamic responses coincide in different subjects experiencing a common naturalistic stimulus. Here we utilize neural responses in the electroencephalogram (EEG) evoked by multiple presentations of short film clips to index brain states marked by high levels of correlation within and across subjects. We formulate a novel signal decomposition method which extracts maximally correlated signal components from multiple EEG records. The resulting components capture correlations down to a one-second time resolution, thus revealing that peak correlations of neural activity across viewings can occur in remarkable correspondence with arousing moments of the film. Moreover, a significant reduction in neural correlation occurs upon a second viewing of the film or when the narrative is disrupted by presenting its scenes scrambled in time. We also probe oscillatory brain activity during periods of heightened correlation, and observe during such times a significant increase in the theta band for a frontal component and reductions in the alpha and beta frequency bands for parietal and occipital components. Low-resolution EEG tomography of these components suggests that the correlated neural activity is consistent with sources in the cingulate and orbitofrontal cortices. Taken together, these results suggest that the observed synchrony reflects attention- and emotion-modulated cortical processing which may be decoded with high temporal resolution by extracting maximally correlated components of neural activity.
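
    The decomposition described here, often called correlated components analysis, reduces to a generalized eigenvalue problem: find projection vectors that maximize covariance between repeated records relative to covariance within them. Below is a minimal Python sketch under that formulation, assuming equal-length, channel-matched recordings; function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def corrca(records):
    """Correlated components analysis sketch.

    records: list of (n_samples, n_channels) arrays, one per viewing/subject,
    all with the same number of samples. Returns projection vectors as a
    (n_channels, n_components) array, most correlated component first.
    """
    n = len(records)
    d = records[0].shape[1]
    r_within = np.zeros((d, d))
    r_between = np.zeros((d, d))
    for k in range(n):
        r_within += np.cov(records[k], rowvar=False)
        for l in range(n):
            if l != k:
                xk = records[k] - records[k].mean(axis=0)
                xl = records[l] - records[l].mean(axis=0)
                r_between += xk.T @ xl / (len(xk) - 1)
    # Generalized eigenproblem: maximize between-record covariance relative
    # to within-record covariance (small ridge keeps r_within invertible).
    evals, evecs = eigh(r_between, r_within + 1e-9 * np.eye(d))
    order = np.argsort(evals)[::-1]
    return evecs[:, order]
```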

    Information-Theoretic Characterization of the Neural Mechanisms of Active Multisensory Decision Making

    The signals delivered by different sensory modalities provide us with complementary information about the environment. A key component of interacting with the world is how to direct one's sensors so as to extract task-relevant information in order to optimize subsequent perceptual decisions. This process is often referred to as active sensing. Importantly, the processing of multisensory information acquired actively from multiple sensory modalities requires the interaction of multiple brain areas over time. Here we investigated the neural underpinnings of active visual-haptic integration during performance of a two-alternative forced choice (2AFC) reaction time (RT) task. We asked human subjects to discriminate the amplitude of two texture stimuli (a) using only visual (V) information, (b) using only haptic (H) information and (c) combining the two sensory cues (VH), while electroencephalograms (EEG) were recorded. To quantify multivariate interactions between EEG signals and active sensory experience in the three sensory conditions, we employed a novel information-theoretic methodology. This approach provides a principled way to quantify the contribution of each of the sensory modalities to the perception of the stimulus and to assess whether the respective neural representations may interact to form a percept of the stimulus and ultimately drive perceptual decisions. Application of this method to our data identified (a) an EEG component (comprising frontal and occipital electrodes) carrying behavioral information that is common to the two sensory inputs and (b) another EEG component (mainly motor) reflecting a synergistic representational interaction between the two sensory inputs. We suggest that the proposed approach can be used to elucidate the neural mechanisms underlying cross-modal interactions in active multisensory processing and decision-making.
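
    The redundancy/synergy distinction at the heart of this approach can be illustrated with co-information, a simple and cruder relative of the full partial information decomposition such studies employ: positive co-information between two neural signals and the behavioral choice suggests redundancy-dominated coding, negative values a synergy-dominated interaction. A hedged sketch with illustrative names and plug-in discrete entropies:

```python
import numpy as np

def entropy(*vars_, bins=4):
    """Plug-in joint entropy (bits) of variables discretized by quantile bins."""
    codes = [np.digitize(v, np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1]))
             for v in vars_]
    joint = np.ravel_multi_index(codes, [bins + 1] * len(codes))
    p = np.bincount(joint) / len(joint)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def co_information(v_eeg, h_eeg, choice):
    """Co-information of two single-trial EEG amplitudes and the choice.

    Positive values: redundancy-dominated; negative: synergy-dominated.
    A crude proxy only; it conflates redundancy and synergy into one number,
    unlike a full partial information decomposition.
    """
    return (entropy(v_eeg) + entropy(h_eeg) + entropy(choice)
            - entropy(v_eeg, h_eeg) - entropy(v_eeg, choice)
            - entropy(h_eeg, choice)
            + entropy(v_eeg, h_eeg, choice))
```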

    Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing

    Many real-world decisions rely on active sensing, a dynamic process for directing our sensors (e.g. eyes or fingers) across a stimulus to maximize information gain. Though ecologically pervasive, limited work has focused on identifying neural correlates of the active sensing process. In tactile perception, we often make decisions about an object/surface by actively exploring its shape/texture. Here we investigate the neural correlates of active tactile decision-making by simultaneously measuring electroencephalography (EEG) and finger kinematics while subjects interrogated a haptic surface to make perceptual judgments. Since sensorimotor behavior underlies decision formation in active sensing tasks, we hypothesized that the neural correlates of decision-related processes would be detectable by relating active sensing to neural activity. A novel brain-behavior correlation analysis revealed that three distinct EEG components, localizing to right-lateralized occipital cortex (LOC), middle frontal gyrus (MFG), and supplementary motor area (SMA), respectively, were coupled with active sensing, as their activity significantly correlated with finger kinematics. To probe the functional role of these components, we fit their single-trial couplings to decision-making performance using a hierarchical drift-diffusion model (HDDM), revealing that the LOC modulated the encoding of the tactile stimulus whereas the MFG predicted the rate of information integration towards a choice. Interestingly, the MFG component was absent from the components recovered from control subjects who performed active sensing but were not required to make perceptual decisions. By uncovering the neural correlates of distinct stimulus encoding and evidence accumulation processes, this study delineated, for the first time, the functional role of cortical areas in active tactile decision-making.
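
    Relating single-trial neural couplings to drift-diffusion parameters is supported by the hddm Python package through its regression interface. The sketch below shows the general pattern only; the file name, column names, and the choice to place both couplings on the drift rate are assumptions for illustration, not the paper's exact model.

```python
import hddm  # pip install hddm

# Assumed trial table: columns 'rt' (s), 'response' (0/1), 'subj_idx', plus
# z-scored single-trial EEG-kinematics couplings 'mfg' and 'loc'.
data = hddm.load_csv('active_touch_trials.csv')  # hypothetical file

# Regress drift rate v (evidence accumulation) on both couplings, so the
# posterior on each coefficient indicates whether that component predicts
# the rate of information integration towards a choice.
model = hddm.HDDMRegressor(data, 'v ~ mfg + loc', group_only_regressors=True)
model.sample(2000, burn=500)
model.print_stats()
```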

    Improving object segmentation by using EEG signals and rapid serial visual presentation

    This paper extends our previous work on the potential of EEG-based brain-computer interfaces to segment salient objects in images. The proposed system analyzes the Event Related Potentials (ERP) generated by the rapid serial visual presentation of windows on the image. The detection of the P300 signal allows estimating a saliency map of the image, which is used to seed a semi-supervised object segmentation algorithm. Thanks to the new contributions presented in this work, the average Jaccard index improved from 0.47 to 0.66 on our publicly available dataset of images, object masks and captured EEG signals. This work also studies alternative architectures to the original one, the impact of object occupation in each image window, and a more robust evaluation based on statistical analysis and a weighted F-score.
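
    The central step of such a pipeline, turning per-window P300 classifier scores into a saliency map and scoring the resulting mask with the Jaccard index, can be sketched as follows; the window geometry and score handling are illustrative stand-ins for the actual system.

```python
import numpy as np

def saliency_from_scores(image_shape, windows, p300_scores):
    """Accumulate per-window P300 classifier scores into a saliency map.

    windows: list of (row, col, height, width) tuples, one per RSVP window;
    p300_scores: one classifier score per window. Overlapping windows
    average their scores.
    """
    acc = np.zeros(image_shape, dtype=float)
    cnt = np.zeros(image_shape, dtype=float)
    for (r, c, h, w), s in zip(windows, p300_scores):
        acc[r:r + h, c:c + w] += s
        cnt[r:r + h, c:c + w] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

def jaccard(pred_mask, gt_mask):
    """Jaccard index (intersection over union) of two boolean masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0
```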

    Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces

    The N2pc is a lateralised Event-Related Potential (ERP) that signals a shift of attention towards the location of a potential object of interest. We propose a single-trial target-localisation collaborative Brain-Computer Interface (cBCI) that exploits this ERP to automatically approximate the horizontal position of targets in aerial images. Images were presented by means of the rapid serial visual presentation technique at rates of 5, 6 and 10 Hz. We created three different cBCIs and tested a participant selection method in which groups are formed according to the similarity of participants' performance. The N2pc that is elicited in our experiments contains information about the position of the target along the horizontal axis. Moreover, combining information from multiple participants provides absolute median improvements in the area under the receiver operating characteristic curve of up to 21% (for groups of size 3) with respect to single-user BCIs. These improvements are larger when groups are formed by participants with similar individual performance, and much of this effect can be explained using simple theoretical models. Our results suggest that BCIs for automated triaging can be improved by integrating two classification systems: one devoted to target detection and another to detect the attentional shifts associated with lateral targets.
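
    A minimal sketch of the collaborative fusion idea: average single-trial classifier scores across the members of a group and compare the group AUC with the mean single-user AUC. Plain score averaging is one simple fusion rule; the participant-similarity grouping studied in the paper is not reproduced here, and the label coding is illustrative.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def group_auc_gain(scores, labels, group_size=3):
    """Mean absolute AUC improvement of score-averaged groups over singles.

    scores: (n_participants, n_trials) classifier outputs on shared trials;
    labels: (n_trials,) binary target side, e.g. 1 = left, 0 = right.
    """
    scores = np.asarray(scores)
    single = np.mean([roc_auc_score(labels, s) for s in scores])
    group = np.mean([roc_auc_score(labels, scores[list(g)].mean(axis=0))
                     for g in combinations(range(len(scores)), group_size)])
    return group - single
```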

    A comparison of univariate, vector, bilinear autoregressive, and band power features for brain–computer interfaces

    Selecting suitable feature types is crucial to obtain good overall brain–computer interface performance. Popular feature types include logarithmic band power (logBP), autoregressive (AR) parameters, time-domain parameters, and wavelet-based methods. In this study, we focused on different variants of AR models and compared their performance with logBP features. In particular, we analyzed univariate, vector, and bilinear AR models. We used four-class motor imagery data from nine healthy users over two sessions. We used the first session to optimize parameters such as model order and frequency bands, and then evaluated the optimized feature extraction methods on the unseen second session. We found that band power yields significantly higher classification accuracies than AR methods. However, our analysis procedure did not update the bias of the classifiers for the second session; when the bias was updated at the beginning of the new session, no significant differences between the methods remained. Furthermore, our results indicate that subject-specific optimization is no better than globally optimized parameters. The comparison within the AR methods showed that the vector model is significantly better than both univariate and bilinear variants. Finally, adding the prediction error variance to the feature space significantly improved classification results.
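
    The two feature families being compared can be sketched compactly: logarithmic band power as the log-variance of band-pass-filtered trials, and univariate AR coefficients fit per channel by least squares. The band edges and model order below are illustrative; the study optimized such parameters on the first session, and the prediction-error variance mentioned above can be appended as an extra feature.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def log_band_power(trial, fs, band=(8.0, 12.0), filt_order=4):
    """logBP feature: trial is (n_samples, n_channels); returns one value
    per channel (log-variance of the band-pass-filtered signal)."""
    b, a = butter(filt_order, [band[0] / (fs / 2), band[1] / (fs / 2)],
                  btype='band')
    filtered = filtfilt(b, a, trial, axis=0)
    return np.log(np.var(filtered, axis=0))

def ar_features(trial, p=6):
    """Univariate AR(p) coefficients per channel via least squares.

    Predicts x[t] from x[t-1..t-p]; returns a (p, n_channels) array. The
    residual variance of each fit could be appended as a further feature.
    """
    feats = []
    for x in trial.T:
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        feats.append(coef)
    return np.array(feats).T
```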