
    Multi-modal Approach for Affective Computing

    Throughout the past decade, many studies have classified human emotions using only a single sensing modality, such as face video, electroencephalogram (EEG), electrocardiogram (ECG), or galvanic skin response (GSR). The results of these studies are constrained by the limitations of the chosen modality, such as the absence of physiological biomarkers in face-video analysis, the poor spatial resolution of EEG, or the poor temporal resolution of GSR. Scant research has been conducted to compare the merits of these modalities and to understand how best to use them individually and jointly. Using the multi-modal AMIGOS dataset, this study compares the performance of human emotion classification across multiple computational approaches applied to face videos and various bio-sensing modalities. Using a novel method for compensating for the physiological baseline, we show an increase in the classification accuracy of the various approaches we use. Finally, we present a multi-modal emotion-classification approach in the domain of affective computing research.
    Comment: Published in the IEEE 40th International Engineering in Medicine and Biology Conference (EMBC), 2018.
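    The methodological point most likely to matter for reuse is the physiological baseline compensation. A minimal sketch of one such scheme is given below, where per-trial resting-baseline features are subtracted from stimulus-period features before classification; the feature layout, random data, and classifier choice are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch of per-trial physiological baseline compensation before emotion
# classification. Feature layout, baseline scheme (simple subtraction), and the
# classifier are illustrative assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def baseline_correct(stim_feats, base_feats):
    """Subtract each trial's resting-baseline features from its stimulus-period features."""
    return stim_feats - base_feats

# stim_feats / base_feats: (n_trials, n_features) arrays of, e.g., EEG band power,
# GSR level, and heart-rate features; labels: binary high/low valence per trial.
rng = np.random.default_rng(0)
stim_feats = rng.normal(size=(40, 12))
base_feats = rng.normal(size=(40, 12))
labels = rng.integers(0, 2, size=40)

X = baseline_correct(stim_feats, base_feats)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```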

    On the neural origin of pseudoneglect: EEG-correlates of shifts in line bisection performance with manipulation of line length

    Healthy participants tend to show systematic biases in spatial attention, usually to the left. However, these biases can shift rightward as a result of a number of experimental manipulations. Using electroencephalography (EEG) and a computerized line bisection task, we investigated here for the first time the neural correlates of changes in spatial attention bias induced by line length (the so-called line-length effect). In accordance with previous studies, an overall systematic left bias (pseudoneglect) was present during long-line but not during short-line bisection performance. This effect of line length on behavioral bias was associated with stronger right parieto-occipital responses to long as compared to short lines in an early time window (100–200 ms) after stimulus onset. This early differential activation to long as compared to short lines was task-independent (present even in a non-spatial control task not requiring line bisection), suggesting that it reflects a reflexive attentional response to long lines. This was corroborated by further analyses source-localizing the line-length effect to the right temporo-parietal junction (TPJ) and revealing a positive correlation between the strength of this effect and the magnitude by which long lines (relative to short lines) drive a behavioral left bias across individuals. Therefore, stimulus-driven left bisection bias was associated with increased right-hemispheric engagement of areas of the ventral attention network. This further substantiates that this network plays a key role in the genesis of spatial bias, and suggests that post-stimulus TPJ activity at early information processing stages (around the latency of the N1 component) contributes to the left bias.
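    As a rough illustration of the two measures the abstract relates, a signed bisection bias and the long-minus-short ERP difference in the early 100–200 ms window, a sketch is given below; the channel indices, array shapes, and random data are placeholders, not the study's montage or recordings.

```python
# Illustrative only: quantify line-bisection bias and the long-vs-short line ERP
# difference in an early (100-200 ms) window over right parieto-occipital sensors.
import numpy as np

def bisection_bias(marked_x, true_center_x, line_length):
    """Signed bisection error as a fraction of line length; negative = leftward bias."""
    return (marked_x - true_center_x) / line_length

# erp_long / erp_short: (n_channels, n_times) trial-averaged ERPs; times in seconds.
rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.5, 601)
erp_long = rng.standard_normal((64, times.size))
erp_short = rng.standard_normal((64, times.size))

window = (times >= 0.100) & (times <= 0.200)
right_po = [56, 57, 58]  # hypothetical right parieto-occipital channel indices
diff = (erp_long[right_po][:, window] - erp_short[right_po][:, window]).mean()
print(f"Long-minus-short mean amplitude, 100-200 ms: {diff:.3f} (arbitrary units)")
print(f"Bias for a 200 mm line bisected 4 mm left of center: "
      f"{bisection_bias(96.0, 100.0, 200.0):+.2%}")
```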

    ICLabel: An automated electroencephalographic independent component classifier, dataset, and website

    The electroencephalogram (EEG) provides a non-invasive, minimally restrictive, and relatively low-cost measure of mesoscale brain dynamics with high temporal resolution. Although signals recorded in parallel by multiple, near-adjacent EEG scalp electrode channels are highly correlated and combine signals from many different sources, biological and non-biological, independent component analysis (ICA) has been shown to isolate the various source generator processes underlying those recordings. Independent components (ICs) found by ICA decomposition can be manually inspected, selected, and interpreted, but doing so requires both time and practice, as ICs have no particular order or intrinsic interpretation and therefore require further study of their properties. Alternatively, sufficiently accurate automated IC classifiers can be used to classify ICs into broad source categories, speeding the analysis of EEG studies with many subjects and enabling the use of ICA decomposition in near-real-time applications. While many such classifiers have been proposed recently, this work presents the ICLabel project, comprising (1) an IC dataset containing spatiotemporal measures for over 200,000 ICs from more than 6,000 EEG recordings, (2) a website for collecting crowdsourced IC labels and educating EEG researchers and practitioners about IC interpretation, and (3) the automated ICLabel classifier. The classifier improves upon existing methods in two ways: by improving the accuracy of the computed label estimates and by enhancing computational efficiency. In a rigorous comparison against all other publicly available EEG IC classifiers, the ICLabel classifier outperforms or performs comparably to the previous best publicly available method for all measured IC categories, while computing those labels ten times faster than that classifier.
    Comment: Intended for NeuroImage. Updated from version one with minor editorial and figure changes.
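    For readers who want to try an automated IC classifier of this kind from Python, one possible route is the mne-icalabel port of ICLabel. The sketch below assumes a preprocessed, average-referenced EEG recording loaded as an MNE Raw object; the file name, component count, and probability threshold are illustrative assumptions.

```python
# One possible ICLabel workflow via the mne-icalabel port; file name, component
# count, and probability threshold below are assumptions for illustration.
import mne
from mne.preprocessing import ICA
from mne_icalabel import label_components

raw = mne.io.read_raw_fif("subject01_filtered_raw.fif", preload=True)  # hypothetical file
raw.set_eeg_reference("average")  # ICLabel expects average-referenced EEG

# Extended-infomax ICA is the decomposition mne-icalabel recommends for ICLabel.
ica = ICA(n_components=20, method="infomax", fit_params=dict(extended=True), random_state=0)
ica.fit(raw)

result = label_components(raw, ica, method="iclabel")  # per-IC labels + probabilities
exclude = [idx for idx, (lab, prob) in enumerate(zip(result["labels"], result["y_pred_proba"]))
           if lab in ("eye blink", "muscle artifact") and prob > 0.8]
clean = ica.apply(raw.copy(), exclude=exclude)  # reconstruct data without flagged ICs
```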

    Dynamic Construction of Stimulus Values in the Ventromedial Prefrontal Cortex

    Signals representing the value assigned to stimuli at the time of choice have been repeatedly observed in the ventromedial prefrontal cortex (vmPFC). Yet it remains unknown how these value representations are computed from sensory and memory representations in more posterior brain regions. We used electroencephalography (EEG) while subjects evaluated appetitive and aversive food items to study how event-related responses modulated by stimulus value evolve over time. We found that value-related activity shifted from posterior to anterior, and from parietal to central to frontal sensors, across three major time windows after stimulus onset: 150–250 ms, 400–550 ms, and 700–800 ms. Exploratory localization of the EEG signal revealed a shifting network of activity moving from sensory and memory structures to areas associated with value coding, with stimulus-value activity localized to vmPFC only from 400 ms onwards. Consistent with these results, functional connectivity analyses also showed a causal flow of information from temporal cortex to vmPFC. Thus, although value signals are present as early as 150 ms after stimulus onset, the value signals in vmPFC appear relatively late in the choice process and seem to reflect the integration of incoming information from sensory and memory-related regions.
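    As a sketch of the kind of time-resolved value analysis described above (not the authors' code), the snippet below regresses single-trial ERP amplitude on stimulus value within the three reported windows; the simulated data, sensor grouping, and sampling grid are placeholders.

```python
# Illustrative time-windowed regression of single-trial ERP amplitude on stimulus
# value; simulated data, sensor grouping, and sampling grid are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_times = 120, 900
times = np.linspace(-0.1, 0.8, n_times)                  # seconds from stimulus onset
values = rng.uniform(-2.0, 2.0, n_trials)                # e.g. bid or rating per food item
frontal_erp = rng.standard_normal((n_trials, n_times))   # mean over a frontal sensor cluster

for lo, hi in [(0.150, 0.250), (0.400, 0.550), (0.700, 0.800)]:
    window = (times >= lo) & (times <= hi)
    amp = frontal_erp[:, window].mean(axis=1)            # per-trial mean amplitude in window
    slope, _, r, p, _ = stats.linregress(values, amp)
    print(f"{int(lo * 1000)}-{int(hi * 1000)} ms: slope={slope:.3f}, r={r:.2f}, p={p:.3f}")
```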