21 research outputs found

    Emotion Recognition With Temporally Localized 'Emotional Events' in Naturalistic Context

    Full text link
    Emotion recognition using EEG signals is an emerging area of research due to its broad applicability in BCI. Emotional feelings are hard to stimulate in the lab. Emotions do not last long, yet they need enough context to be perceived and felt. However, most EEG-based emotion databases either suffer from emotionally irrelevant details (due to prolonged-duration stimuli) or provide so little context that it is doubtful any emotion was actually felt. We tried to reduce the impact of this trade-off by designing an experiment in which participants are free to report their emotional feelings while simultaneously watching the emotional stimulus. We called these reported emotional feelings "Emotional Events" in our Dataset on Emotion with Naturalistic Stimuli (DENS). We used EEG signals to classify emotional events on different combinations of the Valence (V) and Arousal (A) dimensions and compared the results with the benchmark DEAP and SEED datasets. STFT is used for feature extraction, and the extracted features are fed to a classification model consisting of hybrid CNN-LSTM layers. We achieved significantly higher accuracy with our data compared to the DEAP and SEED data. We conclude that having precise information about emotional feelings improves classification accuracy compared to long-duration EEG signals, which might be contaminated by mind-wandering.
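    The abstract describes extracting STFT features from EEG segments before classification. The following is a minimal NumPy sketch of that feature-extraction step only (the CNN-LSTM classifier is omitted); the sampling rate, window length, and hop size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stft_features(signal, win_len=128, hop=64):
    """Short-time Fourier transform magnitude features for one EEG channel.

    Returns an array of shape (n_frames, win_len // 2 + 1): one magnitude
    spectrum per Hann-tapered sliding window.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + win_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic 1-second "EEG" segment at 256 Hz with a 10 Hz (alpha-band) tone.
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)

spec = stft_features(eeg)
print(spec.shape)  # (3, 65): 3 windows, 65 frequency bins
```

    In a pipeline like the one the abstract outlines, such per-window spectra would be stacked into a time-frequency image per channel and passed to the convolutional layers, with the LSTM layers modelling the temporal order of windows.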

    ACRF: Aggregated Conditional Random Field for Out of Vocab (OOV) Token Representation for Hindi NER

    No full text
    Named entities are unpredictable, including emerging and complex entities. Most large language models' tokenizers have a fixed vocabulary; hence, they split out-of-vocab (OOV) words into multiple sub-words during tokenization. During fine-tuning for any downstream task, these sub-words (tokens) make named entity classification more complex, since an extra entity type must be assigned to each sub-word in order to utilize its embedding. This work attempts to reduce this complexity by aggregating the token embeddings of each word. We apply Aggregated-CRF (ACRF), where a conditional random field (CRF) is applied on top of the aggregated token embeddings for named entity prediction. Aggregation is done over the embeddings of all tokens that the tokenizer generates for a word. Experiments were conducted on two Hindi datasets (HiNER and Hindi Multiconer2) and show that ACRF is better than a vanilla CRF (where token embeddings are not aggregated). Our result also outperforms the existing best result on the HiNER data, which was obtained with a cross-entropy classification layer. Further, an analysis of the impact of tokenization has been conducted, both in general and per entity type for each word in the test data; the results show that ACRF performs better than vanilla CRF on words that are tokenized into more than one sub-word (OOV). In addition, this work conducts a comparative analysis between two transformer-based models, MuRIL-large and XLM-RoBERTa-large, and investigates how these models adopt the aggregation strategy for OOV words.
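    The core idea above is to collapse the sub-word token embeddings of each word into a single vector before the CRF layer. The abstract does not specify the aggregation operator, so the sketch below assumes mean pooling; the `word_ids` mapping mirrors what HuggingFace tokenizers expose via `word_ids()`, but the function names here are hypothetical.

```python
import numpy as np

def aggregate_word_embeddings(token_embeddings, word_ids):
    """Mean-pool sub-word token embeddings back to one vector per word.

    token_embeddings: (n_tokens, dim) array from a transformer encoder.
    word_ids: for each token, the index of the word it came from.
    """
    word_ids = np.asarray(word_ids)
    n_words = word_ids.max() + 1
    return np.stack([
        token_embeddings[word_ids == w].mean(axis=0)
        for w in range(n_words)
    ])

# Toy example: 3 words, where word 1 was split into two sub-word tokens.
emb = np.array([[1.0, 0.0],   # word 0
                [2.0, 2.0],   # word 1, sub-word A
                [4.0, 0.0],   # word 1, sub-word B
                [0.0, 1.0]])  # word 2
word_emb = aggregate_word_embeddings(emb, [0, 1, 1, 2])
print(word_emb)  # word 1 becomes [3.0, 1.0]
```

    The CRF would then decode one entity tag per row of `word_emb`, so the label sequence length matches the number of words rather than the number of tokens, which is what removes the extra per-sub-word labels the abstract describes.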

    Speech, image, and language processing for human computer interaction : multi-modal advancements / Uma Shanker Tiwary and Tanveer J. Siddiqui, editors.

    No full text
    Includes bibliographical references and index. Book fair 2013. xiii, 372 p. Human Computer Interaction is the study of relationships between people and computers. As the digital world becomes multi-modal, the information space is getting more and more complex. To navigate this information space and to capture and apply its information to appropriate use, effective interaction between human and computer is required. Such interactions are only possible if computers can understand and respond to the important modalities of human interaction. Speech, Image, and Language Processing for Human Computer Interaction aims to identify the emerging research areas in Human Computer Interaction and discusses the current state of the art in these areas. This collection includes the basic concepts and technologies in language processing, as well as future developments in this area. The volume will serve as a reference for researchers and students alike, broadening their knowledge of state-of-the-art HCI.

    Cardiac–Brain Dynamics Depend on Context Familiarity and Their Interaction Predicts Experience of Emotional Arousal

    No full text
    Our brain continuously interacts with the body as we engage with the world. Although we are mostly unaware of internal bodily processes, such as our heartbeats, they may be influenced by, and in turn influence, our perception and emotional feelings. Although there is a recent focus on understanding cardiac interoceptive activity and its interaction with brain activity during emotion processing, the investigation of cardiac–brain interactions with more ecologically valid naturalistic emotional stimuli is still very limited. We also do not understand how an essential aspect of emotions, such as context familiarity, influences affective feelings and is linked to the statistical interaction between cardiac and brain activity. Hence, to answer these questions, we designed an exploratory study, recording ECG and EEG signals during emotional events while participants watched emotional movie clips. Participants also rated their familiarity with each stimulus on a familiarity scale. Linear mixed-effects modelling was performed, with ECG power and familiarity as predictors of EEG power. We focused on three brain regions: prefrontal (PF), frontocentral (FC) and parieto-occipital (PO). The analyses showed that the interaction between the power of cardiac activity in the mid-frequency range and the power in specific EEG bands depends on familiarity, such that the interaction is stronger with high familiarity. In addition, the results indicate that arousal is predicted by cardiac–brain interaction, which also depends on familiarity. The results support emotional theories that emphasize context dependency and interoception. Multimodal studies with more realistic stimuli would further enable us to understand and predict different aspects of emotional experience.
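    The modelling step above regresses EEG power on ECG power, familiarity, and their interaction. As a simplified sketch, the fixed-effects part of such a model can be fit by ordinary least squares on synthetic data; the study used a linear *mixed* model (with random effects per participant), which this omits, and all variable names and coefficient values here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical per-event predictors: cardiac (ECG) band power and a
# familiarity rating, both standardized for illustration.
ecg_power = rng.normal(size=n)
familiarity = rng.normal(size=n)

# Simulate EEG power with a genuine ECG x familiarity interaction (0.5):
# the effect of cardiac power on EEG power grows with familiarity.
eeg_power = (1.0 + 0.8 * ecg_power + 0.3 * familiarity
             + 0.5 * ecg_power * familiarity
             + 0.1 * rng.normal(size=n))

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), ecg_power, familiarity,
                     ecg_power * familiarity])
beta, *_ = np.linalg.lstsq(X, eeg_power, rcond=None)
print(beta.round(2))  # close to [1.0, 0.8, 0.3, 0.5]
```

    A significant coefficient on the interaction column is what "the cardiac–brain interaction depends on familiarity" amounts to statistically; the mixed-model version additionally lets the intercept (and possibly slopes) vary by participant.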

    Dynamic Functional Connectivity of Emotion Processing in Beta Band with Naturalistic Emotion Stimuli

    No full text
    While naturalistic stimuli, such as movies, better represent the complexity of the real world and are perhaps crucial to understanding the dynamics of emotion processing, there is limited research on emotions with naturalistic stimuli. There is a need to understand the temporal dynamics of emotion processing and their relationship to different dimensions of emotion experience. In addition, there is a need to understand the dynamics of functional connectivity underlying different emotional experiences, during or prior to such experiences. To address these questions, we recorded the EEG of participants and asked them to mark the temporal location of their emotional experience as they watched a video. We also obtained self-assessment ratings for the emotional multimedia stimuli. We calculated dynamic functional connectivity (DFC) patterns in all the frequency bands, including information about hubs in the network. The change in functional networks was quantified in terms of temporal variability, which was then used in regression analysis to evaluate whether temporal variability in DFC (tvDFC) could predict different dimensions of emotional experience. We observed that the connectivity patterns in the upper beta band could differentiate emotion categories better during or prior to the reported emotional experience. The temporal variability in functional connectivity dynamics is primarily related to emotional arousal, followed by dominance. The hubs in the functional networks were found across the right frontal and bilateral parietal lobes, which have been reported to facilitate affect, interoception, action, and memory-related processing. Since our study was performed with naturalistic, real-life-resembling emotional videos, it contributes significantly to understanding the dynamics of emotion processing.
    The results support constructivist theories of emotional experience and show that changes in dynamic functional connectivity can predict aspects of our emotional experience.
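    The tvDFC measure above quantifies how much each connection changes over time. A common way to estimate DFC is sliding-window correlation between channels; the paper may use a different connectivity estimator (e.g., a phase-based one), and the window and hop lengths below are illustrative assumptions.

```python
import numpy as np

def tv_dfc(eeg, win=100, hop=50):
    """Temporal variability of dynamic functional connectivity.

    eeg: (n_channels, n_samples) band-filtered signals.
    Computes a channel-by-channel correlation matrix in each sliding
    window, then returns the standard deviation of each connection
    across windows: high values mean an unstable (variable) connection.
    """
    n_ch, n_samp = eeg.shape
    mats = [np.corrcoef(eeg[:, s:s + win])
            for s in range(0, n_samp - win + 1, hop)]
    mats = np.stack(mats)        # (n_windows, n_ch, n_ch)
    return mats.std(axis=0)      # variability per connection

rng = np.random.default_rng(2)
eeg = rng.standard_normal((4, 1000))  # 4 toy channels
variability = tv_dfc(eeg)
print(variability.shape)  # (4, 4); diagonal is 0 (self-correlation is always 1)
```

    In the regression analysis the abstract describes, a summary of this matrix (e.g., mean variability per region) would serve as the predictor of the arousal and dominance ratings.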

    Films

    No full text