
    Detection of intention level in response to task difficulty from EEG signals

    We present an approach that detects subjects' intention levels in response to task difficulty using an electroencephalogram (EEG) based brain-computer interface (BCI). In particular, we use linear discriminant analysis (LDA) to classify event-related synchronization (ERS) and desynchronization (ERD) patterns associated with right elbow flexion and extension movements while lifting different weights. We observe that it is possible to classify tasks of varying difficulty based on EEG signals. We also present a correlation analysis between intention levels detected from EEG and surface electromyogram (sEMG) signals. Our experimental results suggest that intention level information can be extracted from EEG signals in response to task difficulty, and indicate some level of correlation between EEG and EMG. With a view towards detecting patients' intention levels during rehabilitation therapies, the proposed approach has the potential to ensure active involvement of patients throughout exercise routines and increase the efficacy of robot-assisted therapies.
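    The classification step described above, LDA over ERD/ERS-derived features, can be sketched as follows. The feature matrix here is a synthetic stand-in (the real features would be band-power changes computed from the EEG trials), and the class shift is an assumption for illustration only, not the paper's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for ERD/ERS band-power features: one row per trial,
# one column per channel-band pair. Class 0 = light weight, class 1 = heavy.
n_trials, n_features = 120, 8
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += 0.8  # assume the heavier task shifts the feature distribution

# LDA finds the linear projection that best separates the two task classes.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

    With a real recording, the rows of `X` would come from epoching the EEG around movement onset and computing band power per channel; the cross-validated accuracy then indicates how separable the difficulty levels are.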

    Biasing the perception of ambiguous vocal affect: a TMS study on frontal asymmetry

    Several sources of evidence point toward a link between asymmetry of prefrontal brain activity and approach–withdrawal tendencies. Here, we tested the causal nature of this link and examined whether the categorization of an ambiguous approach- or withdrawal-related vocal signal can be biased by manipulating left and right frontal neural activity. We used voice morphing of affective non-verbal vocalizations to create individually tailored, affectively ambiguous stimuli on an Anger–Fear continuum—two emotions that represent extremes on the approach–withdrawal dimension. We tested perception of these stimuli after 10 min of low-frequency repetitive transcranial magnetic stimulation over the left or right dorsolateral prefrontal cortex or over the vertex (control), a technique that has transient inhibitory effects on the targeted brain region. As expected, ambiguous stimuli were more likely perceived as expressing Anger (approach) than Fear (withdrawal) after right prefrontal compared with left prefrontal or control stimulation. These results provide the first evidence that manipulating asymmetrical activity in the prefrontal cortex can change the explicit categorization of ambiguous emotional signals.

    Emotion Recognition from EEG Signal Focusing on Deep Learning and Shallow Learning Techniques

    Recently, electroencephalogram-based emotion recognition has become crucial to making Human-Computer Interaction (HCI) systems more intelligent. Owing to its many applications, e.g., person-based decision making, mind-machine interfacing, cognitive interaction, affect detection, and feeling detection, emotion recognition has attracted considerable attention in AI-driven research. Numerous studies have therefore been conducted using a range of approaches, which calls for a systematic review of the methodologies, feature sets, and techniques used for this task; such a review can guide beginners towards composing an effective emotion recognition system. In this article, we rigorously review state-of-the-art emotion recognition systems published in recent literature and summarize some of the common emotion recognition steps, with relevant definitions, theories, and analyses, to provide the key knowledge needed to develop a proper framework. The studies included here are divided into two categories: (i) deep learning-based and (ii) shallow machine learning-based emotion recognition systems. The reviewed systems are compared based on methods, classifiers, the number of classified emotions, accuracy, and the datasets used. An informative comparison, recent research trends, and some recommendations for future research directions are also provided.
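    As a minimal illustration of the shallow-learning category the review covers, the sketch below classifies synthetic single-channel epochs from band-power features with an SVM. The sampling rate, band edges, and injected beta rhythm are assumptions for the example, not taken from any reviewed system.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 128  # assumed sampling rate (Hz)

def bandpower(epoch, fs, lo, hi):
    """Mean Welch power of one single-channel epoch in the [lo, hi) Hz band."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f < hi)].mean()

# Synthetic single-channel 4 s epochs; class 1 ("high arousal") gets an
# extra 20 Hz beta rhythm so the two classes are separable.
n_epochs = 80
epochs = rng.normal(size=(n_epochs, fs * 4))
labels = rng.integers(0, 2, size=n_epochs)
t = np.arange(fs * 4) / fs
epochs[labels == 1] += 0.7 * np.sin(2 * np.pi * 20 * t)

# Feature vector per epoch: theta, alpha, beta, gamma band power.
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
X = np.array([[bandpower(e, fs, lo, hi) for lo, hi in bands] for e in epochs])

clf = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

    A deep-learning counterpart would typically replace the hand-crafted band-power step with a network that learns features from the raw epochs directly; this is the dichotomy the review draws.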

    EmoEEG - recognising people's emotions using electroencephalography

    Integrated master's thesis in Biomedical Engineering and Biophysics (Medical Signals and Images), Universidade de Lisboa, Faculdade de Ciências, 2020. Emotions play a central role in human life, being involved in a wide range of cognitive processes such as decision making, perception, social interaction, and intelligence. Brain-computer interfaces (BCIs) are systems that convert a user's brain activity patterns into messages or commands for a given application. The most common uses of this technology allow people with motor disabilities to control robotic arms or wheelchairs, or to type. However, BCI technology can also generate output without any voluntary control; the identification of emotional states is one example of this kind of feedback. In turn, this technology can have clinical applications, such as the identification and monitoring of psychological disorders, or multimedia applications that ease access to music or films according to their affective content. The growing interest in establishing emotional interactions between machines and people has created the need for reliable methods of automatic emotion recognition. Self-reports may be unreliable because of the subjective nature of emotions themselves, but also because participants may answer according to what they believe others would answer. Emotional speech is an effective way to infer a person's emotional state, since many speech features are independent of semantics or culture; however, its accuracy is still insufficient compared with other methods, such as the analysis of facial expressions or physiological signals.
    Although facial expression analysis has been used to identify emotions successfully, it has drawbacks, such as the fact that many facial expressions are "forced" and that readings are only possible when the subject's face is within a very specific angle relative to the camera. For these reasons, the collection of physiological signals has become the preferred method for emotion recognition. The EEG (electroencephalogram) lets us monitor felt emotions in the form of electrical impulses from the brain, thus providing a BCI for affective recognition. The main goal of this work was to study the combination of different elements to identify affective states, estimating valence and arousal values from EEG signals. The analysis consisted of building several regression models to assess how different elements affect the accuracy of valence and arousal estimation: the machine learning methods, the subject's gender, the concept of brain asymmetry, the electrode channels used, the feature extraction algorithms, and the frequency bands analysed. This analysis allowed us to build the best possible model, with the combination of elements that maximizes its accuracy. To achieve our goals, we used two databases (AMIGOS and DEAP) containing EEG signals acquired during emotion elicitation experiments, together with the participants' self-assessments. In these experiments, participants watched affective video excerpts intended to elicit emotions, and then rated the level of valence and arousal they experienced.
    The EEG signals were divided into 4 s epochs, and features were extracted with different algorithms: the first, second, and third Hjorth parameters; spectral entropy; wavelet energy and entropy; and the energy and entropy of IMFs (intrinsic mode functions) obtained through the Hilbert-Huang transform. These signal processing methods were chosen because they had already produced good results in related work. All of them were applied to the EEG signals within the alpha, beta, and gamma frequency bands, which had also yielded good results in previous studies. After feature extraction, several models were built to estimate valence and arousal, using the participants' self-assessments as ground truth. The first set of models served to determine the best machine learning methods for the subsequent tests. After selecting the best two, we examined differences in emotional processing between the sexes by performing the estimation on men and women separately. The next set of models tested the concept of brain asymmetry, which states that emotional valence is related to differences in physiological activity between the two brain hemispheres; for this test, differential and rational asymmetry over pairs of homologous electrodes were considered. After that, valence and arousal estimation models were built for each electrode individually, i.e., models generated with all the feature extraction methods but with the data of a single electrode. Models were then built to compare each of the feature extraction algorithms.
    The models generated at this stage included the data of all electrodes, since no electrode had previously proved significantly better than the others. Finally, the models with the best possible combination of elements were built, their parameters were optimized, and we also sought to validate them. We additionally performed an emotional classification procedure, associating each estimated pair of valence and arousal values with the corresponding quadrant of the circumplex model of affect. This last step was necessary to compare our work with existing solutions, since the vast majority of them only identify the emotional quadrant rather than estimating valence and arousal values. In short, the best machine learning methods were RF (random forest) and KNN (k-nearest neighbours), although the best combination of feature extraction methods differed between the two: KNN was more accurate with all extraction methods except spectral entropy, whereas RF was more accurate with only the first Hjorth parameter and wavelet energy. The Pearson coefficients obtained for the best optimized models were between 0.8 and 0.9 (1 being the maximum). No improvement was observed when each gender was considered individually, so the final models were built with the data of all participants; the lower accuracy of the per-gender models may result from the smaller amount of data involved in training. The brain asymmetry concept was only useful in the models built on the DEAP database, especially for valence estimation using features extracted in the alpha band.
    Overall, our approaches proved on par with or even superior to other works, reaching accuracies of 86.5% for the best classification model generated with the AMIGOS database and 86.6% with DEAP.
    Emotion recognition is a field within affective computing that is gaining increasing relevance and strives to predict an emotional state using physiological signals. Understanding how these biological factors are expressed according to one's emotions can enhance human-computer interaction (HCI). This knowledge can then be used for clinical applications, such as the identification and monitoring of psychiatric disorders. It can also be used to provide better access to multimedia content, by assigning affective tags to videos or music. The goal of this work was to create several models for estimating values of valence and arousal, using features extracted from EEG signals. The different models created were meant to compare how various elements affected the accuracy of the model: the machine learning techniques, the gender of the individual, the brain asymmetry concept, the electrode channels, the feature extraction methods, and the frequency of the brain waves analysed. The final models contained the best combination of these elements and achieved PCC values over 0.80. As a way to compare our work with previous approaches, we also implemented a classification procedure to find the corresponding quadrant in the valence and arousal space according to the circumplex model of affect. The best accuracies achieved were over 86%, which was on par with or even superior to some of the works already done.
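    The thesis's feature-and-regression pipeline can be roughly sketched as follows: Hjorth parameters are extracted per epoch and fed to RF and KNN regressors, with the Pearson correlation coefficient (PCC) as the metric. The data below are synthetic (the real work used AMIGOS and DEAP epochs and several further feature families), and the in-sample fit is for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

def hjorth(x):
    """First, second, and third Hjorth parameters of a 1-D epoch:
    activity (variance), mobility, and complexity."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(2)

# Synthetic 4 s epochs at an assumed 128 Hz; the valence rating is tied to
# epoch variance so the regressors have a relationship to recover.
n_epochs = 200
valence = rng.uniform(1, 9, size=n_epochs)  # SAM-style 1-9 scale
epochs = rng.normal(scale=valence[:, None] / 9, size=(n_epochs, 512))

X = np.array([hjorth(e) for e in epochs])

pccs = {}
for model in (RandomForestRegressor(random_state=0), KNeighborsRegressor()):
    pred = model.fit(X, valence).predict(X)  # in-sample, illustration only
    pccs[type(model).__name__] = np.corrcoef(pred, valence)[0, 1]
print(pccs)
```

    In the actual study, held-out evaluation (not the in-sample fit shown here) produced the reported PCC values of 0.8-0.9, and the quadrant label for the circumplex comparison follows directly from the signs of the estimated valence and arousal.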

    Optimal set of EEG features for emotional state classification and trajectory visualization in Parkinson's disease

    In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalography (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimuli (combined audio and visual) were used to evoke the emotions. To classify the EEG-based emotional states and visualize how they change over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results on our EEG data set, we found that (a) the bispectrum feature is superior to the other three kinds of features, namely power spectrum, wavelet packet, and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta, and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of visualizing a patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
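    The trajectory-visualization idea above, reducing high-dimensional EEG features with manifold learning so that consecutive time windows trace a path, can be sketched as below. The 20-D feature sequence is synthetic; in the study, the inputs would be the subject-independent EEG features, and the 2-D embedding would be plotted against time.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(3)

# Synthetic high-dimensional feature sequence: 100 time windows of 20-D
# features that drift smoothly, mimicking an emotional state evolving over time.
t = np.linspace(0, 2 * np.pi, 100)
latent = np.column_stack([np.cos(t), np.sin(t)])   # hidden 2-D trajectory
mixing = rng.normal(size=(2, 20))                  # project into 20 dimensions
features = latent @ mixing + 0.05 * rng.normal(size=(100, 20))

# Manifold learning recovers a low-dimensional embedding; plotting its rows
# in order yields the trajectory of emotional state changes.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
print(embedding.shape)
```

    Isomap is one choice of manifold learner among several (LLE, t-SNE, etc.); the abstract does not name a specific algorithm, so this pick is an assumption of the sketch.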

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain-computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity owing to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, together with its multichannel recording nature, require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A long-term goal is also to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered for decomposing the nonlinear and nonstationary EEG signal into a set of components that are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced, and its operation is illustrated on two-channel neuronal spike streams. Common spatial patterns (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for a more natural and intuitive design of advanced BCI systems.
Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source among a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasure or displeasure evoked by a food, for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded in a custom-made hearing aid earplug. The new platform promises both short- and long-term continuous use for standard brain monitoring and interfacing applications.
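    A minimal sketch of the standard (real-valued) CSP technique mentioned above, assuming trial arrays of shape (trials, channels, samples). The synthetic data and helper names are illustrative only, and the thesis's complex-domain extension is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns: spatial filters maximizing the variance
    ratio between two classes of (trials x channels x samples) EEG data."""
    def avg_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials.
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(ca, ca + cb)
    # Filters from both ends of the spectrum are the most discriminative.
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return vecs[:, picks].T  # one spatial filter per row

rng = np.random.default_rng(4)

# Synthetic 8-channel trials; class A has extra variance on channel 0,
# class B on channel 7, so CSP has a contrast to find.
def make_trials(boost_ch):
    trials = rng.normal(size=(30, 8, 256))
    trials[:, boost_ch] *= 3.0
    return trials

W = csp_filters(make_trials(0), make_trials(7), n_filters=2)
print(W.shape)
```

    The log-variance of each trial projected through `W` would then serve as the feature vector for a classifier; the complex CSP variants discussed in the thesis replace these covariances with augmented complex statistics.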