
    Human brain distinctiveness based on EEG spectral coherence connectivity

    The use of EEG biometrics for automatic people recognition has received increasing attention in recent years. Most current analyses rely on the extraction of features characterizing the activity of single brain regions, such as power-spectrum estimates, thus neglecting possible temporal dependencies between the generated EEG signals. However, important physiological information can be extracted from the way different brain regions are functionally coupled. In this study, we propose a novel approach that uses spectral coherence-based connectivity between different brain regions as a possibly viable biometric feature. The proposed approach is tested on a large dataset of subjects (N=108) during eyes-closed (EC) and eyes-open (EO) resting state conditions. The obtained recognition performances show that using brain connectivity leads to higher distinctiveness than power-spectrum measurements in both experimental conditions. Notably, a 100% recognition accuracy is obtained in EC and EO when integrating functional connectivity between regions in the frontal lobe, while a lower 97.41% is obtained in EC (96.26% in EO) when fusing power-spectrum information from centro-parietal regions. Taken together, these results suggest that functional connectivity patterns represent effective features for improving EEG-based biometric systems. Key words: EEG, Resting state, Biometrics, Spectral coherence, Match score fusion
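    The spectral-coherence connectivity feature the abstract describes can be sketched on synthetic signals. This is an illustration, not the paper's pipeline; the sampling rate, segment length and the shared 10 Hz component are assumptions.

```python
# Minimal sketch (assumed parameters): magnitude-squared coherence between
# two synthetic "EEG" channels sharing an alpha-band oscillation.
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal
rng = np.random.default_rng(0)

# Two channels sharing a 10 Hz component plus independent noise.
shared = np.sin(2 * np.pi * 10 * t)
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=512)

# Coherence at the shared 10 Hz component is a single connectivity feature;
# collecting such values over channel pairs yields the biometric vector.
coh_10hz = Cxy[np.argmin(np.abs(f - 10))]
print(f"coherence at 10 Hz: {coh_10hz:.2f}")
```

    In the paper's setting, such coherence values over many region pairs would feed a matcher; here only the per-pair feature computation is shown.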

    Intelligent computational techniques and virtual environment for understanding cerebral visual impairment patients

    Cerebral Visual Impairment (CVI) is a medical area concerned with the effect of brain damage on the visual field (VF). People with CVI cannot construct a complete 3-dimensional view of what they see through their eyes, and therefore have difficulties in mobility and behaviours that others find hard to understand. A branch of Artificial Intelligence (AI) is the simulation of behaviour by building computational models that help to explain how people solve problems or why they behave in a certain way. This project describes a novel intelligent system that simulates the navigation problems faced by people with CVI, helping relatives, friends and ophthalmologists of CVI patients understand more about their difficulties in navigating their everyday environment. The navigation simulation system is implemented using the Unity3D game engine. Virtual scenes of different living environments are also created using the Unity modelling software. The vision of the avatar in the virtual environment is implemented using a camera provided by the 3D game engine. Given the visual field chart of a CVI patient, the system automatically creates a filter (mask) that mimics the visual defect and places it in front of the avatar's visual field. The filters are created by extracting, classifying and converting the symbols of the defective areas in the visual field chart to numerical values, which are then converted to textures that mask the vision. Each numeric value represents a level of transparency and opacity according to the severity of the visual defect in that region; the filters represent the vision masks. Unity3D supports physical properties that facilitate representing the VF defects as structures of rays. The length of each ray depends on the VF defect's numeric value: greater values (a greater percentage of opacity) are represented by shorter rays, while smaller values (a greater percentage of transparency) are represented by longer rays. The lengths of all rays represent the vision map (how far the patient can see). Algorithms for navigation based on the generated rays have been developed to enable the avatar to move around in given virtual environments. The avatar depends on the generated vision map and exhibits different behaviours to simulate the navigation problems of real patients; its navigation behaviour differs from patient to patient according to their different defects. An experiment on navigating virtual environments (scenes) using the HTC Vive headset was conducted using different scenarios. The scenarios are designed to use different VF defects within different scenes. The experiment simulates the patient's navigation in virtual environments with static objects (rooms) and in virtual environments with moving objects. The experiment participants' actions (avoid/bump) match the avatar's in the same scenario. This project has created a system that enables a CVI patient's parents and relatives to understand what the patient encounters. It also helps specialists and educators take into account the difficulties that patients experience, and then design and develop appropriate educational programs that can help each individual patient.
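    The inverse mapping between defect severity and ray length described above can be sketched as follows. The value range, the maximum ray length and the toy chart are hypothetical choices for illustration, not values from the project.

```python
# Hypothetical sketch: a visual-field chart cell with high opacity (severe
# defect) yields a short ray; a transparent cell yields a long ray.
import numpy as np

MAX_RAY_LENGTH = 10.0  # assumed maximum viewing distance in scene units

def ray_length(opacity: float) -> float:
    """Map a defect opacity in [0, 1] to a ray length.

    opacity 1.0 (fully blind region) -> length 0
    opacity 0.0 (unimpaired region)  -> length MAX_RAY_LENGTH
    """
    return MAX_RAY_LENGTH * (1.0 - opacity)

# A toy 2x3 visual-field chart of opacity values; the resulting array is
# the "vision map" of how far the avatar can see in each direction.
vf_chart = np.array([[1.0, 0.5, 0.0],
                     [0.8, 0.2, 0.1]])
vision_map = MAX_RAY_LENGTH * (1.0 - vf_chart)
print(vision_map)
```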

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 129, June 1974

    This special bibliography lists 280 reports, articles, and other documents introduced into the NASA scientific and technical information system in May 1974

    Decision-Making with Heterogeneous Sensors - A Copula Based Approach

    Statistical decision making has wide-ranging applications, from communications and signal processing to econometrics and finance. In contrast to the classical one source-one receiver paradigm, several applications have been identified in the recent past that require acquiring data from multiple sources or sensors. Information from the multiple sensors is transmitted to a remotely located receiver, known as the fusion center, which makes a global decision. Past work has largely focused on the fusion of information from homogeneous sensors. This dissertation extends the formulation to the case when the local sensors may possess disparate sensing modalities. Both the theoretical and practical aspects of multimodal signal processing are considered. The first and foremost challenge is to 'adequately' model the joint statistics of such heterogeneous sensors. We propose the use of copula theory for this purpose. Copula models are general descriptors of dependence: they provide a way to characterize the nonlinear functional relationships between the multiple modalities, which are otherwise difficult to formalize. The important problem of selecting the 'best' copula function from a given set of valid copula densities is addressed, especially in the context of binary hypothesis testing problems. Both the training-testing paradigm, where a training set is assumed to be available for learning the copula models prior to system deployment, and a generalized likelihood ratio test (GLRT) based fusion rule for the online selection and estimation of copula parameters are considered. The developed theory is corroborated with extensive computer simulations as well as results on real-world data. Sensor observations (or features extracted from them) are most often quantized before their transmission to the fusion center, for bandwidth and power conservation. A detection scheme is proposed for this problem assuming uniform scalar quantizers at each sensor. The designed rule is applicable to both binary and multibit local sensor decisions. An alternative suboptimal but computationally efficient fusion rule is also designed, which involves injecting a deliberate disturbance into the local sensor decisions before fusion. The rule is based on Widrow's statistical theory of quantization: the addition of controlled noise helps to 'linearize' the highly nonlinear quantization process, resulting in computational savings. It is shown that although the introduction of external noise does cause a reduction in the received signal-to-noise ratio, the proposed approach can be highly accurate when the input signals have bandlimited characteristic functions and the number of quantization levels is large. The problem of quantifying neural synchrony using copula functions is also investigated. It has been widely accepted that multiple simultaneously recorded electroencephalographic signals exhibit nonlinear and non-Gaussian statistics. While existing and popular measures such as the correlation coefficient, corr-entropy coefficient, coh-entropy and mutual information are limited to being bivariate, and hence applicable only to pairs of channels, measures such as Granger causality, even though multivariate, fail to account for any nonlinear inter-channel dependence. The application of copula theory helps alleviate both these limitations. The problem of distinguishing patients with mild cognitive impairment from age-matched control subjects is also considered. Results show that the copula-derived synchrony measures, when used in conjunction with other synchrony measures, improve the detection of Alzheimer's disease onset.
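    The copula modelling of inter-sensor dependence can be illustrated with the bivariate Gaussian copula, one common member of the family of valid copula densities the dissertation selects among. The choice of the Gaussian family and the fixed correlation parameter here are assumptions for illustration, not the dissertation's fitted model.

```python
# Bivariate Gaussian copula density c(u1, u2; rho): it couples two marginal
# CDF values u1, u2 so their dependence enters a fused likelihood.
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u1: float, u2: float, rho: float) -> float:
    """Evaluate the bivariate Gaussian copula density at (u1, u2)."""
    z1, z2 = norm.ppf(u1), norm.ppf(u2)  # map to standard-normal scores
    det = 1.0 - rho ** 2
    expo = -(rho ** 2 * (z1 ** 2 + z2 ** 2) - 2 * rho * z1 * z2) / (2 * det)
    return float(np.exp(expo) / np.sqrt(det))

# Independence (rho = 0) gives density 1 everywhere, as it should.
print(gaussian_copula_density(0.3, 0.7, 0.0))   # -> 1.0
# Positive dependence boosts concordant observations.
print(gaussian_copula_density(0.9, 0.9, 0.8))
```

    In a fusion rule, this density multiplies the product of marginal likelihoods, which is exactly how a copula corrects the naive independence assumption between heterogeneous sensors.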

    The Use of EEG Signals For Biometric Person Recognition

    This work is devoted to investigating EEG-based biometric recognition systems. One potential advantage of using EEG signals for person recognition is the difficulty of generating artificial signals with biometric characteristics, which makes spoofing EEG-based biometric systems a challenging task. However, more work needs to be done to overcome certain drawbacks that currently prevent the adoption of EEG biometrics in real-life scenarios: 1) the usually large number of employed sensors, 2) still relatively low recognition rates (compared with some other biometric modalities), 3) the template ageing effect. The existing shortcomings of EEG biometrics and their possible solutions are addressed from three main perspectives in the thesis: pre-processing, feature extraction and pattern classification. In pre-processing, task (stimuli) sensitivity and noise removal are investigated and discussed in separate chapters. For feature extraction, four novel features are proposed; for pattern classification, a new quality filtering method and a novel instance-based learning algorithm are described in respective chapters. A self-collected database (Mobile Sensor Database) is employed to investigate some important biometric-specific effects (e.g. the template ageing effect; using a low-cost sensor for recognition). In the research on pre-processing, a training data accumulation scheme is developed, which improves recognition performance by combining the data of different mental tasks for training; a new wavelet-based de-noising method is also developed, and its effectiveness in person identification is found to be considerable. Two novel features based on Empirical Mode Decomposition and the Hilbert Transform are developed, which provided the best biometric performance amongst all the newly proposed features and the other state-of-the-art features reported in the thesis; the other two newly developed wavelet-based features, while having slightly lower recognition accuracies, were computationally more efficient. The quality filtering algorithm is designed to employ the most informative EEG signal segments: experimental results indicate that using a small subset of the available data for feature training can yield a reasonable improvement in identification rate. The proposed instance-based template reconstruction learning algorithm shows significant effectiveness when tested on both the publicly available and self-collected databases.
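    The Hilbert-transform step behind the features mentioned above can be sketched on a toy signal. This shows only the instantaneous-amplitude/frequency computation; the thesis's full EMD + Hilbert pipeline and its parameters are not reproduced here.

```python
# Sketch (assumed parameters): instantaneous amplitude and frequency of a
# signal via the analytic signal from the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

fs = 128                         # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)   # a pure 10 Hz oscillation

analytic = hilbert(x)                       # x + j * Hilbert(x)
inst_amplitude = np.abs(analytic)           # envelope
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)

# For a clean sinusoid the instantaneous frequency sits near 10 Hz;
# statistics of these traces can then serve as biometric features.
print(f"median instantaneous frequency: {np.median(inst_freq):.1f} Hz")
```

    In an EMD-based pipeline, this same step would be applied to each intrinsic mode function rather than to the raw signal.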

    Brain Computer Interfaces for the Control of Robotic Swarms

    A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communications and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which guarantees the proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability. The aim of the present work is to extend the current state-of-the-art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven very useful for people with motor disabilities. First, a preliminary investigation of the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system is based on the combination of motor imagery and input from a game controller, and its feasibility is proven through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications, through a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state-of-the-art. At the same time, it motivates further research in the area and sets the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI). Masters Thesis, Mechanical Engineering, 201

    Deep Learning for Classification of Peak Emotions within Virtual Reality Systems

    Research has demonstrated well-being benefits from positive, 'peak' emotions such as awe and wonder, prompting the HCI community to utilize affective computing and AI modelling for the elicitation and measurement of those target emotional states. The immersive nature of virtual reality (VR) content and systems can lead to feelings of awe and wonder, especially with a responsive, personalized environment based on biosignals. However, an accurate model is required to differentiate between emotional states that have similar biosignal input, such as awe and fear. Deep learning may provide a solution, since the subtleties of these emotional states and affect may be recognized, with biosignal data viewed as a time series so that researchers and designers can understand which features of the system may have influenced target emotions. The deep learning fusion system proposed in this paper will use data from a corpus, created by collecting physiological biosignals and ranked qualitative data, and will classify these multimodal signals into target outputs of affect. The model will run in real time to evaluate the VR system features that influence awe/wonder, using a bio-responsive environment. Since biosignal data will be collected through wireless, wearable sensor technology and modelled on the same computer powering the VR system, it can be used in field research and studios.
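    The feature-level fusion idea in the proposal can be sketched minimally. Everything here is an assumption for illustration: the two modalities (heart rate and electrodermal activity), the window sizes, the class labels and the untrained toy network, which shows only the structure of concatenation-then-classification.

```python
# Purely structural sketch of multimodal biosignal fusion: two modality
# windows are concatenated and passed through a tiny randomly initialised
# network ending in a softmax over assumed affect classes.
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["awe", "fear", "neutral"]        # assumed target affect labels

hr_window = rng.standard_normal(32)         # 32 heart-rate features (toy)
eda_window = rng.standard_normal(32)        # 32 EDA features (toy)
fused = np.concatenate([hr_window, eda_window])   # feature-level fusion

W1 = rng.standard_normal((16, 64)) * 0.1    # untrained weights: shape only
W2 = rng.standard_normal((3, 16)) * 0.1

hidden = np.tanh(W1 @ fused)
logits = W2 @ hidden
probs = np.exp(logits) / np.exp(logits).sum()     # softmax over classes
print(dict(zip(CLASSES, probs.round(3))))
```

    A real system would replace the random weights with a trained recurrent or convolutional model over the biosignal time series.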

    Kansei for the Digital Era

    For over 40 years, Kansei-based research and development have been conducted in Japan and other East Asian countries, and these decades of research have influenced Kansei interpretation. New methods and applications, including virtual reality and artificial intelligence, have emerged since the millennium, as the Kansei concept has spread throughout Europe and the rest of the world. This paper reviews past literature and industrial experience, offering a comprehensive understanding of Kansei, the underlying philosophy, and the methodology of Kansei Engineering from the approach of psychology and physiology, both qualitatively and quantitatively. The breadth of Kansei is described through examples emerging from both industry and academia. Additionally, a thematic mapping of the state-of-the-art as well as an outlook are derived from feedback obtained from structured interviews with thirty-five of the most distinguished researchers in Kansei. The mapping provides insights into current trends and future directions. Kansei is unique because it includes the consideration of emotion in the design of products and services. The paper aims to become a reference for researchers, practitioners, and stakeholders across borders and cultures who are looking for holistic perspectives on Kansei, Kansei Engineering, and implementation methods. The novelty of the paper resides in the unification of authors among pioneers from different parts of the world, spanning diversified academic backgrounds, knowledge areas and industries.

    EmoEEG - recognising people's emotions using electroencephalography

    Integrated master's thesis in Biomedical Engineering and Biophysics (Medical Signals and Images), Universidade de Lisboa, Faculdade de Ciências, 2020. Emotions play a central role in human life, being involved in a wide variety of cognitive processes such as decision-making, perception, social interactions and intelligence. Brain-computer interfaces (BCIs) are systems that convert a user's brain activity patterns into messages or commands for a given application. The most common uses of this technology allow people with motor disabilities to control mechanical arms or wheelchairs, or to write. However, BCI technologies can also be used to generate output without any voluntary control; the identification of emotional states is an example of this kind of feedback. In turn, this technology can have clinical applications, such as the identification and monitoring of psychological pathologies, or multimedia applications that facilitate access to music or films according to their affective content. The growing interest in establishing emotional interactions between machines and people has led to the need for reliable methods of automatic emotion recognition. Self-reports may be unreliable because of the subjective nature of emotions themselves, but also because participants may answer according to what they believe others would answer. Emotional speech is an effective way of inferring a person's emotional state, since many characteristics of speech are independent of semantics or culture. However, its accuracy is still insufficient compared with other methods, such as the analysis of facial expressions or physiological signals.
    Although the former has already been used to identify emotions successfully, it has disadvantages, such as the fact that many facial expressions are "forced" and that readings are only possible when the subject's face is within a very specific angle with respect to the camera. For these reasons, the collection of physiological signals has been the preferred method for emotion recognition. The EEG (electroencephalogram) allows us to monitor felt emotions in the form of electrical impulses from the brain, thus providing a BCI for affective recognition. The main goal of this work was to study the combination of different elements to identify affective states, estimating valence and arousal values from EEG signals. The analysis consisted of creating several regression models to evaluate how different elements affect the accuracy of valence and arousal estimation. These elements were the machine learning methods, the subject's gender, the concept of brain asymmetry, the electrode channels used, the feature extraction algorithms and the frequency bands analysed. This analysis allowed us to create the best possible model, with the combination of elements that maximizes its accuracy. To achieve our goals, we used two databases (AMIGOS and DEAP) containing EEG signals obtained during emotion elicitation experiments, together with the participants' self-assessments. In these experiments, participants watched excerpts of affective video content intended to trigger emotions, and then rated them by assigning the level of valence and arousal experienced.
    The EEG signals obtained were divided into 4 s epochs, and features were then extracted using different algorithms: the first, second and third Hjorth parameters; spectral entropy; wavelet energy and entropy; and the energy and entropy of IMFs (intrinsic mode functions) obtained through the Hilbert-Huang transform. These signal processing methods were chosen because they had already produced good results in related work. All of them were applied to the EEG signals within the alpha, beta and gamma frequency bands, which had also produced good results in previous work. After feature extraction, several valence and arousal estimation models were created, using the participants' self-assessments as ground truth. The first set of models served to determine the best machine learning methods to use in subsequent tests. After choosing the two best, we tried to verify differences in emotional processing between the genders by performing the estimation on men and women separately. The next set of models tested the concept of brain asymmetry, which states that emotional valence is related to differences in physiological activity between the two cerebral hemispheres. For this specific test, differential and rational asymmetry over pairs of homologous electrodes were considered. After that, valence and arousal estimation models were created considering each electrode individually; that is, the models were generated with all feature extraction methods but with the data from a single electrode. Models were then created to compare each of the feature extraction algorithms used.
    The models generated at this stage included the data from all electrodes, since no electrodes had previously proved significantly better than the others. Finally, models with the best possible combination of elements were created, their parameters were optimized, and we also sought to validate them. We also performed an emotional classification procedure, associating each estimated pair of valence and arousal values with the corresponding quadrant in the circumplex model of affect. This last step was necessary to compare our work with existing solutions, since the vast majority of them only identify the emotional quadrant, without estimating valence and arousal values. In summary, the best machine learning methods were RF (random forest) and KNN (k-nearest neighbours), although the combination of the best feature extraction methods differed between the two: KNN was more accurate considering all extraction methods except spectral entropy, while RF was more accurate considering only the first Hjorth parameter and the wavelet energy. The Pearson coefficient values obtained for the best optimized models were between 0.8 and 0.9 (1 being the maximum value). No improvements were recorded when considering each gender individually, so the final models were created using the data from all participants; it is possible that the lower accuracy of the gender-specific models results from the smaller amount of data involved in the training process. The brain asymmetry concept was only useful in the models created using the DEAP database, especially for valence estimation using features extracted in the alpha band.
    Overall, our approaches proved to be on par with, or even superior to, other works, with accuracy values of 86.5% for the best classification model generated with the AMIGOS database and 86.6% using the DEAP database. Emotion recognition is a field within affective computing that is gaining increasing relevance and strives to predict an emotional state using physiological signals. Understanding how these biological factors are expressed according to one's emotions can enhance human-computer interaction (HCI). This knowledge can then be used for clinical applications such as the identification and monitoring of psychiatric disorders. It can also be used to provide better access to multimedia content, by assigning affective tags to videos or music. The goal of this work was to create several models for estimating values of valence and arousal, using features extracted from EEG signals. The different models created were meant to compare how various elements affected the accuracy of the model: the machine learning techniques, the gender of the individual, the brain asymmetry concept, the electrode channels, the feature extraction methods and the frequency of the brain waves analysed. The final models contained the best combination of these elements and achieved PCC values over 0.80. As a way to compare our work with previous approaches, we also implemented a classification procedure to find the corresponding quadrant in the valence and arousal space according to the circumplex model of affect. The best accuracies achieved were over 86%, which was on par with or even superior to some of the works already done.
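    Two of the ingredients named in this abstract can be sketched directly: the Hjorth parameters used as EEG features, and the mapping of a (valence, arousal) estimate to its circumplex quadrant. The quadrant labels and the centring of the scores at zero are assumed conventions, not the thesis's exact ones.

```python
# Sketch: Hjorth activity/mobility/complexity of a 1-D epoch, plus a
# circumplex-model quadrant lookup for centred valence/arousal scores.
import numpy as np

def hjorth(x: np.ndarray) -> tuple[float, float, float]:
    """Hjorth activity, mobility and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map centred valence/arousal to a quadrant (assumed label scheme)."""
    if valence >= 0:
        return "HAHV" if arousal >= 0 else "LAHV"  # high/low arousal, high valence
    return "HALV" if arousal >= 0 else "LALV"      # high/low arousal, low valence

t = np.linspace(0, 4, 512)
epoch = np.sin(2 * np.pi * 10 * t)      # a 4 s toy "EEG epoch"
print(hjorth(epoch))
print(circumplex_quadrant(0.6, -0.2))   # -> LAHV (calm, pleasant)
```

    In the thesis, such per-epoch features feed RF or KNN regressors, and the quadrant lookup converts the regressed valence/arousal pair into a class label for comparison with classification-based work.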

    Spectral analysis of the ECoG with wavelets in epileptic seizures

    Dissertation for the degree of Master in Biomedical Engineering. Epilepsy is one of the neurological disorders with the highest incidence, affecting about 60 million people worldwide, of whom about 30% do not respond to pharmacological treatment. Especially in these cases, it is essential to precisely localize the epileptogenic zone for possible surgical removal. One of the instruments used for precise localization of the epileptogenic focus is the intracranial recording (electrocorticogram, ECoG). To this end, several electroencephalogram (EEG) analysis techniques are used, namely visual analysis of the time series, fMRI-EEG and spectral analysis. In recent times researchers have turned their attention to a form of spectral analysis that also includes temporal information: time-frequency representations. The idea is to investigate the evolution of the spectral dynamics of the signal during the transition to the epileptic seizure, in order to find more precise markers that can localize the electrode closest to the epileptogenic focus. The need for high-resolution analysis in the frequency domain has been suggested; it is usually not contemplated, mainly because typical EEG acquisition systems have a sampling frequency on the order of hundreds of Hz. There is a clear need to extend spectral analysis to the domain of thousands of Hz and, with respect to signal amplitude, to the domain of micro-signals. In this sense, wavelet analysis has been recognized by researchers in biomedical signal processing as a powerful tool for high-resolution analysis in time and frequency. In this work, a tool is developed for visualizing the ECoG both as a time series and in time-frequency space, using wavelets.
    The goal is to represent the ECoG channels simultaneously, using analysis wavelets easily selected by the user. This will be the beta version of a system that can be progressively improved.
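    The kind of wavelet time-frequency map this tool would display can be sketched by convolving an ECoG-like trace with complex Morlet wavelets. The sampling rate, wavelet width and frequency grid are assumptions; dedicated CWT helpers are avoided so the mechanics stay explicit.

```python
# Sketch: a Morlet-wavelet scalogram of a toy trace whose spectral content
# changes halfway through, mimicking a transition into seizure activity.
import numpy as np

fs = 512                              # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
# Toy "seizure onset": 8 Hz background, with a 30 Hz burst after t = 1 s.
x = np.sin(2 * np.pi * 8 * t)
x[t >= 1] += np.sin(2 * np.pi * 30 * t[t >= 1])

def morlet(freq: float, n_cycles: float = 6.0) -> np.ndarray:
    """Complex Morlet wavelet at a given centre frequency."""
    sigma = n_cycles / (2 * np.pi * freq)
    tw = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    return np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sigma**2))

freqs = np.arange(4, 40, 2)
scalogram = np.array([np.abs(np.convolve(x, morlet(f), mode="same"))
                      for f in freqs])

# Power at 30 Hz should dominate after t = 1 s, not before.
i30 = np.argmin(np.abs(freqs - 30))
print(scalogram[i30, t < 1].mean() < scalogram[i30, t >= 1].mean())  # True
```

    Plotting `scalogram` per channel, with time on one axis and `freqs` on the other, is essentially the visualization the dissertation describes.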