
    Biomimetic Based EEG Learning for Robotics Complex Grasping and Dexterous Manipulation

    There have been tremendous efforts to understand the biological nature of human grasping, so that it can be learned and transferred to prosthetic, robotic, and dexterous grasping applications. Several biomimetic methods and techniques have been adopted and applied to analytically comprehend how humans perform grasping and to duplicate that knowledge. A major topic for further study is decoding the EEG brainwaves produced while the fingers and other moving parts are motorized. Accomplishing this involves a number of phases, including recording, pre-processing, filtration, and interpretation of the waves. Two phases in particular have received substantial research attention: classification and decoding of these massive and complex brainwaves, as they are important steps towards understanding patterns during grasping. In this respect, the fundamental objective of this research is to demonstrate how to employ advanced pattern recognition methods, such as fuzzy c-means clustering, to interpret the resulting EEG brainwaves and thereby control a prosthetic or robotic hand from sets of detected EEG brainwaves. There are a number of decoding and classification methods and techniques; here we look into fuzzy clustering blended with principal component analysis (PCA) to support the decoding mechanism. EEG brainwaves recorded during grasping and manipulation were used for this analysis, involving the movement of five fingers during a defined grasping task. The study found that decoding all human finger motions is not straightforward, due to the complexity of grasping tasks. However, the adopted analysis was able to classify and identify the distinct, closely related fundamental events during a simple grasping task.
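    As a minimal sketch of the kind of pipeline described (PCA for dimensionality reduction followed by fuzzy c-means clustering of grasping-related EEG epochs), the snippet below uses synthetic data; the epoch shape, number of components, and number of clusters are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per epoch
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # fuzzified cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)        # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical data: 200 grasping epochs x 32 channels x 128 time samples, flattened.
rng = np.random.default_rng(42)
epochs = rng.standard_normal((200, 32 * 128))

# PCA compresses the raw EEG feature space before clustering.
features = PCA(n_components=10).fit_transform(epochs)

# Cluster into a handful of candidate grasp-related events (the count is assumed).
centers, memberships = fuzzy_c_means(features, n_clusters=5)
print(memberships.argmax(axis=1)[:10])                   # hard labels for the first 10 epochs
```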

    PATTERN: Pain Assessment for paTients who can't TEll using Restricted Boltzmann machiNe.

    Background: Accurately assessing pain in those who cannot self-report it, such as minimally responsive or severely brain-injured patients, is challenging. In this paper, we address this challenge by answering the following questions: (1) does pain have dependency structures in electronic signals and, if so, (2) how can this pattern be applied to predict the state of pain? To this end, we investigated and compared the performance of several machine learning techniques. Methods: We first adopted different strategies in which the collected original n-dimensional numerical data were converted into binary data. Pain states are represented in binary format and bound to the above binary features to construct (n + 1)-dimensional data. We then modelled the joint distribution over all variables in this data using the Restricted Boltzmann Machine (RBM). Results: Seventy-eight pain data items were collected. Four individuals with more than 1000 recorded labels were used in the experiment. The number of available data items for the four patients varied from 22 to 28. The discriminant RBM achieved better accuracy in all four experiments. Conclusion: The experimental results show that the RBM models the distribution of our binary pain data well. We showed that the discriminant RBM can be used in a classification task, and the initial result is advantageous over other classifiers such as a support vector machine (SVM) using a PCA representation and the LDA discriminant method.
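    A minimal sketch of the binarize-then-RBM idea, using scikit-learn's BernoulliRBM stacked with logistic regression as a stand-in for the paper's discriminant RBM; the synthetic signals, the median thresholding, and the hidden-layer size are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: n-dimensional physiological readings with a binary pain label.
rng = np.random.default_rng(0)
signals = rng.standard_normal((1000, 12))          # e.g. vital-sign features per time window
pain = (signals[:, 0] + 0.5 * signals[:, 3] + rng.standard_normal(1000) > 0).astype(int)

# Binarize each feature against its median, mirroring the conversion to binary data.
binary_features = (signals > np.median(signals, axis=0)).astype(float)

# The RBM learns a distribution over the binary features; logistic regression then
# uses the hidden-unit activations discriminatively to predict the pain state.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(model, binary_features, pain, cv=5).mean())
```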

    Biosignals as an Advanced Man-Machine Interface

    As is known for centuries, humans exhibit an electrical profile. This profile is altered through various physiological processes, which can be measured through biosignals; e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such an MMI requires the correct classification of biosignals to emotion classes. This paper explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 24 people. A range of techniques is tested, which resulted in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need of personal profiles. Among various other directives for future research, the results emphasize the need for both personalized biosignal-profiles and the recording of multiple biosignals in parallel
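    A minimal sketch of turning windowed EDA and facial EMG recordings into simple statistical features and classifying four emotion classes; the window length, feature set, and SVM classifier are assumptions rather than the paper's actual framework.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def window_features(eda, emg_channels):
    """Simple per-window statistics for one EDA trace and three facial EMG traces."""
    feats = [eda.mean(), eda.std(), np.diff(eda).mean()]          # tonic level, variability, trend
    for emg in emg_channels:
        feats += [np.abs(emg).mean(), emg.std()]                  # mean rectified EMG, variability
    return feats

# Hypothetical recordings: 240 windows, 4 emotion classes (neutral/positive/negative/mixed).
rng = np.random.default_rng(1)
X = np.array([
    window_features(rng.standard_normal(512), rng.standard_normal((3, 512)))
    for _ in range(240)
])
y = rng.integers(0, 4, size=240)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # ~0.25 on random labels; real data should do better
```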

    PCA-SIR: a new nonlinear supervised dimension reduction method with application to pain prediction from EEG

    Dimension reduction is critical in identifying a small set of discriminative features that are predictive of behavior or cognition from high-dimensional neuroimaging data, such as EEG and fMRI. In the present study, we proposed a novel nonlinear supervised dimension reduction technique, named PCA-SIR (Principal Component Analysis and Sliced Inverse Regression), for analyzing high-dimensional EEG time-course data. Compared with conventional dimension reduction methods used for EEG, such as PCA and partial least squares (PLS), the PCA-SIR method can exploit the nonlinear relationship between class labels (i.e., behavioral or cognitive parameters) and predictors (i.e., EEG samples) to obtain effective dimension reduction (e.d.r.) directions. We applied the new PCA-SIR method to predict subjective pain perception (at a level ranging from 0 to 10) from single-trial laser-evoked EEG time courses. Experimental results on 96 subjects showed that features reduced by PCA-SIR lead to significantly higher prediction accuracy than those obtained by PCA and PLS. Therefore, PCA-SIR could be a promising supervised dimension reduction technique for multivariate pattern analysis of high-dimensional neuroimaging data. © 2015 IEEE.
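    A numpy sketch of the PCA-then-SIR idea under stated assumptions: PCA first compresses the single-trial EEG time courses, then sliced inverse regression bins the sorted pain ratings into slices and eigendecomposes the covariance of the slice means (in whitened coordinates) to obtain e.d.r. directions. The slice count, component counts, and data shapes are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def sir_directions(X, y, n_slices=10, n_directions=2):
    """Sliced inverse regression: e.d.r. directions from slice means of the inverse regression curve."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    # Whitening transform (inverse square root of the predictor covariance).
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice samples by sorted response and average the whitened predictors per slice.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / len(y)) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)
    top = vecs[:, ::-1][:, :n_directions]        # leading eigenvectors of the slice-mean covariance
    return inv_sqrt @ top                        # map back to the original (PCA) coordinates

# Hypothetical single-trial laser-evoked EEG: flattened time courses with pain ratings 0-10.
rng = np.random.default_rng(7)
trials = rng.standard_normal((500, 2048))
ratings = rng.uniform(0, 10, size=500)

X_pca = PCA(n_components=30).fit_transform(trials)   # step 1: PCA compresses the time courses
edr = sir_directions(X_pca, ratings)                 # step 2: SIR finds rating-predictive directions
features = X_pca @ edr                               # low-dimensional features for pain prediction
print(features.shape)
```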

    Classification of EEG signals of user states in gaming using machine learning

    In this research, the brain activity associated with user states was analyzed using machine learning algorithms. When a user interacts with a computer-based system, including playing computer games like Tetris, he or she may experience user states such as boredom, flow, and anxiety. The purpose of this research is to apply machine learning models to electroencephalogram (EEG) signals of three user states (boredom, flow, and anxiety) to identify and classify the EEG correlates of these user states. We focus on three research questions: (i) How well do machine learning models such as support vector machines, random forests, multinomial logistic regression, and k-nearest neighbors classify the three user states? (ii) Can we distinguish the flow state from the other user states using machine learning models? (iii) What are the essential components of EEG signals for classifying the three user states? To extract the critical components of the EEG signals, a feature selection method known as minimum redundancy maximum relevance (mRMR) was implemented. An average accuracy of 85% is achieved for classifying the three user states using the support vector machine classifier.
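    A simplified sketch of greedy mRMR feature selection followed by an SVM, using mutual information as both the relevance and redundancy measure; the feature matrix, class coding, and number of selected features are assumptions, not the thesis's actual EEG features or settings.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mrmr(X, y, n_selected=10):
    """Greedy minimum-redundancy maximum-relevance selection over feature columns."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_selected:
        best_score, best_j = -np.inf, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: average mutual information with the already selected features.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected

# Hypothetical EEG features, e.g. band powers per channel for boredom/flow/anxiety epochs.
rng = np.random.default_rng(3)
X = rng.standard_normal((300, 40))
y = rng.integers(0, 3, size=300)          # 0 = boredom, 1 = flow, 2 = anxiety

cols = mrmr(X, y, n_selected=10)
print(cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean())
```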

    Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses

    In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head-mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction. Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification. Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments. The main objective of this thesis is, using psycho-physiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and virtual environment, to validate immersive VR as an emotion elicitation tool. To do so we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition. This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, the thesis analyses the validity of VR by performing a direct comparison using a highly realistic simulation.
    We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transverse impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, and thus open new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
    Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Unpublished doctoral thesis, by compendium]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
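    As a minimal illustration of the thesis's real-versus-virtual comparison, the sketch below trains an emotion classifier on physiological features recorded in the virtual museum and evaluates it on features from the physical museum; all features, labels, and the SVM classifier are hypothetical placeholders, not the thesis's actual data or models.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features (e.g. heart-rate variability and EDA statistics per participant visit),
# with binary arousal labels; the split mirrors the real-vs-virtual validity comparison.
rng = np.random.default_rng(5)
X_virtual, y_virtual = rng.standard_normal((60, 8)), rng.integers(0, 2, size=60)
X_real, y_real = rng.standard_normal((60, 8)), rng.integers(0, 2, size=60)

# Train the emotion classifier on the virtual-museum sessions...
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_virtual, y_virtual)
# ...and test how well it transfers to the physical museum.
print("real-environment accuracy:", clf.score(X_real, y_real))
```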