216 research outputs found

    Low-cost methodologies and devices applied to measure, model and self-regulate emotions for Human-Computer Interaction

    This thesis explores the different methodologies for analyzing user experience (UX) from a user-centered perspective. These classical, well-founded methodologies only allow the extraction of cognitive data, that is, the data the user is able to communicate consciously. The objective of the thesis is to propose a model based on the extraction of biometric data, complementing the aforementioned cognitive information with emotional (and formal) data. The thesis is not only theoretical: alongside the proposed model (and its evolution), it presents the tests, validations and studies in which the model has been successfully applied, often in collaboration with research groups from other areas.

    User emotional interaction processor: a tool to support the development of GUIs through physiological user monitoring

    Ever since computers entered humans' daily lives, the interaction between the human and digital ecosystems has increased, encouraging the development of smarter and more user-friendly human-computer interfaces. Testing these interfaces, however, has largely been restricted to the conventional "manual" approach, in which participants provide physical input through a keyboard, mouse, or touch screen and must communicate their impressions to the designers. Another method, applied in this dissertation, requires no physical input from participants: Affective Computing. This dissertation presents a tool to support the development of graphical interfaces based on monitoring psychological and physiological aspects of the user (emotions and attention), aiming to improve the end-user experience and, ultimately, the interface design. The development of this tool is described. Feedback from designers at an IT company suggests that the tool is useful, but that the optimized interface it generates still has some flaws, mainly related to the lack of consideration of the overall context in the interface generation process.
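
    As an illustration of how such monitoring data could feed back into interface design, the sketch below aggregates per-widget attention (dwell time) and estimated emotional valence, and ranks widgets that attract attention but elicit negative reactions as redesign candidates. This is not the tool described in the dissertation; all names, fields, and thresholds are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class WidgetObservation:
            widget_id: str       # identifier of the GUI element
            dwell_ms: float      # how long the participant looked at it
            valence: float       # estimated emotional valence, -1.0 .. +1.0

        def rank_widgets_for_review(observations, min_dwell_ms=500.0):
            """Rank widgets for redesign: widgets that attract attention but
            elicit negative valence come first. Scoring is illustrative only."""
            totals = {}
            for obs in observations:
                dwell, val_sum, n = totals.get(obs.widget_id, (0.0, 0.0, 0))
                totals[obs.widget_id] = (dwell + obs.dwell_ms, val_sum + obs.valence, n + 1)
            scored = []
            for widget_id, (dwell, val_sum, n) in totals.items():
                if dwell < min_dwell_ms:
                    continue                  # ignore barely-seen widgets
                scored.append((widget_id, dwell, val_sum / n))
            # Most-viewed, most-negative widgets first.
            scored.sort(key=lambda w: -(w[1] * max(0.0, -w[2])))
            return scored

        obs = [WidgetObservation("submit_button", 1800, -0.6),
               WidgetObservation("help_link", 300, 0.1),
               WidgetObservation("search_field", 900, 0.4)]
        print(rank_widgets_for_review(obs))   # submit_button ranked first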

    Applying affective design patterns in VR firefighter training simulator

    We present a prototype virtual reality (VR) training simulator for firefighters. Our approach is based on the concept of affective patterns in serious games. One of the most serious problems in firefighter training is maintaining the right level of trainee commitment; our idea for addressing repetitive and monotonous exercises is to combine them with exercises implemented in VR. In designing a solution that optimizes the psychological conditions for knowledge acquisition during training, we drew on concepts from Motivational Intensity Theory.
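
    The abstract does not describe the mechanism, but one hedged reading of applying Motivational Intensity Theory in a training simulator is an adaptive-difficulty loop: keep raising demands while the trainee remains engaged, and back off when engagement collapses. The sketch below is a toy illustration under that assumption; the function name, engagement measure, and thresholds are not taken from the paper.

        def adjust_difficulty(difficulty, engagement, low=0.4, high=0.8,
                              step=0.1, max_difficulty=1.0):
            """One step of an adaptive-difficulty loop: 'engagement' is a
            normalised affective estimate in [0, 1] (e.g., derived from
            physiological sensing); 'difficulty' scales the scenario demands."""
            if engagement > high and difficulty < max_difficulty:
                difficulty = min(max_difficulty, difficulty + step)  # room for more challenge
            elif engagement < low:
                difficulty = max(0.0, difficulty - step)  # task may feel unattainable; ease off
            return difficulty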

    Machine Learning in Driver Drowsiness Detection: A Focus on HRV, EDA, and Eye Tracking

    Drowsy driving continues to be a significant cause of road traffic accidents, necessitating the development of robust drowsiness detection systems. This research enhances our understanding of driver drowsiness by analyzing physiological indicators: heart rate variability (HRV), the percentage of eyelid closure over the pupil over time (PERCLOS), blink rate, blink percentage, and electrodermal activity (EDA) signals. Data was collected from 40 participants in a controlled setting, with half of the group driving in a non-monotonous scenario and the other half in a monotonous scenario. Participant fatigue was assessed twice using the Fatigue Assessment Scale (FAS). The research developed three machine learning models: an HRV-Based Model, an EDA-Based Model, and an Eye-Based Model, achieving accuracy rates of 98.28%, 96.32%, and 90% respectively. These models were trained on the aforementioned physiological data, and their effectiveness was evaluated against a range of advanced machine learning models including GRU, Transformers, Mogrifier LSTM, Momentum LSTM, Difference Target Propagation, and Decoupled Neural Interfaces Using Synthetic Gradients. The HRV-Based Model and EDA-Based Model demonstrated robust performance in classifying driver drowsiness. However, the Eye-Based Model had some difficulty accurately identifying instances of drowsiness, likely due to the imbalanced dataset and underrepresentation of certain fatigue states. The study duration, which was confined to 45 minutes, could have contributed to this imbalance, suggesting that longer data collection periods might yield more balanced datasets. The average fatigue scores obtained from the FAS before and after the experiment showed a relatively consistent level of reported fatigue among participants, highlighting the potential impact of external factors on fatigue levels. By integrating the outcomes of these individual models, each demonstrating strong performance, this research establishes a comprehensive and robust drowsiness detection system. The HRV-Based Model displayed remarkable accuracy, while the EDA-Based Model and the Eye-Based Model contributed valuable insights despite some limitations. The research highlights the necessity of further optimization, including more balanced data collection and investigation of individual and external factors impacting drowsiness. Despite the challenges, this work significantly contributes to the ongoing efforts to improve road safety by laying the foundation for effective real-time drowsiness detection systems and intervention methods.
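
    PERCLOS, one of the eye-based indicators above, is simply the proportion of time the eyelid covers the pupil within an analysis window. The following sketch shows one way such a measure, together with blink rate, could be derived from per-frame eye-openness samples; the closed-eye threshold, window length, and function names are illustrative assumptions, not details taken from the study.

        import numpy as np

        def perclos_and_blink_rate(eye_openness, fps, closed_threshold=0.2, window_s=60.0):
            """Estimate PERCLOS and blink rate from per-frame eye-openness values.

            eye_openness: normalised eyelid openness (1.0 fully open, 0.0 fully closed)
            fps:          eye-tracker sampling rate in frames per second
            closed_threshold: openness below which the eye counts as closed (assumed)
            window_s:     length of the analysis window in seconds
            """
            eye_openness = np.asarray(eye_openness, dtype=float)
            window = eye_openness[-int(window_s * fps):]   # most recent window
            closed = window < closed_threshold

            # PERCLOS: fraction of the window during which the eye is closed.
            perclos = closed.mean()

            # Blink rate: closed->open transitions per minute.
            reopenings = np.sum(np.diff(closed.astype(int)) == -1)
            blink_rate = reopenings * 60.0 / (len(window) / fps)
            return perclos, blink_rate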

    iMind: an intelligent tool to support content comprehension

    While reading, difficulty in comprehending content often affects individual performance. Comprehension difficulties can lead, for example, to a slower learning process, lower work quality, and inefficient decision-making. This thesis introduces an intelligent tool called “iMind” which uses wearable devices (e.g., smartwatches) to evaluate user comprehension difficulties and engagement levels while reading digital content. Comprehension difficulty can occur when there are not enough mental resources available for mental processing; the demand that this processing places on those resources is the cognitive load (CL). Fluctuations in CL lead to physiological manifestations of the autonomic nervous system (ANS), such as an increase in heart rate, which can be measured by wearables like smartwatches. With low-cost eye trackers, these ANS measurements can be correlated with content regions. iMind therefore uses a smartwatch and an eye tracker to identify comprehension difficulty at the level of content regions (where the user is looking). The tool uses machine learning techniques to classify content regions as difficult or non-difficult based on biometric and non-biometric features, and classified regions with 75% accuracy and an 80% F-score using linear regression (LR). With the classified regions, it will be possible, in the future, to create contextual support for the reader in real time, e.g., by translating the sentences that induced comprehension difficulty.
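
    As a hedged illustration of the region-level pipeline described above, the sketch below aggregates gaze and heart-rate samples into one feature vector per content region and feeds them to a classifier. The feature set, field names, and the use of scikit-learn's LogisticRegression are assumptions made for the example; the thesis reports its results with LR on biometric and non-biometric features.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def region_features(samples):
            """Aggregate (region_id, fixation_ms, heart_rate_bpm) samples into
            one feature vector per content region the reader looked at."""
            per_region = {}
            for region_id, fixation_ms, hr in samples:
                per_region.setdefault(region_id, []).append((fixation_ms, hr))
            features = {}
            for region_id, values in per_region.items():
                arr = np.asarray(values, dtype=float)
                features[region_id] = [
                    arr[:, 0].sum(),                    # total fixation time on the region
                    arr[:, 0].mean(),                   # mean fixation duration
                    arr[:, 1].mean(),                   # mean heart rate on the region
                    arr[:, 1].max() - arr[:, 1].min(),  # heart-rate swing
                ]
            return features

        # Toy usage: region "r1" is revisited with long fixations and rising heart rate.
        samples = [("r1", 620, 78), ("r2", 240, 72), ("r1", 710, 84), ("r2", 200, 71)]
        X = np.array(list(region_features(samples).values()))
        y = np.array([1, 0])  # illustrative labels: 1 = difficult, 0 = non-difficult
        clf = LogisticRegression().fit(X, y)
        print(clf.predict(X))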

    Eyempact: a low-cost system for gaze and physiology analysis

    Gaze, captured through eye tracking, is the signal usually used to evaluate human attention. Beyond this, bioelectrical signals are also used in this context. The field of research that studies human attention, as well as human behavior and emotions, using these signals is called psychophysiology. Psychophysiological studies usually make use of movies or images to trigger emotions, feelings and behaviors. However, participants do not always pay the expected attention to the stimulus, which may be caused by the stimulus itself. This avoidance behavior, if not intended in the experimental design, can negatively influence the subsequent psychophysiological and behavioral evaluation. It is therefore important to understand the participant's actual attention throughout the experiment and to take special care when conceiving and designing it. The objective of this work is to provide a set of software tools, the Eyempact system, to evaluate the impact of visual stimuli (static images or movies) on participants using eye tracking (with a Tobii eye tracker). Other physiological variables (ECG, EDA, EMG) are also monitored using a BITalino board. Eyempact is capable of (1) synchronously acquiring all this information and (2) providing a comprehensive offline review environment to support domain-specific analysis, namely the identification of psychophysiological features. The system was successfully used in a case study; the review environment proved very useful for visually evaluating specific participant sessions and for exporting metrics and measurements for the data analysis presented in this document.
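
    The synchronous acquisition step above amounts to putting the gaze stream and the BITalino channels on a common timeline. The sketch below shows one simple way this could be done by interpolating a physiological channel onto the eye-tracker timestamps; it assumes both streams share a clock and is not the actual Eyempact implementation.

        import numpy as np

        def align_streams(gaze_t, gaze_xy, bio_t, bio_v):
            """Resample a physiological channel onto eye-tracker timestamps.

            gaze_t:  gaze timestamps in seconds (e.g., from a Tobii eye tracker)
            gaze_xy: (N, 2) array of gaze coordinates
            bio_t:   timestamps of a BITalino channel (ECG, EDA or EMG)
            bio_v:   sample values of that channel

            Linear interpolation brings the physiological samples onto the gaze
            timeline so every gaze point has a matching physiological value.
            """
            bio_on_gaze = np.interp(gaze_t, bio_t, bio_v)
            return np.column_stack([gaze_t, gaze_xy, bio_on_gaze])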

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices below USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and in clinical applications, where gold-standard equipment is both too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.