
    Augmenting dementia cognitive assessment with instruction-less eye-tracking tests

    © 2020 IEEE. Eye-tracking technology is an innovative tool that holds promise for enhancing dementia screening. In this work, we introduce a novel way of extracting salient features directly from the raw eye-tracking data of a mixed sample of dementia patients during a novel instruction-less cognitive test. Our approach is based on self-supervised representation learning: by first training a deep neural network to solve a pretext task using well-defined available labels (e.g. recognising distinct cognitive activities in healthy individuals), the network encodes high-level semantic information which is useful for solving other problems of interest (e.g. dementia classification). Inspired by previous work in explainable AI, we use the Layer-wise Relevance Propagation (LRP) technique to describe our network's decisions in differentiating between the distinct cognitive activities. The extent to which eye-tracking features of dementia patients deviate from healthy behaviour is then explored, followed by a comparison between self-supervised and handcrafted representations in discriminating between participants with and without dementia. Our findings not only reveal novel self-supervised learning features that are more sensitive than handcrafted features in detecting performance differences between participants with and without dementia across a variety of tasks, but also validate that instruction-less eye-tracking tests can detect oculomotor biomarkers of dementia-related cognitive dysfunction.
This work highlights the contribution of self-supervised representation learning techniques in biomedical applications, where the small number of patients, the non-homogeneous presentation of the disease, and the complexity of the setting can challenge state-of-the-art feature extraction methods. Peer reviewed
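The abstract contrasts learned representations with handcrafted eye-tracking features. As a hedged illustration of the handcrafted side (not the paper's actual pipeline), the sketch below extracts fixations from raw gaze samples with a dispersion-threshold (I-DT) detector; the dispersion threshold and minimum window length are illustrative assumptions.

```python
# Minimal sketch of handcrafted eye-tracking feature extraction:
# a dispersion-threshold (I-DT) fixation detector over raw gaze samples.
# Thresholds are illustrative assumptions, not values from the paper.

def _dispersion(window):
    """Dispersion of a gaze window: x-range plus y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_samples=5):
    """Group consecutive (x, y) gaze samples into fixations.

    A candidate window of min_samples grows while its dispersion stays
    under max_dispersion; each accepted window is reported as a
    half-open sample range (start, end).
    """
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if _dispersion(samples[start:end]) <= max_dispersion:
            # Expand the window while the dispersion remains acceptable.
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations
```

From the detected ranges, summary features such as fixation count and mean fixation duration can then be derived.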

    Robotic Autism Rehabilitation by Wearable Brain-Computer Interface and Augmented Reality

    An instrument based on the integration of a Brain-Computer Interface (BCI) and Augmented Reality (AR) is proposed for robotic autism rehabilitation. Flickering stimuli at fixed frequencies appear on the display of AR glasses. When the user focuses on one of the stimuli, a Steady-State Visual Evoked Potential (SSVEP) occurs over the occipital region. A single-channel electroencephalographic Brain-Computer Interface detects the elicited SSVEP and sends the corresponding command to a mobile robot. The device's high wearability (single channel and dry electrodes) and training-free usability are fundamental for acceptance by children with Autism Spectrum Disorder (ASD). Effectively controlling the movements of a robot through a new channel enhances rehabilitation engagement and effectiveness. A case study at an accredited rehabilitation centre on 10 healthy adult subjects highlighted an average accuracy higher than 83%. Preliminary further tests at the Department of Translational Medical Sciences of the University of Naples Federico II on 3 ASD patients between 8 and 10 years old provided positive feedback on device acceptance and attentional performance.
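One common way a single-channel SSVEP BCI maps an EEG segment to a command is to compare spectral power at each stimulus flicker frequency and pick the strongest, sketched below under stated assumptions; the 10/12/15 Hz frequencies and 256 Hz sampling rate are illustrative, not the paper's parameters.

```python
import numpy as np

# Hedged sketch of frequency-domain SSVEP detection: the flicker
# frequency whose FFT bin carries the most power is taken as the
# attended stimulus. Frequencies and sampling rate are illustrative.

def classify_ssvep(eeg, fs, stimulus_freqs):
    """Return the stimulus frequency with the highest spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in stimulus_freqs:
        # Power in the FFT bin nearest to each flicker frequency.
        bin_idx = int(np.argmin(np.abs(freqs - f)))
        powers.append(spectrum[bin_idx])
    return stimulus_freqs[int(np.argmax(powers))]
```

In practice a detected frequency would then be translated into a robot command (e.g. forward, left, right), one command per stimulus.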

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Understanding learning within a commercial video game: A case study

    There has been increasing interest in the debate on the value and relevance of using video games for learning. Some of this interest stems from frustration with current educational methods; some also stems from observations of the large numbers of children who play video games. This paper finds that children can learn basic construction skills from playing a video game called World of Goo. The study also employed novel eye-tracking technology to measure endogenous eye blinks and eye gaze fixations. Measures of both these indicators of cognitive processing further suggested that children in the study learned to play the two video games, World of Goo and Bad Piggies. Overall, the results of the study provide further support for the potential for children to learn by playing commercial video games.
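The endogenous blinks mentioned above are typically recovered from an eye tracker's pupil signal, where a blink shows up as a short run of dropout (near-zero) samples. A minimal sketch, with an illustrative dropout threshold not taken from the study:

```python
# Hedged sketch: count blinks in a pupil-size trace, treating each run
# of consecutive sub-threshold (dropout) samples as one blink.
# The threshold value is an illustrative assumption.

def count_blinks(pupil, dropout=0.5):
    """Count runs of consecutive samples below the dropout threshold."""
    blinks = 0
    in_blink = False
    for value in pupil:
        if value < dropout and not in_blink:
            blinks += 1            # a new dropout run begins
            in_blink = True
        elif value >= dropout:
            in_blink = False       # the run has ended
    return blinks
```

Blink rate over time can then serve as one coarse indicator of cognitive processing, alongside fixation measures.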

    Workload-aware systems and interfaces for cognitive augmentation

    In today's society, our cognition is constantly influenced by information intake, attention switching, and task interruptions. This increases the difficulty of a given task, adds to the existing workload, and can compromise cognitive performance. The human body expresses the use of cognitive resources through physiological responses when confronted with high cognitive workload: it temporarily mobilizes additional resources to deal with the demand, at the cost of accelerated mental exhaustion. We predict that recent developments in physiological sensing will increasingly enable user interfaces that are aware of the user's cognitive capacities and hence able to intervene when high or low states of cognitive workload are detected. In this thesis, we initially focus on determining opportune moments for cognitive assistance. Subsequently, we investigate, in a user-centric design process, which feedback modalities are desirable for cognitive assistance. We present design requirements for how cognitive augmentation can be achieved using interfaces that sense cognitive workload. We then investigate different physiological sensing modalities to enable suitable real-time assessments of cognitive workload. We provide empirical evidence that the human brain is sensitive to fluctuations in cognitive resting states, hence making cognitive effort measurable. Firstly, we show that electroencephalography is a reliable modality for assessing the mental workload generated during user interface operation. Secondly, we use eye tracking to evaluate changes in eye movements and pupil dilation to quantify different workload states. The combination of machine learning and physiological sensing resulted in suitable real-time assessments of cognitive workload. The use of physiological sensing enables us to derive when cognitive augmentation is suitable. Based on our inquiries, we present applications that regulate cognitive workload in home and work settings.
We deployed an assistive system in a field study to investigate the validity of our derived design requirements. After finding that workload was indeed mitigated, we investigated how cognitive workload can be visualized to the user. We present an implementation of a biofeedback visualization that helps to improve the understanding of brain activity. A final study shows how cognitive workload measurements can be used to predict the efficiency of information intake through reading interfaces. Here, we conclude with use cases and applications which benefit from cognitive augmentation. This thesis investigates how assistive systems can be designed to implicitly sense and utilize cognitive workload for input and output. To do so, we measure cognitive workload in real time by collecting behavioral and physiological data from users and analyze this data to support users through assistive systems that adapt their interfaces according to the currently measured workload. Our overall goal is to extend new and existing context-aware applications with cognitive workload as an additional factor. We envision Workload-Aware Systems and Workload-Aware Interfaces as an extension of the context-aware paradigm. To this end, we conducted eight research inquiries during this thesis to investigate how to design and create workload-aware systems. Finally, we present our vision of future workload-aware systems and workload-aware interfaces. Due to the scarce availability of open physiological data sets, reference implementations, and methods, previous context-aware systems were limited in their ability to utilize cognitive workload for user interaction. Together with the collected data sets, we expect this thesis to pave the way for methodical and technical tools that integrate workload-awareness as a factor for context-aware systems.
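The combination of machine learning and physiological sensing described above can be sketched, under stated assumptions, as a simple classifier over physiological features. The nearest-centroid rule, the pupil-dilation and blink-rate features, and the synthetic values below are all illustrative, not the thesis's actual models or data.

```python
import numpy as np

# Hedged sketch: classify high vs. low cognitive workload from
# physiological feature vectors (here, hypothetical pupil-dilation and
# blink-rate values) with a nearest-centroid rule.

def fit_centroids(features, labels):
    """Return the per-class mean feature vector for each label."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def predict(centroids, feature):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    x = np.asarray(feature)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

A workload-aware interface would call `predict` on a sliding window of features and adapt itself when the predicted state changes.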

    Eye Movement and Pupil Measures: A Review

    Our subjective visual experiences involve complex interactions between our eyes, our brain, and the surrounding world. These interactions give us the sense of sight: colour, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially considering our ability to make sense of these signals given different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movements and pupil measures. We first describe the main oculomotor events studied in the literature, and the characteristics of these events exploited by different measures. Next, we review various eye movement and pupil measures from prior literature. Finally, we discuss our observations based on applications of these measures, the benefits and practical challenges involved, and our recommendations on future eye-tracking research directions.
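One pupil measure that reviews of this kind commonly discuss is subtractive baseline correction: task-evoked dilation is reported relative to a pre-stimulus baseline so that absolute pupil size differences between people cancel out. A minimal sketch with illustrative values:

```python
# Hedged sketch of subtractive baseline correction for pupil data:
# subtract the mean of the pre-stimulus samples from each task sample.
# The window lengths and values are illustrative assumptions.

def baseline_corrected_dilation(pupil, baseline_samples):
    """Return task-period dilation relative to the pre-stimulus baseline.

    The first `baseline_samples` values form the baseline; the rest are
    reported as deviations from its mean.
    """
    baseline = sum(pupil[:baseline_samples]) / baseline_samples
    return [p - baseline for p in pupil[baseline_samples:]]
```

Divisive correction (dividing by the baseline instead of subtracting it) is an alternative convention with the same structure.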

    Combining EEG and Eye Tracking: Using Fixation-Locked Potentials in Visual Search

    Visual search is a complex task that involves many neural pathways to identify relevant areas of interest within a scene. Humans remain a critical component in visual search tasks, as they can effectively perceive anomalies within complex scenes. However, this task can be challenging, particularly under time pressure. In order to improve visual search training and performance, an objective, process-based measure is needed. Eye-tracking technology can be used to drive real-time parsing of EEG recordings, providing an indication of the analysis process. In the current study, eye fixations were used to generate ERPs during a visual search task. Clear differences were observed as a function of performance, suggesting that neurophysiological signatures could be developed to prevent errors in visual search tasks.
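The core mechanics of fixation-locked ERP generation can be sketched as epoching the EEG around each fixation onset and averaging the epochs; the window lengths and single-channel setup below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Hedged sketch of fixation-locked averaging: cut a single-channel EEG
# trace into epochs around fixation onsets (in samples) and average
# them to estimate the fixation-locked ERP.

def fixation_locked_erp(eeg, fixation_onsets, pre=2, post=4):
    """Average EEG epochs spanning [onset - pre, onset + post) samples.

    Onsets whose window would run off either end of the trace are skipped.
    """
    epochs = []
    for onset in fixation_onsets:
        if onset - pre >= 0 and onset + post <= len(eeg):
            epochs.append(eeg[onset - pre:onset + post])
    return np.mean(epochs, axis=0)
```

Comparing the averaged waveforms for fixations on targets versus distractors (or before correct versus erroneous responses) is what would reveal the performance-related differences the abstract describes.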

    How neurophysiological measures can be used to enhance the evaluation of remote tower solutions

    New solutions in operational environments are often evaluated, alongside objective measurements, by using subjective assessment and judgement from experts. However, it has been demonstrated that subjective measures suffer from poor resolution due to high intra- and inter-operator variability. Performance measures, if available, provide only partial information, since an operator could achieve the same performance while experiencing a different workload. In this study we aimed to demonstrate i) the higher resolution of neurophysiological measures in comparison to subjective ones, and ii) how the simultaneous employment of neurophysiological and behavioural measures could allow a holistic assessment of operational tools. In this regard, we tested the effectiveness of an EEG-based neurophysiological index (WEEG index) in comparing two different solutions (i.e. Normal and Augmented) in terms of experienced workload. Sixteen professional Air Traffic Controllers (ATCOs) were asked to perform two operational scenarios. Galvanic Skin Response (GSR) was also recorded to evaluate the level of arousal (i.e. operator involvement) during the execution of the two scenarios. The NASA-TLX questionnaire was used to evaluate perceived workload, and an expert assessed the performance achieved by the ATCOs. Finally, reaction times on specific operational events relevant for the assessment of the two solutions were also collected. Results highlighted that the Augmented solution induced a local increase in participants' performance (reaction times). At the same time, this solution induced an increase in the workload experienced by the participants (WEEG). This increase is still acceptable, since it did not negatively impact performance and is to be understood as a consequence of the higher engagement of the ATCOs. 
This behavioural effect is fully in line with the physiological results obtained in terms of arousal (GSR), which increased during the scenario with augmentation. Subjective measures (NASA-TLX) did not highlight any significant variation in perceived workload. These results suggest that neurophysiological measures provide additional information beyond behavioural and subjective ones, even at the level of a few seconds, and that their employment during pre-operational activities (e.g. the design process) could allow a more holistic and accurate evaluation of new solutions.
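The abstract does not specify how the WEEG index is computed. As a hedged illustration of the general idea, a widely used EEG workload proxy is the ratio of frontal theta (4–8 Hz) to parietal alpha (8–12 Hz) band power; the sampling rate, band limits, and signals below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hedged sketch of a generic EEG workload index (not the paper's WEEG):
# the ratio of frontal theta to parietal alpha band power, where higher
# values are commonly interpreted as higher workload.

def band_power(signal, fs, low, high):
    """Integrated FFT power of a signal in the [low, high) Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].sum()

def workload_index(frontal_channel, parietal_channel, fs):
    """Theta/alpha power ratio across two channels."""
    theta = band_power(frontal_channel, fs, 4.0, 8.0)
    alpha = band_power(parietal_channel, fs, 8.0, 12.0)
    return theta / alpha
```

An index like this, computed over short sliding windows, is what makes second-level workload comparisons between the Normal and Augmented conditions possible.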
    • 
