15 research outputs found

    Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood that each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best performing model reduces error rate by 50% compared to 100 ms uniform dwell time while maintaining a similar response time. It reduces response time by 60% compared to 300 ms uniform dwell time while maintaining a similar error rate. Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on 30 March, 2018, available online: http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of the final published article, please access: https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
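    The core idea — shorter dwell thresholds for likelier hyperlinks — can be sketched as a mapping from each link's estimated selection probability to a dwell time between the 100 ms and 300 ms extremes quoted above. The function name and the linear interpolation rule below are illustrative assumptions, not the paper's actual probabilistic policy.

    ```python
    def dwell_times(link_probs, t_min=0.1, t_max=0.3):
        """Assign shorter dwell times (seconds) to more likely hyperlinks.

        link_probs: dict mapping link id -> estimated selection probability.
        t_min/t_max: dwell bounds matching the 100 ms / 300 ms uniform
        baselines from the abstract; the linear mapping is an assumption.
        """
        if not link_probs:
            return {}
        p_max = max(link_probs.values())
        return {
            link: t_max - (t_max - t_min) * (p / p_max if p_max > 0 else 0.0)
            for link, p in link_probs.items()
        }

    probs = {"back_link": 0.6, "news": 0.3, "login": 0.1}
    times = dwell_times(probs)
    # The most likely link gets the shortest dwell threshold.
    assert times["back_link"] <= times["news"] <= times["login"]
    ```

    Any monotone decreasing map from probability to dwell time preserves the qualitative behavior; the paper compares several heuristic and probabilistic variants of this mapping.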

    Gaze-Driven Video Games as Vision Training: A Case Study in Cerebral Palsy

    Cerebral Palsy is a disorder that primarily affects motor control, but frequently impacts gaze behavior as well. Due to the primary therapeutic emphasis on motor symptoms, there is a dearth of therapies available for gaze behavior in Cerebral Palsy. Based on research suggesting that video games and Augmented Reality have been useful for improvement of gaze behavior and rehabilitation for other impaired individuals, this case study applies a set of therapeutic gaze-dependent Augmented Reality video games to an adolescent male with Spastic Diplegic Cerebral Palsy. The video games were determined to be a good fit for the participant by the specificity of their incorporated training principles targeting fixation and saccadic control. The participant underwent training using the video games in order to determine their effects on fixation and saccadic control, the results of which indicate practice-dependent improvements. Further, the results support the participant's ability to engage with gaze-driven accessibility software, providing for augmentation of his communication options.

    Estimating Cognitive Workload in an Interactive Virtual Reality Environment Using EEG

    With the recent surge of affordable, high-performance virtual reality (VR) headsets, there is unlimited potential for applications ranging from education, to training, to entertainment, to fitness and beyond. As these interfaces continue to evolve, passive user-state monitoring can play a key role in expanding the immersive VR experience, and tracking activity for user well-being. By recording physiological signals such as the electroencephalogram (EEG) during use of a VR device, the user's interactions in the virtual environment could be adapted in real time based on the user's cognitive state. Current VR headsets provide a logical, convenient, and unobtrusive framework for mounting EEG sensors. The present study evaluates the feasibility of passively monitoring cognitive workload via EEG while performing a classical n-back task in an interactive VR environment. Data were collected from 15 participants and the spatio-spectral EEG features were analyzed with respect to task performance. The results indicate that scalp measurements of electrical activity can effectively discriminate three workload levels, even after suppression of co-varying high-frequency activity.
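    Spectral EEG workload features of the kind analyzed here are typically per-channel band powers. A minimal sketch of extracting them for one channel, assuming a single-FFT power estimate (rather than Welch averaging) and standard theta/alpha/beta band edges — the study's actual bands and channels are not specified in this abstract:

    ```python
    import numpy as np

    # Assumed band edges in Hz; workload studies commonly focus on theta/alpha.
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg, fs):
        """Mean spectral power per band for one EEG channel.

        eeg: 1-D array of samples; fs: sampling rate in Hz.
        Returns dict of band name -> average power, a typical feature
        vector fed to a workload classifier.
        """
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
        return {
            name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()
        }

    # Sanity check: a 10 Hz sinusoid concentrates its power in the alpha band.
    fs = 250
    t = np.arange(fs * 2) / fs
    feats = band_powers(np.sin(2 * np.pi * 10 * t), fs)
    assert feats["alpha"] > feats["theta"] and feats["alpha"] > feats["beta"]
    ```

    In practice these features would be computed per channel and per time window, concatenated across the scalp montage, and passed to a three-class workload classifier.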

    Augmented Sustainability Reports: A Design Science Approach

    Sustainability reports provide stakeholders with information about a company’s efforts to balance its economic, ecological and social goals. Because of their influence on a company’s image as well as on the customers’ buying and shareholders’ investment decisions, sustainability reports are an integral part of today’s corporate online communication. Following a design science research approach, this paper describes the design, prototypical implementation and evaluation of augmented sustainability reports. In contrast to traditional PDF- or print media-based sustainability reports, augmented sustainability reports contain multimedia contextual information that is displayed depending on the user’s gaze position. In our prototype the gaze position is simulated using mouse tracking. The comparative evaluation of the prototype was conducted via a quantitative questionnaire based on the technology acceptance model (TAM). Additionally, qualitative feedback was gathered during the course of the evaluation. Traditional and augmented sustainability reports were compared on the basis of the questionnaire results, which reveal room for improvement of the prototype as well as possible starting points for future research. Overall, the evaluation results indicate that our test users had a strong preference for the augmented sustainability report compared to the PDF-based report, even though both alternatives had identical content.

    Controlling a Mouse Pointer with a Single-Channel EEG Sensor

    (1) Goals: The purpose of this study was to analyze the feasibility of using the information obtained from a one-channel electroencephalography (EEG) signal to control a mouse pointer. We used a low-cost headset, with one dry sensor placed at the FP1 position, to steer a mouse pointer and make selections through a combination of the user’s attention level with the detection of voluntary blinks. There are two types of cursor movements: spinning and linear displacement. A sequence of blinks allows for switching between these movement types, while the attention level modulates the cursor’s speed. The influence of the attention level on performance was studied. Additionally, Fitts’ model and the evolution of the emotional states of participants, among other trajectory indicators, were analyzed. (2) Methods: Twenty participants distributed into two groups (Attention and No-Attention) performed three runs, on different days, in which 40 targets had to be reached and selected. Target positions and distances from the cursor’s initial position were chosen, providing eight different indices of difficulty (IDs). A self-assessment manikin (SAM) test and a final survey provided information about the system’s usability and the emotions of participants during the experiment. (3) Results: The performance was similar to some brain–computer interface (BCI) solutions found in the literature, with an averaged information transfer rate (ITR) of 7 bits/min. Concerning cursor navigation, some trajectory indicators showed our proposed approach to be as good as common pointing devices, such as joysticks, trackballs, and so on. Only one of the 20 participants reported difficulty in managing the cursor and, according to the tests, most of them assessed the experience positively. Movement times and hit rates were significantly better for participants belonging to the attention group. (4) Conclusions: The proposed approach is a feasible low-cost solution to manage a mouse pointer.
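    The quoted 7 bits/min can be related to the standard Wolpaw information-transfer-rate formula used in the BCI literature: bits per selection as a function of the number of targets and accuracy, scaled by selections per minute. The specific target count, accuracy, and trial duration below are illustrative assumptions, not values from the study.

    ```python
    import math

    def wolpaw_itr(n_targets, accuracy, trial_seconds):
        """Wolpaw information transfer rate in bits/min.

        n_targets: number of selectable targets per trial.
        accuracy: probability of a correct selection (0..1).
        trial_seconds: average time per selection.
        """
        if accuracy <= 0 or accuracy >= 1:
            # Degenerate cases: perfect accuracy gives log2(N) bits per trial.
            bits = math.log2(n_targets) if accuracy >= 1 else 0.0
        else:
            bits = (math.log2(n_targets)
                    + accuracy * math.log2(accuracy)
                    + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
        return bits * 60.0 / trial_seconds

    # Perfect accuracy on 4 targets, one selection every 10 s -> 12 bits/min.
    assert abs(wolpaw_itr(4, 1.0, 10.0) - 12.0) < 1e-9
    ```

    Note that at chance-level accuracy the formula yields 0 bits/min, which is why ITR is reported alongside raw movement-time and hit-rate indicators.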

    Estimating Cognitive Workload in an Interactive Virtual Reality Environment Using Electrophysiological and Kinematic Activity

    As virtual reality (VR) technology continues to gain prominence in commercial, educational, recreational and research applications, there is increasing interest in incorporating physiological sensors in VR devices for passive user-state monitoring to eventually increase the sense of immersion. By recording physiological signals such as the electroencephalogram (EEG), electromyography (EMG) or kinematic parameters during the use of a VR device, the user’s interactions in the virtual environment could be adapted in real time based on the user’s cognitive state. This dissertation evaluates the feasibility of passively monitoring cognitive workload via electrophysiological and kinematic activity while performing a classical n-back task in an interactive VR environment. The results indicate that scalp measurements of electrical activity and controller and headset tracking of kinematic activity can effectively discriminate three workload levels. Since motion and muscle tension can create co-varying task-related artifacts in EEG sensors mounted to the VR headset, decontamination algorithms were developed. The newly developed warp correlation filter (WCF) and linear regression denoising were applied to the EEG, which could significantly decrease the influence of these artifacts. Analysis of the scalp-recorded spectrum suggests two transient activities (termed pulse-decay effects) that impact feature extraction, modeling, and overall interpretation of workload estimation from scalp recordings. The best classification accuracy could be achieved by combining EMG, EEG and kinematic activity features using an artificial neural network (ANN).

    Studies on the impact of assistive communication devices on the quality of life of patients with amyotrophic lateral sclerosis

    Doctoral thesis, Biomedical Sciences (Neurosciences), Universidade de Lisboa, Faculdade de Medicina, 2016. Amyotrophic Lateral Sclerosis (ALS) is a progressive neuromuscular disease with rapid and generalized degeneration of motor neurons. Patients with ALS experience a relentless decline in functions that affect performance of most activities of daily living (ADL), such as speaking, eating, walking or writing. For this reason, dependence on caregivers grows as the disease progresses. Management of the respiratory system is one of the main concerns of medical support, since respiratory failure is the most common cause of death in ALS. Due to increasing muscle weakness, most patients experience a dramatic decrease in speech intelligibility and difficulties in using the upper limbs (UL) for writing. There is growing evidence that mild cognitive impairment is common in ALS, but most patients are self-conscious of their difficulties in communicating and, in very severe stages, locked-in syndrome can occur. When no resources other than speech and writing are used to assist communication, patients are deprived of expressing needs or feelings, making decisions and keeping social relationships. Further, caregivers feel increased dependence due to difficulties in communication with others and get frustrated about difficulties in understanding their partners’ needs. Support for communication is then very important to improve the quality of life of both patients and caregivers; however, this has been poorly investigated in ALS. Assistive communication devices (ACD) can support patients by providing a diversity of tools for communication as they progressively lose speech. ALS, in common with other degenerative conditions, introduces an additional challenge for the field of ACD: as the disease progresses, technologies must adapt to the changing condition of the user.
In early stages, patients may need speech synthesis in a mobile device, if dysarthria is one of the initial symptoms, or keyboard modifications, as weakness in the UL increases. When upper-limb dysfunction is severe, different input technologies may be adapted to capture voluntary control (for example, eye-tracking devices). Despite the enormous advances in the field of Assistive Technologies in the last decade, difficulties in clinical support for the use of ACD persist. Among the main reasons for these difficulties are the lack of assessment tools to evaluate communication needs, determine proper input devices and indicate changes over disease progression, and the absence of clinical evidence that ACD have a relevant impact on the quality of life of affected patients. For this set of reasons, support with communication tools is delayed to stages where patients are severely disabled. Often in these stages, patients face additional clinical complications and increased dependence on their caregivers’ decisions, which increase the difficulty in adapting to new communication tools. This thesis addresses the role of assistive technologies in the quality of life of early-affected patients with ALS. It also includes the study of assessment tools that can improve longitudinal evaluation of the communication needs of patients with ALS. We longitudinally evaluated a group of 30 patients with bulbar-onset ALS and 17 caregivers, during 2 to 29 months. Patients were assessed during their regular clinical appointments, in the Hospital de Santa Maria-Centro Hospitalar Lisboa Norte. Evaluation of patients was based on validated instruments for assessing the Quality of Life (QoL) of patients and caregivers, and on methodologies for recording communication and measuring its performance (including speech, handwriting and typing).
We tested the impact of early support with ACD on the QoL of patients with ALS, using a randomized, prospective, longitudinal design. Patients were able to learn and improve their skills in using communication tools based on electronic assistive devices. We found a positive impact of ACD on the psychological and wellbeing domains of quality of life in patients, as well as on the support and psychological domains in caregivers. We also studied performance of communication (words per minute) using the UL. Performance in handwriting may decline faster than performance in typing, supporting the idea that touchscreen-based ACD support communication for longer than handwriting. From longitudinal recordings of speech and typing activity we could observe that ACD can provide tools to detect early markers of bulbar and UL dysfunction in ALS. The methodologies used in this research for recording and assessing function in communication can be replicated in the home environment and form part of the original contributions of this research. Implementation of remote monitoring tools in daily use of ACD, based on these methodologies, is discussed. Considering those patients who receive late support for the use of ACD, lack of time or daily support to learn how to control complex input devices may hinder their use. We developed a novel device to explore the detection and control of various residual movements, based on accelerometry, electromyography and force sensors, as input signals for communication. The aim of this input device was to provide a tool to explore new communication channels in patients with generalized muscle weakness. This research contributed novel tools from the engineering field to the study of assistive communication in patients with ALS. The methodologies developed in this work can be further applied to the study of the impact of ACD in other neurodegenerative diseases that affect speech and motor control of the UL.