
    A longitudinal study of text entry by gazing and smiling

    Face Interface is a wearable device that combines voluntary gaze direction and facial muscle activations for pointing at and selecting objects on a computer screen, respectively. This thesis presents a longitudinal study of text entry with Face Interface, aiming to investigate how text entry performance develops over an extended period. Twelve voluntary participants took part in an experiment consisting of ten 15-minute sessions. In each session, the participant's task was to write text for fifteen minutes using Face Interface and an on-screen keyboard; characters were pointed at by gaze and selected by smiling. The results showed that the overall mean text entry rate across all sessions was 5.39 words per minute (wpm), rising from 3.88 wpm in the first session to 6.59 wpm in the tenth. The overall mean minimum string distance (MSD) error rate for all sessions was 0.25; it was 0.50 in the first session and 0.05 in the tenth. The overall mean keystrokes per character (KSPC) value for all sessions was 1.18; it was 1.26 in the first session and 1.2 in the tenth. Subjective ratings showed that Face Interface was easy to use: the overall operation of Face Interface was rated 5.9/7.0 in the tenth session, and ratings were positive in all categories in the tenth session.
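
The metrics reported above (WPM, MSD error rate, KSPC) are standard in text-entry research and can be computed as follows. This is an illustrative sketch; function and variable names are my own, not from the thesis.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum string distance (MSD) between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute, using the conventional 5 characters per word."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def msd_error_rate(presented: str, transcribed: str) -> float:
    """MSD error rate: edit distance normalized by the longer string length."""
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

def kspc(keystrokes: int, transcribed: str) -> float:
    """Keystrokes per character; 1.0 means no corrective keystrokes were needed."""
    return keystrokes / len(transcribed)
```

A KSPC above 1.0, as in the sessions reported above, indicates that participants produced corrective keystrokes (e.g. backspaces) beyond the characters in the final text.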

    Artificial Intelligence for Suicide Assessment using Audiovisual Cues: A Review

    Death by suicide is the seventh leading cause of death worldwide. Recent advances in Artificial Intelligence (AI), specifically AI applications in image and voice processing, have created a promising opportunity to revolutionize suicide risk assessment. Subsequently, we have witnessed a fast-growing body of research that applies AI to extract audiovisual non-verbal cues for mental illness assessment. However, the majority of recent works focus on depression, despite the evident differences between the symptoms and non-verbal cues of depression and those of suicidal behavior. This paper reviews recent works that study suicide ideation and suicidal behavior detection through audiovisual feature analysis, mainly analysis of suicidal voice/speech acoustic features and suicidal visual cues. Automatic suicide assessment is a promising research direction that is still in its early stages; accordingly, there is a lack of large datasets that could be used to train the machine learning and deep learning models proven effective in other, similar tasks. Comment: Manuscript submitted to Artificial Intelligence Reviews (2022).

    A Survey of Interaction Techniques and Devices for Large High Resolution Displays

    Innovations in large high-resolution wall-sized displays have been yielding benefits to visualizations in industry and academia, leading to a rapid increase in their adoption. In such scenarios, the displayed visual information tends to be larger than the user's field of view, hence the need to move away from traditional interaction methods towards more suitable interaction devices and techniques. This paper explores the state of the art of such technologies for large high-resolution displays.

    Integrated electromyogram and eye-gaze tracking cursor control system for computer users with motor disabilities

    This research pursued the conceptualization, implementation, and testing of a system that allows computer cursor control without requiring hand movement. The target users of this system are individuals who are unable to use their hands because of spinal dysfunction or other afflictions. The system inputs consisted of electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an eye-gaze tracking (EGT) system. Each input was processed by an algorithm that produced its own cursor update information, and these algorithm outputs were fused to produce effective and efficient cursor control. Experiments were conducted to compare the performance of EMG/EGT, EGT-only, and mouse cursor controls. The experiments revealed that, although EMG/EGT control was slower than EGT-only and mouse control, it effectively controlled the cursor without a spatial accuracy limitation and also facilitated a reliable click operation.
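
The fusion idea above — gaze supplies the cursor position, facial EMG supplies the click — can be sketched as a single control loop. This is a minimal illustration, not the authors' implementation; the smoothing factor, EMG threshold, and all names are assumptions.

```python
def emg_egt_cursor(gaze_samples, emg_samples, emg_threshold=0.6, alpha=0.3):
    """Yield (x, y, click) cursor updates from paired gaze/EMG streams.

    gaze_samples: iterable of (x, y) point-of-gaze coordinates
    emg_samples:  iterable of rectified EMG amplitudes, one per gaze sample
    alpha:        exponential-smoothing factor to damp gaze jitter (assumed)
    """
    sx = sy = None
    for (gx, gy), emg in zip(gaze_samples, emg_samples):
        if sx is None:
            sx, sy = float(gx), float(gy)         # initialise at first fix
        else:
            sx += alpha * (gx - sx)               # smooth gaze to stabilise
            sy += alpha * (gy - sy)               # the on-screen cursor
        click = emg > emg_threshold               # muscle activation = button
        yield sx, sy, click
```

Keeping the click channel independent of the position channel is what gives the reliable click operation the abstract reports: a facial activation registers a selection regardless of residual gaze jitter.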

    Development of an Eye-Gaze Input System With High Speed and Accuracy through Target Prediction Based on Homing Eye Movements

    In this study, a method is proposed to predict a target on the basis of the trajectory of eye movements and to increase pointing speed while maintaining high predictive accuracy. First, a predictive method based on ballistic (fast) eye movements (Approach 1) was evaluated in terms of pointing speed and predictive accuracy. In Approach 1, the so-called Midas touch problem (pointing to an unintended target) occurred, particularly when a small number of samples was used to predict a target. Therefore, to overcome the poor predictive accuracy of Approach 1, we developed a new predictive method (Approach 2) that uses homing (slow) eye movements rather than ballistic (fast) eye movements. Approach 2 overcame the inaccurate prediction of Approach 1, shortening the pointing time while maintaining high predictive accuracy.
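
The distinction between ballistic and homing phases can be illustrated with a simple speed-thresholded predictor: classify gaze samples by inter-sample speed and predict the target nearest the slow, homing samples. The threshold, sampling interval, and all names below are illustrative assumptions, not the authors' values or algorithm.

```python
import math

def predict_target(trajectory, targets, homing_speed=500.0, dt=0.01):
    """Predict which target a gaze trajectory is homing toward.

    trajectory: list of (x, y) gaze samples taken every dt seconds
    targets:    list of (x, y) target centres
    Returns the index of the predicted target, or None if no homing
    (slow) phase has been observed yet.
    """
    homing = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / dt     # px/s between samples
        if speed < homing_speed:                      # slow -> homing phase
            homing.append((x1, y1))
    if not homing:
        return None                                   # still ballistic
    cx = sum(p[0] for p in homing) / len(homing)      # centroid of homing
    cy = sum(p[1] for p in homing) / len(homing)      # samples
    return min(range(len(targets)),
               key=lambda i: math.hypot(targets[i][0] - cx, targets[i][1] - cy))
```

Returning None during the ballistic phase is one way to avoid the Midas touch problem the abstract describes: no prediction is committed until slow homing samples accumulate.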

    Evaluation of head-free eye tracking as an input device for air traffic control

    The purpose of this study was to investigate the possibility of integrating a free-head-motion eye-tracking system as an input device in air traffic control (ATC) activity. Sixteen participants used an eye tracker to select targets displayed on a screen as quickly and accurately as possible. We assessed the impact of the presence of visual feedback about gaze position and of the target selection method on selection performance under different difficulty levels induced by variations in target size and target-to-target separation. The combined use of gaze dwell-time selection and continuous eye-gaze feedback appeared to be the best condition, as it fits naturally with gaze displacement over the ATC display and frees the controller's hands, despite a small cost in selection speed. In addition, target size had a greater impact on accuracy and selection time than target distance. These findings provide guidelines for possible further implementation of eye tracking in everyday ATC activity.

    Influences of dwell time and cursor control on the performance in gaze driven typing

    In gaze-controlled computer interfaces, dwell time is often used as the selection criterion. This solution, however, comes with several problems, especially in the temporal domain: eye movement studies on scene perception have demonstrated that fixations of different durations serve different purposes and should therefore be differentiated. The use of dwell time for selection implies the need to distinguish intentional selections from merely perceptual processes, described as the Midas touch problem. Moreover, feedback of the user's actual eye position has not yet been studied systematically in the context of usability in gaze-based computer interaction. We present research on the usability of a simple eye-typing setup. Different dwell time and eye position feedback configurations were tested. Our results indicate that smoothing the raw eye position and introducing temporal delays in visual feedback enhance the system's functionality and usability. The best overall performance was obtained with a dwell time of 500 ms.
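
A dwell-time selector with smoothed gaze input, in the spirit of the 500 ms dwell time reported above, can be sketched as follows. All names and parameter values are illustrative assumptions, not the study's implementation.

```python
def dwell_select(gaze_samples, hit_test, dwell_ms=500, sample_ms=50, alpha=0.4):
    """Yield a key identifier whenever smoothed gaze dwells on it long enough.

    gaze_samples: iterable of raw (x, y) gaze points, one per sample_ms
    hit_test:     maps an (x, y) point to a key identifier, or None
    """
    sx = sy = None
    current, elapsed = None, 0
    for gx, gy in gaze_samples:
        if sx is None:
            sx, sy = float(gx), float(gy)
        else:
            sx += alpha * (gx - sx)     # smoothing damps gaze jitter so the
            sy += alpha * (gy - sy)     # dwell timer is not reset spuriously
        key = hit_test((sx, sy))
        if key is not None and key == current:
            elapsed += sample_ms
            if elapsed >= dwell_ms:     # sustained fixation counts as intent
                yield key
                elapsed = 0             # reset so the key repeats only after
        else:                           # another full dwell period
            current, elapsed = key, 0
```

Requiring a full dwell period of sustained fixation is precisely the guard against the Midas touch problem: brief perceptual glances at a key never reach the selection threshold.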