1,276 research outputs found

    Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

    Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, and backspace usage. However, in comparison to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input could lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which implement variable designs in terms of word suggestion positioning. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-Time Fourier Transform) based analysis of EEG signals indicates variances in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the user's cognition variation for different typing phases and intervals, which should be considered in order to improve eye typing usability. Comment: 6 pages, 4 figures, IEEE CBMS 201
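    The STFT-based EEG analysis mentioned above can be illustrated with a minimal sketch. The code below is not the paper's pipeline: the sampling rate, the frequency bands, and the theta/alpha workload index are illustrative assumptions, and the variable eeg stands in for a single pre-processed EEG channel.

```python
# Minimal sketch of STFT-based band-power estimation from one EEG channel.
# Band limits, window length, and the workload index are illustrative only.
import numpy as np
from scipy.signal import stft

def band_power_over_time(eeg, fs, band=(4.0, 8.0), window_s=2.0):
    """Return per-window power in a frequency band (e.g. theta) via the STFT."""
    freqs, times, Z = stft(eeg, fs=fs, nperseg=int(window_s * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return times, np.mean(np.abs(Z[mask, :]) ** 2, axis=0)

fs = 256                                   # assumed sampling rate in Hz
eeg = np.random.randn(fs * 60)             # 60 s of synthetic data as a stand-in
t, theta = band_power_over_time(eeg, fs, band=(4, 8))
_, alpha = band_power_over_time(eeg, fs, band=(8, 13))
workload_index = theta / alpha             # one common proxy for mental workload
```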

    Understanding Adoption Barriers to Dwell-Free Eye-Typing: Design Implications from a Qualitative Deployment Study and Computational Simulations

    Eye-typing is a slow and cumbersome text entry method typically used by individuals with no other practical means of communication. As an alternative, prior HCI research has proposed dwell-free eye-typing as a potential improvement that eliminates time-consuming and distracting dwell-timeouts. However, it is rare that such research ideas are translated into working products. This paper reports on a qualitative deployment study of a product that was developed to allow users access to a dwell-free eye-typing research solution. This allowed us to understand how such a research solution would work in practice, as part of users' current communication solutions in their own homes. Based on interviews and observations, we discuss a number of design issues that currently act as barriers preventing widespread adoption of dwell-free eye-typing. The study findings are complemented with computational simulations in a range of conditions that were inspired by the findings in the deployment study. These simulations serve both to contextualize the qualitative findings and to explore quantitative implications of possible interface redesigns. The combined analysis gives rise to a set of design implications for enabling wider adoption of dwell-free eye-typing in practice.
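    The paper's computational simulations are not specified in the abstract. As a rough illustration of why removing the dwell-timeout matters for entry rate, the toy model below compares per-character times with and without a dwell; all timing parameters are invented for illustration and are not the paper's values.

```python
# Toy comparison of dwell-based vs dwell-free per-character typing rates.
# saccade_s and dwell_s are assumed, illustrative timings (seconds).
def chars_per_minute(saccade_s=0.25, dwell_s=0.6, dwell_free=False):
    per_char = saccade_s + (0.0 if dwell_free else dwell_s)
    return 60.0 / per_char

for dwell in (0.4, 0.6, 0.8):
    print(f"dwell {dwell:.1f}s: {chars_per_minute(dwell_s=dwell):.1f} cpm, "
          f"dwell-free: {chars_per_minute(dwell_free=True):.1f} cpm")
```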

    Dwell-free input methods for people with motor impairments

    Millions of individuals affected by disorders or injuries that cause severe motor impairments have difficulty performing compound manipulations using traditional input devices. This thesis first explores how effective various assistive technologies are for people with motor impairments. The following questions are studied: (1) What activities are performed? (2) What tools are used to support these activities? (3) What are the advantages and limitations of these tools? (4) How do users learn about and choose assistive technologies? (5) Why do users adopt or abandon certain tools? A qualitative study of fifteen people with motor impairments indicates that users have strong needs for efficient text entry and communication tools that are not met by existing technologies. To address these needs, this thesis proposes three dwell-free input methods, designed to improve the efficacy of target selection and text entry based on eye-tracking and head-tracking systems. They yield: (1) the Target Reverse Crossing selection mechanism, (2) the EyeSwipe eye-typing interface, and (3) the HGaze Typing interface. With Target Reverse Crossing, a user moves the cursor into a target and reverses over a goal to select it. This mechanism is significantly more efficient than dwell-time selection. Target Reverse Crossing is then adapted in EyeSwipe to delineate the start and end of a word that is eye-typed with a gaze path connecting the intermediate characters (as with traditional gesture typing). When compared with a dwell-based virtual keyboard, EyeSwipe affords higher text entry rates and a more comfortable interaction. Finally, HGaze Typing adds head gestures to gaze-path-based text entry to enable simple and explicit command activations. Results from a user study demonstrate that HGaze Typing has better performance and user satisfaction than a dwell-time method.
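    As a rough illustration of the Target Reverse Crossing idea described above, the sketch below fires a selection when the gaze cursor enters a target's bounding box and then leaves it again. The rectangular geometry and the single-target state machine are simplifying assumptions, not the thesis's implementation.

```python
# Simplified reverse-crossing selection: entering a target arms it, and
# crossing back out of it triggers the selection (no dwell time involved).
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    w: float
    h: float
    armed: bool = False

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def update(self, px: float, py: float) -> bool:
        """Feed successive cursor samples; return True on a reverse crossing."""
        inside = self.contains(px, py)
        if inside and not self.armed:
            self.armed = True        # cursor crossed into the target
            return False
        if not inside and self.armed:
            self.armed = False
            return True              # cursor reversed back out: select
        return False

key = Target(x=100, y=100, w=80, h=80)
samples = [(50, 140), (120, 140), (160, 140), (90, 140)]   # in, then back out
print([key.update(px, py) for px, py in samples])          # [False, False, False, True]
```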

    Exploring Human Computer Interaction and its Implications on Modeling for Individuals with Disabilities

    Computers provide an interface to the world for many individuals with disabilities, and without effective computer access, quality of life may be severely diminished. As a result of this dependence, optimal human computer interaction (HCI) between a user and their computer is of paramount importance. Optimal HCI for individuals with disabilities relies on both the existence of products which provide the desired functionality and the selection of appropriate products and training methods for a given individual. From a product availability standpoint, optimal HCI often depends on modeling techniques used during the development process to evaluate a design, assess usability, and predict performance. Computer access evaluations are often too brief in duration and depend on the products present at the site of the evaluation. Models could assist clinicians in dealing with the problems of limited time with clients, limited products for the client to trial, and the seemingly unlimited system configurations available with many potential solutions. Current HCI modeling techniques have been developed and applied to the performance of able-bodied individuals; research concerning modeling performance for individuals with disabilities has been limited. This study explores HCI as it applies to both able-bodied individuals and individuals with disabilities. Eleven participants (5 able-bodied / 6 with disabilities) were recruited and asked to transcribe sentences presented by a text entry interface supporting word prediction with the use of an on-screen keyboard, while time-stamped keystroke and eye fixation data were collected. The data were examined to identify sequences of behavior, performance changes based on experience, and performance differences between able-bodied participants and participants with disabilities. The feasibility of creating models based on the collected data was explored. A modeling technique must support selection from multiple sequences of behavior to perform a particular type of action and variation in execution time for primitive operations, in addition to handling errors. The primary contribution of this study is the knowledge gained relative to the design of the test bench and experimental protocol.
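    The test bench described above logs time-stamped keystrokes alongside eye fixations. Below is a minimal sketch of how two such streams might be merged into one ordered event sequence for later behavior analysis; the field names and area-of-interest labels are assumptions, not the study's actual schema.

```python
# Merge time-stamped keystroke and fixation logs into a single ordered stream,
# so sequences of behavior (e.g. look at the prediction list, then select) can
# be extracted. Field names and labels are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    t: float      # timestamp in seconds
    kind: str     # "key" or "fixation"
    value: str    # key pressed, or area of interest the fixation landed on

def merge_streams(keys: List[Event], fixations: List[Event]) -> List[Event]:
    return sorted(keys + fixations, key=lambda e: e.t)

log = merge_streams(
    [Event(1.20, "key", "h"), Event(2.10, "key", "e")],
    [Event(1.05, "fixation", "keyboard"), Event(1.70, "fixation", "prediction_list")],
)
print([(e.t, e.kind, e.value) for e in log])
```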

    Eye typing in application: A comparison of two systems with ALS patients

    A variety of eye typing systems has been developed during the last decades. Such systems can provide support for people who have lost the ability to communicate, e.g. patients suffering from motor neuron diseases such as amyotrophic lateral sclerosis (ALS). In the current retrospective analysis, two eye typing applications (EyeGaze, GazeTalk) were tested by ALS patients (N = 4) in order to analyze objective performance measures and subjective ratings. An advantage of the EyeGaze system was found for most of the evaluated criteria. The results are discussed with respect to the special target population and in relation to the requirements of eye tracking devices.
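    The objective performance measures are not enumerated in the abstract; two standard eye-typing metrics that such comparisons typically report are sketched below, assuming the usual five-characters-per-word convention. The formulas are illustrative, not the study's exact definitions.

```python
# Two standard text-entry measures (illustrative formulas).
def words_per_minute(transcribed: str, seconds: float) -> float:
    """WPM with the conventional 5-characters-per-word definition."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def keystrokes_per_character(keystrokes: int, transcribed: str) -> float:
    """KSPC: how many key activations each output character cost on average."""
    return keystrokes / max(len(transcribed), 1)

print(words_per_minute("hello world", 60.0))        # 2.2 WPM
print(keystrokes_per_character(15, "hello world"))  # ~1.36 KSPC
```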

    Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry

    In the natural course of things, human beings usually make use of multi-sensory modalities for effective communication and for efficiently executing day-to-day tasks. For instance, during verbal conversations we make use of voice, eyes, and various body gestures. Effective human-computer interaction likewise involves hands, eyes, and voice, if available. Therefore, by combining multi-sensory modalities, we can make the whole process more natural and ensure enhanced performance even for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye-tracker with a soft-switch, which may be considered as typically representing another modality. This multi-modal HCI is applied to text entry using a virtual keyboard appropriately designed in-house, facilitating enhanced performance. Our experimental results demonstrate that using multiple modalities for text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and also solves the Midas-touch problem, which is inherent in an eye-tracker based HCI system where only dwell time is used for selecting a character.
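    A minimal sketch of the two-modality selection idea follows: gaze indicates which key is being looked at, and a soft-switch press confirms it, so looking alone never triggers a selection. The function and parameter names are illustrative, not the interface's actual API.

```python
# Gaze chooses the candidate key; the soft-switch confirms it. Looking alone
# (or pressing alone) never types anything, avoiding the Midas-touch problem.
from typing import Optional

def select_key(gazed_key: Optional[str], switch_pressed: bool) -> Optional[str]:
    if gazed_key is not None and switch_pressed:
        return gazed_key
    return None

assert select_key("A", switch_pressed=True) == "A"    # gaze + switch: select
assert select_key("A", switch_pressed=False) is None  # gaze only: no selection
assert select_key(None, switch_pressed=True) is None  # switch only: no selection
```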

    HGaze Typing: head-gesture assisted gaze typing

    This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of head gestures with the speed of gaze inputs to provide efficient and comfortable dwell-free text entry. HGaze Typing uses gaze path information to compute candidate words and allows explicit activation of common text entry commands, such as selection, deletion, and revision, by using head gestures (nodding, shaking, and tilting). By adding a head-based input channel, HGaze Typing reduces the size of the screen regions for cancel/deletion buttons and the word candidate list, which are required by most eye-typing interfaces. A user study finds HGaze Typing outperforms a dwell-time-based keyboard in efficacy and user satisfaction. The results demonstrate that the proposed method of integrating gaze and head-movement inputs can serve as an effective interface for text entry and is robust to unintended selections. Published version: https://dl.acm.org/doi/pdf/10.1145/3448017.3457379
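    As an illustration of the head-gesture channel, the sketch below detects a nod (a quick downward-then-upward pitch excursion) from a stream of head-pitch samples. The thresholds and the pitch-only simplification are assumptions; HGaze Typing's actual recognizer is not described in the abstract.

```python
# Detect a "nod" confirmation gesture from head-pitch samples (degrees).
# Thresholds are illustrative; a real recognizer would also bound the duration.
from typing import Iterable

def detect_nod(pitch_deg: Iterable[float], down_thresh=-10.0, up_thresh=-2.0) -> bool:
    dipped = False
    for p in pitch_deg:
        if p < down_thresh:
            dipped = True                 # head pitched clearly downward
        elif dipped and p > up_thresh:
            return True                   # and came back up: count it as a nod
    return False

print(detect_nod([0, -3, -12, -14, -6, 0]))  # True
print(detect_nod([0, -1, -2, -1, 0]))        # False
```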

    Gaze Controlled Applications and Optical-See-Through Displays - General Conditions for Gaze Driven Companion Technologies

    Gaze-based human-computer interaction has been a research topic for over a quarter century. Since then, the main scenario for gaze interaction has been helping handicapped people to communicate and interact with their environment. With the rapid development of mobile and wearable display technologies, a new application field for gaze interaction has appeared, opening new research questions. This thesis investigates the feasibility of mobile gaze-based interaction, studying in depth the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual issues on head-mounted (wearable) optical see-through displays. It reviews conventional gaze-based selection methods and investigates in detail the use of pie menus for gaze control, studying and discussing layout issues, selection methods, and applications. Results show that pie menus can accommodate up to six items in width and multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. Based on these results, several text entry methods based on pie menus are proposed. Character-by-character text entry, text entry with bigrams, and text entry with bigrams derived from word prediction, as well as possible selection methods, are examined in a longitudinal study. The data showed large advantages of the bigram entry methods over single-character text entry in speed and accuracy. Participants preferred the novel selection method based on saccades (selecting by borders) over the conventional and well-established dwell-time method. On the one hand, pie menus proved to be a feasible and robust widget that may enable the efficient use of mobile eye tracking systems that may not be accurate enough for controlling elements of a conventional interface. On the other hand, visual perception on mobile display technologies needs to be examined in order to determine whether these results can be transferred to mobile devices. Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increased visual load on such systems. We investigated participants' visual performance with visual search tasks and dual tasks, presenting visual stimuli on the optical see-through device, only on a computer screen, and simultaneously on both devices. Results showed that switching between the presentation devices (i.e. perceiving information simultaneously from both devices) produced costs in visual performance. The implications of these costs and of further perceptual and technical factors for mobile gaze-based interaction are discussed and solutions are proposed. Gaze-based human-computer interaction has been a relevant research topic for a quarter century. The predominant use of gaze control has been restricted to enabling people with disabilities to communicate: ALS patients, for example, can write texts, steer wheelchairs, and express needs using only their eye movements. The rapid development of mobile devices and wearable display technologies has opened up a new field of application and, with it, new research questions. Within this dissertation, fundamental gaze-based interaction techniques were developed and investigated that exploit the full potential of eye movements as an input modality.
Gaze control is characterized by the fact that eye movements are the fastest motor movements and are largely involuntary; gaze thus reflects attentional processes. The gaze can therefore serve not only as a means of input, but it also reveals something about the user's intentions and motives. Exploiting this for computer control can make gaze-based input extremely simple and efficient, not only for motor-impaired users but also for able-bodied ones. This thesis explores the feasibility of mobile gaze control. It examines in detail the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual aspects of using mobile optical see-through displays. The work summarizes conventional gaze-based interaction methods and examines in detail the use of pie menus for gaze control, exploring and discussing layout issues, selection methods, and applications of pie menus. The results show that pie menus can accommodate up to six items in width and several depth layers, so that fast and precise navigation through the hierarchical levels is ensured; by using or combining several selection methods, efficient and effective interaction can be guaranteed. Building on these results, several pie-menu-based text entry systems were developed, based on the entry of single characters, bigrams, and predicted words. These systems, together with two gaze-based selection methods, were evaluated. The results show significant differences in text entry speed and accuracy in favor of the bigram-based system compared with character-by-character entry. Participants preferred the new saccade-based selection method over the conventional and well-established dwell-time method. Pie menus proved to be a practical and robust widget that can enable the efficient use of mobile eye tracking systems and displays even when their accuracy is low. Nevertheless, visual perception on mobile optical see-through displays has to be examined in order to ensure that these findings transfer to such devices. The goal of AR output devices is to enrich the real environment with virtual information. The idea is that the virtual information blends into the real environment, i.e. that observers integrate the virtual and the real information into a single image. From a psychological perspective it is plausible, on the one hand, that information in close spatio-temporal proximity is merged; on the other hand, complete integration can only occur if the presentation can be perceived as uniform. Two fundamental points speak against this: first, the self-luminance of the virtual information, and second, its size and distance cues. The self-luminance of the information displayed by the AR device is problematic because this property is hardly ever present in real objects.
Accordingly, complete integration of the information from the AR device with other information should at best be possible when the real information consists of stimuli on a computer monitor, which are likewise self-luminous. For other real objects, the luminance of the overlaid information alone should constitute an essential distinguishing feature. A further important distinguishing feature is size information, which contributes substantially to distance estimation: in the real world, objects that move away from the observer are projected onto increasingly smaller retinal areas, so objects of equal size must appear smaller with increasing viewing distance. With AR technology, however, objects are projected at a constant retinal size, as with an afterimage, regardless of where in depth they are currently localized. Since these objects are usually perceived on or in front of the nearest background, the size information of virtual objects cannot be used for distance perception; it can even produce contradictory depth cues when the distance to a background increases while the stimuli occupy the same retinal areas and are therefore perceived as larger with growing distance. For this thesis, three experimental setups were developed, each of which examined in detail particular aspects of the simultaneous perception of information on an AR device and in the real environment. The results showed that simultaneously perceiving information from real and virtual media comes at a cost in visual performance. Further investigations showed that, when virtual and real stimuli are presented at the same time, the visual system must constantly readjust vergence and accommodation, which could explain the visual strain observed in numerous studies. The implications of these switching costs and of perceptual and technical factors for mobile gaze-based interaction are discussed and solutions are proposed.
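As a rough illustration of the gaze-controlled pie menu and the saccade-based "selecting by borders" method discussed above, the sketch below maps a gaze sample to one of six slices and treats a crossing of the menu's outer border as the selection event instead of a dwell timeout. The screen coordinates, radius, and six-slice layout are illustrative assumptions, not the thesis's implementation.

```python
# Map a gaze point to one of six pie-menu slices; select by crossing the
# outer border (saccade-based selection) rather than by dwell time.
# Center, radius, and slice count are illustrative.
import math

def slice_index(gx, gy, cx, cy, n_slices=6):
    """Which of n_slices slices the gaze point falls in, measured from the positive x-axis."""
    angle = math.atan2(gy - cy, gx - cx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_slices))

def crossed_border(gx, gy, cx, cy, outer_radius=200.0):
    """True once the gaze has moved beyond the menu's outer border."""
    return math.hypot(gx - cx, gy - cy) > outer_radius

cx, cy = 400, 300                       # assumed menu center in pixels
gx, gy = 650, 300                       # gaze sample beyond the border, to the right
if crossed_border(gx, gy, cx, cy):
    print("selected slice", slice_index(gx, gy, cx, cy))   # selected slice 0
```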