8 research outputs found

    On object selection in gaze controlled environments

    In the past twenty years, gaze control has become a reliable alternative input method, and not only for users with motor impairments. The selection of objects, however, which is among the most important and most frequent operations in computer control, requires explicit control that is not inherent in eye movements. Objects have therefore usually been selected via prolonged fixations (dwell times). For many years, dwell time seemed to be the only reliable selection method. In this paper, we review the pros and cons of classical selection methods and of novel metaphors based on pies and gestures. The focus is on the effectiveness and efficiency of selections. In order to estimate the potential of current suggestions for selection, a basic empirical comparison is recommended.
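
    To make the dwell-time idea concrete, here is a minimal sketch (mine, not taken from the paper; the threshold and names are illustrative assumptions) of how a dwell-time selector typically works: gaze samples accumulate time on whichever object they hit, and a selection fires once the accumulated dwell exceeds a threshold.

        # Minimal dwell-time selection sketch (illustrative; not from the paper).
        # Assumes a stream of (timestamp, x, y) gaze samples and a hit test that
        # maps a gaze point to the object under it, or None.

        DWELL_THRESHOLD = 0.6  # seconds; an assumed, typical dwell time

        class DwellSelector:
            def __init__(self, hit_test, threshold=DWELL_THRESHOLD):
                self.hit_test = hit_test    # callable: (x, y) -> object or None
                self.threshold = threshold
                self.current = None         # object currently fixated
                self.dwell_start = None     # when the current fixation began
                self.fired = False          # whether this fixation already selected

            def feed(self, t, x, y):
                """Feed one gaze sample; return a newly selected object or None."""
                target = self.hit_test(x, y)
                if target is not self.current:
                    # Gaze moved to a different object: restart the dwell clock.
                    self.current, self.dwell_start, self.fired = target, t, False
                    return None
                if (target is not None and not self.fired
                        and t - self.dwell_start >= self.threshold):
                    self.fired = True       # fire once per uninterrupted dwell
                    return target
                return None

    The Midas touch trade-off the abstracts in this list keep returning to lives in the threshold: shorter dwell selects faster but turns more inspection glances into unintended selections.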

    Voiceye: A Multimodal Inclusive Development Environment

    People with physical impairments who are unable to use traditional input devices (i.e., mouse and keyboard) are often excluded from technical professions (e.g., web development). Alternative input methods such as eye-gaze tracking and speech recognition have become more readily available in recent years, and both have been explored independently to support people with physical impairments in coding activities. This paper describes a novel multimodal application (“Voiceye”) that combines voice input, gaze interaction, and mechanical switches as an alternative approach for writing code. The system was evaluated with non-disabled participants who have coding experience (N=29) to assess the feasibility of the application for writing HTML and CSS code. Results showed that Voiceye was perceived positively and enabled successful completion of coding tasks. A follow-up study with disabled participants (N=5) demonstrated that this method of multimodal interaction can support people with physical impairments in writing and editing code.
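
    The abstract does not describe Voiceye's internals; purely as an illustration of the multimodal division of labour it names, the sketch below lets voice supply what to insert, gaze supply where, and a mechanical switch confirm the action. All names here (the event queue, editor.position_at, editor.insert, the command table) are hypothetical.

        # Rough sketch of the voice + gaze + switch pattern described above
        # (hypothetical API; Voiceye's actual architecture is not specified).
        import queue

        events = queue.Queue()  # fused event stream from the three channels

        def on_voice(command):   events.put(("voice", command))
        def on_gaze(position):   events.put(("gaze", position))
        def on_switch(pressed):  events.put(("switch", pressed))

        SNIPPETS = {"insert div": "<div></div>"}   # toy command table

        def expand(command):
            return SNIPPETS.get(command, command)

        def main_loop(editor):
            caret = None     # last gaze-derived caret position (WHERE)
            pending = None   # last voice command, awaiting confirmation (WHAT)
            while True:
                kind, value = events.get()
                if kind == "gaze":
                    caret = editor.position_at(value)      # gaze picks the spot
                elif kind == "voice":
                    pending = value                        # voice names the edit
                elif kind == "switch" and value and pending and caret is not None:
                    editor.insert(caret, expand(pending))  # switch commits it
                    pending = None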

    Magilock: a reliable control triggering method in multi-channel eye-control systems

    Eye-tracking technology offers a different human-computer interaction experience because it is intuitive, natural, and hands-free. Avoiding the Midas touch problem and improving interaction accuracy are among the main goals in the research and development of eye-control systems. This study reviews methods for avoiding the Midas touch problem and their limitations. For typical clicking operations with low fault tolerance, such as mode switching and state selection in an eye-control system, this study proposes Magilock, a more reliable control-triggering method with a high success rate for multi-channel eye-control systems. Magilock adds a control pre-locking mechanism between the two interaction steps of positioning a control with the eye-control channel and triggering it with another interaction channel. This effectively avoids incorrect control triggering caused by poor multi-channel coordination and gaze-point drift. The study also conducted ergonomic experiments to explore the lock and unlock times of the pre-locking mechanism. Taking into account the experimental data and the participants' subjective evaluations, we recommend setting both the lock time and the unlock time of Magilock to 200 ms.
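
    A minimal state machine captures the pre-lock idea as the abstract describes it (this is my reading, not the authors' code): gazing at a control for the lock time locks it, a trigger from another channel activates only the locked control, and looking away for the unlock time releases it.

        # Sketch of a Magilock-style pre-lock state machine (illustrative
        # reading of the abstract, not the authors' implementation). Both
        # timings follow the paper's recommended 200 ms.

        LOCK_TIME = 0.2    # seconds of continuous gaze needed to lock a control
        UNLOCK_TIME = 0.2  # seconds of gaze absence needed to unlock it again

        class PreLock:
            def __init__(self):
                self.locked = None      # the currently locked control, if any
                self.candidate = None   # control the gaze is currently on
                self.gaze_since = None  # when gaze entered the candidate
                self.away_since = None  # when gaze left the locked control

            def on_gaze(self, t, control):
                if self.locked is not None:
                    if control is self.locked:
                        self.away_since = None              # still on it
                    elif self.away_since is None:
                        self.away_since = t                 # gaze just left
                    elif t - self.away_since >= UNLOCK_TIME:
                        self.locked = None                  # unlocked by absence
                if self.locked is None:
                    if control is not self.candidate:
                        self.candidate, self.gaze_since = control, t
                    elif control is not None and t - self.gaze_since >= LOCK_TIME:
                        self.locked = control               # locked by dwell

            def on_trigger(self):
                """Called by the other channel (key press, switch, etc.)."""
                return self.locked   # only a pre-locked control can be triggered

    The point of the extra state is that a stray trigger while nothing is locked does nothing, and gaze-point drift at the moment of triggering cannot re-aim the selection.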

    Gaze Controlled Applications and Optical-See-Through Displays - General Conditions for Gaze Driven Companion Technologies

    Gaze-based human-computer interaction has been a research topic for over a quarter of a century. Since then, the main scenario for gaze interaction has been helping people with motor impairments to communicate and interact with their environment. With the rapid development of mobile and wearable display technologies, a new application field for gaze interaction has appeared, opening new research questions. This thesis investigates the feasibility of mobile gaze-based interaction, studying in depth the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual issues on head-mounted (wearable) optical see-through displays. It reviews conventional gaze-based selection methods and investigates in detail the use of pie menus for gaze control, studying and discussing layout issues, selection methods, and applications. Results show that pie menus can allocate up to six items in width and multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. Based on these results, several text entry methods built on pie menus are proposed: character-by-character text entry, text entry with bigrams, and text entry with bigrams derived from word prediction. These systems, together with possible selection methods, were examined in a longitudinal study. The data showed large advantages of the bigram entry methods over single-character text entry in both speed and accuracy. Participants preferred the novel saccade-based selection method (selecting by crossing borders) over the conventional and well-established dwell-time method. On the one hand, pie menus proved to be a feasible and robust widget that may enable the efficient use of mobile eye-tracking systems that are not accurate enough for controlling elements of a conventional interface. On the other hand, visual perception on mobile display technologies needs to be examined in order to determine whether these results transfer to mobile devices. Optical see-through devices enable observers to see additional information embedded in real environments, and there is already some evidence of increased visual load on such systems. We investigated visual performance with visual search tasks and dual tasks, presenting visual stimuli on the optical see-through device alone, on a computer screen alone, and simultaneously on both devices. Results showed that switching between the presentation devices (i.e., perceiving information simultaneously from both devices) produced costs in visual performance. The implications of these costs and of further perceptual and technical factors for mobile gaze-based interaction are discussed, and solutions are proposed.
    Gaze is the fastest motor movement and is controlled in part involuntarily, so it also reflects attentional processes: gaze reveals something about the intentions and motives of the user beyond serving as an input channel. Exploiting this for computer control can make gaze input extremely simple and efficient, not only for motor-impaired but also for able-bodied users; in this way, for example, ALS patients can write texts, steer wheelchairs, and communicate needs through their eye movements alone. The goal of AR output devices is to enrich the real environment with virtual information; the idea is that the virtual information blends into the real environment, i.e., observers integrate the virtual and the real information into a single image. From a psychological perspective, it is plausible that information in spatio-temporal proximity is merged; a complete integration, however, can only occur if the presentation can be perceived as unified. Two fundamental points argue against this: the self-luminance of the virtual information, and its size and distance cues. The self-luminance of information displayed by an AR device is problematic because this feature is hardly present in real objects. Complete integration of information from the AR device with other information should therefore at most be possible when the real information consists of stimuli on a computer monitor, which are also self-luminous; for other real objects, the luminance of the displayed information alone should constitute a substantial distinguishing feature. Another important distinguishing feature is size, which contributes substantially to distance estimation: in the real world, objects receding from the observer are projected onto increasingly smaller retinal areas, so objects of equal size must be rendered smaller with increasing viewing distance. With AR technology, however, objects are projected at a constant retinal size, like an afterimage, regardless of where in depth they are currently localized. Since these objects are usually perceived on or in front of the nearest background, the size information of virtual objects cannot be used for distance perception; it can even produce contradictory depth cues when the distance to a background increases while the stimuli occupy the same retinal areas and are therefore perceived as growing larger with distance. For this thesis, three experimental set-ups were developed, each investigating in detail specific aspects of the simultaneous perception of information on an AR device and in the real environment. The results showed that perceiving information from real and virtual media simultaneously comes with costs in visual performance. Further experiments showed that with simultaneous presentation of virtual and real stimuli, the visual system must constantly readjust vergence and accommodation, which could explain the visual strain observed in numerous studies.
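
    The border-based ("selecting by borders") method preferred by participants above lends itself to a small geometric sketch. This is my illustration of the general idea, not code from the thesis; the radius and item labels are assumed values. An item is selected the moment the gaze crosses the pie's outer border inside that item's slice, so no dwelling is required.

        # Illustrative sketch of saccade/border-based pie-menu selection: an
        # item is chosen when gaze crosses the pie's outer radius within that
        # item's slice. Geometry only; parameters are assumptions.
        import math

        class PieMenu:
            def __init__(self, cx, cy, radius, items):   # e.g. up to six items
                self.cx, self.cy, self.radius = cx, cy, radius
                self.items = items
                self.was_inside = False

            def slice_at(self, x, y):
                angle = math.atan2(y - self.cy, x - self.cx) % (2 * math.pi)
                width = 2 * math.pi / len(self.items)
                return self.items[int(angle // width)]

            def feed(self, x, y):
                """Feed a gaze sample; return the item selected on crossing."""
                inside = math.hypot(x - self.cx, y - self.cy) <= self.radius
                crossed_out = self.was_inside and not inside
                self.was_inside = inside
                if crossed_out:
                    return self.slice_at(x, y)   # select where the gaze exited
                return None

    Because only the crossing event matters, positional jitter inside the pie has no effect, which is consistent with the robustness to inaccurate tracking reported above.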

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers opportunities for people with impaired motor ability to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment and also provide opportunities for social interaction in multi-player environments. Games are also used increasingly in education to motivate and engage young people. It is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research, conducted over a 20-year period starting in the early 1990s, that has investigated interaction techniques based on gaze position intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction. Of these limitations, overcoming the inherent inaccuracy of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and on-line virtual worlds, and different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase then studied the applicability of the research findings thus far to groups of people with motor impairments, and in particular the means of adapting the interaction techniques to individual abilities. In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built to provide an effective means of completing tasks in specific types of games, and how these techniques can be adapted to the differing abilities of individuals with motor impairments.
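
    The gaze gestures mentioned for the second phase can be illustrated with a common stroke-quantization approach (a sketch under assumed thresholds and a toy command table, not the thesis's algorithm): successive gaze displacements are quantized into compass strokes, and the resulting stroke string is matched against a table of commands.

        # Minimal gaze-gesture sketch (illustrative; not the thesis's method):
        # quantize gaze displacements into the four compass strokes and match
        # the stroke string against a command table.
        import math

        MIN_STROKE = 80  # pixels; smaller displacements are treated as jitter

        GESTURES = {"RD": "open_menu", "LU": "go_back"}  # hypothetical commands

        def to_stroke(dx, dy):
            if math.hypot(dx, dy) < MIN_STROKE:
                return None                     # below threshold: ignore
            if abs(dx) >= abs(dy):
                return "R" if dx > 0 else "L"
            return "D" if dy > 0 else "U"       # screen y grows downward

        def recognize(points):
            """points: list of (x, y) gaze samples; return a command or None."""
            strokes, last = [], points[0]
            for x, y in points[1:]:
                s = to_stroke(x - last[0], y - last[1])
                if s is not None:
                    if not strokes or strokes[-1] != s:
                        strokes.append(s)       # new stroke direction
                    last = (x, y)               # anchor at the stroke's end
            return GESTURES.get("".join(strokes))

    For example, recognize([(0, 0), (100, 5), (105, 110)]) quantizes to the strokes "R" then "D" and returns "open_menu" under the toy table above.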

    The development and evaluation of gaze selection techniques

    Eye gaze interaction enables users to interact with computers using their eyes. A wide variety of eye gaze interaction techniques have been developed to support this type of interaction. Gaze selection techniques, the class of eye gaze interaction techniques that support target selection, are the subject of this research. Researchers developing these techniques face a number of challenges, the most significant being the limited accuracy of eye-tracking equipment (due to the properties of the human eye). The design of gaze selection techniques is dominated by this constraint; despite decades of research, existing techniques are still significantly less accurate than the mouse. A recently developed technique, EyePoint, represents the state of the art in gaze selection techniques. EyePoint combines gaze input with keyboard input. Evaluation results for this technique are encouraging, but accuracy is still a concern. Early trigger errors, which result from users triggering a selection before looking at the intended target, were found to be the most commonly occurring errors for this technique. The primary goal of this research was to improve the usability of gaze selection techniques. To achieve this goal, novel gaze selection techniques were developed by combining elements of existing techniques in novel ways. Seven novel gaze selection techniques were developed, and three of these were selected for evaluation. A software framework was developed for implementing and evaluating gaze selection techniques; implementing and evaluating all of the techniques within a common framework ensured consistency when comparing them. The three novel techniques evaluated were named TargetPoint, StaggerPoint and ScanPoint, and were evaluated against EyePoint and the mouse using the framework. TargetPoint combines motor space expansion with a visual feedback highlight, whereas the StaggerPoint and ScanPoint designs explore novel approaches to target selection disambiguation. A usability evaluation of the three novel techniques alongside EyePoint and the mouse revealed some interesting trends. TargetPoint was found to be more usable and accurate than EyePoint, and also proved more popular with test participants. One aspect of TargetPoint that proved particularly popular was the visual feedback highlight, which was found to be a more effective method of combating early trigger errors than existing approaches. StaggerPoint was more efficient than EyePoint, but was less effective and satisfying. ScanPoint was the least popular technique. The benefits of providing a visual feedback highlight, and test participants' positive views thereof, contradict views expressed in existing research regarding the usability of visual feedback. These results have implications for the design of future gaze selection techniques. A set of design principles was developed for designing new gaze selection techniques; designers can benefit from these principles by applying them to their own techniques.
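
    The early-trigger result suggests a concrete sketch (mine, not the thesis's implementation) of why a visual feedback highlight helps: the highlight always shows what a trigger would select, and a short grace window lets a trigger issued just before the eyes land still resolve to the next fixated target. The grace value and callback names are assumptions.

        # Sketch of a highlight-plus-trigger selection loop (illustrative;
        # TargetPoint's actual implementation is not reproduced here).
        GRACE = 0.15  # seconds; an assumed value, not from the thesis

        class HighlightSelector:
            def __init__(self, hit_test, highlight):
                self.hit_test = hit_test    # (x, y) -> widget or None
                self.highlight = highlight  # callback drawing the feedback
                self.target = None
                self.trigger_at = None      # pending early trigger, if any

            def on_gaze(self, t, x, y):
                if self.trigger_at is not None and t - self.trigger_at > GRACE:
                    self.trigger_at = None  # stale trigger: discard it
                self.target = self.hit_test(x, y)
                self.highlight(self.target)   # show what would be selected
                if self.trigger_at is not None and self.target is not None:
                    self.trigger_at = None
                    return self.target        # early trigger, resolved late
                return None

            def on_trigger(self, t):
                if self.target is not None:
                    return self.target        # normal case: take the highlight
                self.trigger_at = t           # early trigger: wait for gaze
                return None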

    Noise Challenges in Monomodal Gaze Interaction

    Modern graphical user interfaces (GUIs) are designed with able-bodied users in mind. Operating these interfaces can be impossible for users who are unable to control a conventional mouse and keyboard. An eye-tracking system offers possibilities for independent use and improved quality of life via dedicated interface tools tailored to the users' needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put into the robustness, accuracy and precision of modern eye-tracking systems, and many are available on the market. Yet even though gaze-tracking technologies have undergone dramatic improvements over the past years, the systems are still very imprecise. This thesis deals with current challenges of monomodal gaze interaction and aims at improving access to technology and interface control for users who are limited to the eyes only. Low-cost eye-tracking equipment improves affordability, but potentially at the cost of introducing more noise into the system due to the lower quality of the hardware. This implies that methods of dealing with noise, and creative approaches to getting the best out of the data stream, are much needed. The work in this thesis presents three contributions that may advance the use of low-cost monomodal gaze tracking and research in the field:
    - An assessment of a low-cost open-source gaze tracker and two eye-tracking systems through an accuracy and precision test and a performance evaluation.
    - The development and evaluation of a novel 3D typing system with high tolerance to noise, based on continuous panning and zooming.
    - The development and evaluation of novel selection tools that compensate for noisy input during small-target selections in modern GUIs.
    This thesis may be of particular interest to those working on the use of eye trackers for gaze interaction and on how to deal with reduced data quality. The work is accompanied by several software applications developed for the research projects, which can be freely downloaded from the eyeInteract appstore.
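
    As an illustration of the kind of noise handling low-cost trackers call for (a generic sketch, not one of the thesis's contributions; both constants are assumptions), an exponential moving average stabilizes fixation jitter while a jump threshold keeps real saccades responsive:

        # Generic gaze-smoothing sketch (illustrative, not the thesis's method):
        # an exponential moving average averages out fixation jitter, and a
        # jump threshold resets the filter so real saccades are not smeared.
        import math

        ALPHA = 0.15        # smoothing factor: lower = steadier, laggier cursor
        SACCADE_PX = 120    # jumps larger than this are treated as saccades

        class GazeSmoother:
            def __init__(self):
                self.x = self.y = None

            def feed(self, x, y):
                if self.x is None or math.hypot(x - self.x, y - self.y) > SACCADE_PX:
                    self.x, self.y = x, y            # follow saccades immediately
                else:
                    self.x += ALPHA * (x - self.x)   # average out fixation jitter
                    self.y += ALPHA * (y - self.y)
                return self.x, self.y

    Lowering ALPHA steadies the cursor during small-target selections at the price of lag; the selection tools described above address the same noise problem at the interface level instead.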