
    On object selection in gaze controlled environments

    Over the past twenty years, gaze control has become a reliable alternative input method, and not only for users with disabilities. The selection of objects, however, which is among the most important and most frequent operations in computer control, requires explicit control that is not inherent in eye movements. Objects have therefore usually been selected via prolonged fixations (dwell times), which for many years appeared to be the only reliable selection method. In this paper, we review the pros and cons of classical selection methods and of novel metaphors based on pies and gestures. The focus is on the effectiveness and efficiency of selections. In order to estimate the potential of current suggestions for selection, a basic empirical comparison is recommended.
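
    The dwell-time principle described above reduces, in code, to a timer that restarts whenever the gaze leaves the current element and fires once the gaze has rested on one element long enough. The following Python sketch is purely illustrative; the 0.8 s threshold and the DwellSelector name are assumptions, not taken from the paper.

        import time

        DWELL_THRESHOLD = 0.8  # seconds of continuous fixation required to select (assumed value)

        class DwellSelector:
            """Tracks how long the gaze has rested on one element and fires a selection."""

            def __init__(self, threshold=DWELL_THRESHOLD):
                self.threshold = threshold
                self.target = None
                self.start = None

            def update(self, target_under_gaze):
                """Call once per gaze sample; returns the selected element or None."""
                now = time.monotonic()
                if target_under_gaze != self.target:
                    # Gaze moved to a different element: restart the dwell timer.
                    self.target, self.start = target_under_gaze, now
                    return None
                if self.target is not None and now - self.start >= self.threshold:
                    selected = self.target
                    self.target, self.start = None, None  # avoid repeated triggering
                    return selected
                return None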

    High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods

    This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can negatively influence gaze estimation methods when building a commercial or off-the-shelf eye tracker device, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye feature distribution when the eye-camera is far from the eye’s optical axis. This paper proposes geometric transformation methods to reshape the eye feature distribution based on a virtual alignment of the eye-camera with the center of the eye’s optical axis. The data analysis uses eye-tracking data from a simulated environment and from an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimations in the high-accuracy range.
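
    For context on the interpolation-based methods the abstract refers to, a common polynomial baseline fits a second-order regression from calibration data, mapping eye-feature coordinates to screen coordinates by least squares. The sketch below is a generic illustration of that baseline, not the transformation method proposed in the paper; all names and the feature set are placeholders.

        import numpy as np

        def poly_features(ex, ey):
            # Second-order terms of the eye-feature coordinates (ex, ey).
            return np.column_stack([np.ones_like(ex), ex, ey, ex * ey, ex**2, ey**2])

        def fit_gaze_mapping(eye_xy, screen_xy):
            """Least-squares fit of screen coordinates as polynomials of eye features.

            eye_xy: (N, 2) eye-feature points from calibration; screen_xy: (N, 2) targets."""
            A = poly_features(eye_xy[:, 0], eye_xy[:, 1])
            coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
            return coeffs  # shape (6, 2): one column per screen axis

        def estimate_gaze(coeffs, eye_xy):
            """Map new eye-feature points to estimated screen coordinates."""
            return poly_features(eye_xy[:, 0], eye_xy[:, 1]) @ coeffs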

    Idiosyncratic Feature-Based Gaze Mapping

    It is argued that the polynomial expressions normally used for remote, video-based, low-cost eye tracking systems are not always ideal for accommodating individual differences in eye cleft, position of the eye in the socket, corneal bulge, astigmatism, etc. A procedure is proposed to identify the set of polynomial expressions that provides the best possible accuracy for a specific individual. It is also proposed that regression coefficients be recalculated in real time, based on a subset of calibration points in the region of the current gaze, and that a real-time correction be applied based on the offsets from calibration targets that are close to the estimated point of regard. It was found that if no correction is applied, the choice of polynomial is critically important to obtain an accuracy that is just acceptable. Previously identified polynomial sets were confirmed to provide good results in the absence of any correction procedure. By applying real-time correction, the accuracy of any given polynomial improves while the choice of polynomial becomes less critical. Identifying the best polynomial set and correction technique per participant, in combination with the aforementioned correction techniques, led to an average error of 0.32° (sd = 0.10°) over 134 participant recordings. The proposed improvements could lead to low-cost systems that are accurate and fast enough for reading research or other studies where high accuracy is expected at frame rates in excess of 200 Hz.
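
    The proposed real-time correction can be pictured as shifting each estimated point of regard by the residual offsets recorded at nearby calibration targets. The sketch below illustrates the idea with inverse-distance weighting over the k closest targets; the weighting scheme and all names are assumptions for illustration, not the authors' exact procedure.

        import numpy as np

        def correct_por(por, calib_targets, calib_offsets, k=4, eps=1e-6):
            """Shift an estimated point of regard by the locally interpolated calibration residuals.

            por: estimated point of regard, shape (2,).
            calib_targets: (N, 2) calibration target positions.
            calib_offsets: (N, 2) residuals (estimated minus true) at those targets."""
            por = np.asarray(por, dtype=float)
            d = np.linalg.norm(calib_targets - por, axis=1)
            nearest = np.argsort(d)[:k]                  # k calibration targets closest to the gaze
            w = 1.0 / (d[nearest] + eps)
            w /= w.sum()
            local_bias = (w[:, None] * calib_offsets[nearest]).sum(axis=0)
            return por - local_bias                      # remove the locally interpolated offset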

    Radi-Eye: Hands-Free Radial Interfaces for 3D Interaction using Gaze-Activated Head-Crossing

    Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection followed by head-crossing as the trigger and for manipulation. The technique leverages natural eye-head coordination, where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
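
    As a rough illustration of the Look & Cross idea, selection can be modelled as a boundary-crossing test gated by gaze: a widget is pre-selected while the gaze rests on it, and the trigger fires when the head pointer crosses into the widget during that pre-selection. The Python sketch below is a deliberate simplification (circular widgets in a 2D pointer space, hypothetical names), not the Radi-Eye implementation.

        from dataclasses import dataclass

        @dataclass
        class Widget:
            cx: float   # widget centre (pointer space)
            cy: float
            r: float    # widget radius

            def contains(self, x, y):
                return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

        def look_and_cross(widget, gaze, head_prev, head_now):
            """Return True when the widget should trigger on this frame."""
            preselected = widget.contains(*gaze)                              # eyes rest on the widget
            crossed = (not widget.contains(*head_prev)) and widget.contains(*head_now)
            return preselected and crossed                                    # head crosses in while looked at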

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers opportunities for people with impaired motor ability to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment, and also provide opportunities for social interaction in multi-player environments. Games are also being used increasingly in education to motivate and engage young people. It is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period starting in the early 1990s that has investigated interaction techniques based on gaze position intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction. Of these, overcoming the inherent inaccuracies of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and on-line virtual worlds. Different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase of the work then studied the applicability of the research findings thus far to groups of people with motor impairments and, in particular, the means of adapting the interaction techniques to individual abilities. In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing the tasks in specific types of games, and how these techniques can be adapted to the differing abilities of individuals with motor impairments.

    Gaze Controlled Applications and Optical-See-Through Displays - General Conditions for Gaze Driven Companion Technologies

    Gaze-based human-computer interaction has been a research topic for over a quarter century. Since then, the main scenario for gaze interaction has been helping people with disabilities to communicate and interact with their environment. With the rapid development of mobile and wearable display technologies, a new application field for gaze interaction has appeared, opening new research questions. This thesis investigates the feasibility of mobile gaze-based interaction, studying in depth the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual issues on head-mounted (wearable) optical see-through displays. It reviews conventional gaze-based selection methods and investigates in detail the use of pie menus for gaze control, studying and discussing layout issues, selection methods and applications. Results show that pie menus can allocate up to six items in width and multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. Based on these results, several text entry methods based on pie menus are proposed. Character-by-character text entry, text entry with bigrams, and text entry with bigrams derived by word prediction, as well as possible selection methods, are examined in a longitudinal study. The data showed large advantages of the bigram entry methods over single-character text entry in speed and accuracy. Participants preferred the novel selection method based on saccades (selecting by borders) over the conventional and well-established dwell time method. On the one hand, pie menus proved to be a feasible and robust widget, which may enable the efficient use of mobile eye tracking systems that are not accurate enough for controlling elements on a conventional interface. On the other hand, visual perception on mobile display technologies needs to be examined in order to determine whether these results transfer to mobile devices. Optical see-through devices enable observers to see additional information embedded in real environments, and there is already some evidence of increased visual load on such systems. We investigated visual performance with visual search tasks and dual tasks, presenting visual stimuli on the optical see-through device only, on a computer screen only, and simultaneously on both devices. Results showed that switching between the presentation devices (i.e. perceiving information simultaneously from both devices) produced costs in visual performance. The implications of these costs and of further perceptual and technical factors for mobile gaze-based interaction are discussed and solutions are proposed.

    Gaze-based human-computer interaction has been a relevant research topic for a quarter century. For most of that time, the use of gaze control was limited to enabling people with disabilities to communicate: ALS patients, for example, can write texts, steer wheelchairs and express their needs using only their eye movements. The rapid development of mobile devices and wearable display technologies has opened a new field of application and, with it, new research questions. In this dissertation, fundamental gaze-based interaction techniques were developed and investigated that exploit the full potential of eye movements as an input modality.

    Gaze is characterised by being the fastest motor movement and by being controlled involuntarily; it thus reflects attentional processes. Gaze can therefore serve not only as a means of input but also reveals something about the intentions and motives of the user. Exploiting this for computer control can make gaze input extremely simple and efficient, not only for motor-impaired but also for able-bodied users. This thesis investigates the feasibility of mobile gaze control. It examines in detail the use of pie menus as a generic and robust widget for gaze control, as well as visual and perceptual aspects of mobile optical see-through displays. The work summarises conventional gaze-based interaction methods and examines in detail the use of pie menus for gaze control, investigating and discussing layout issues, selection methods and applications of pie menus. The results show that pie menus can accommodate up to six items in width and depth, across several layers, so that fast and precise navigation through the hierarchical levels is ensured; by using or combining several selection methods, efficient and effective interaction can be achieved. Building on these results, several pie-menu-based text entry systems were developed, based on the entry of single characters, bigrams, and words predicted from bigrams. These systems, together with two selection methods for gaze-based interaction, were evaluated. The results show significant differences in text entry speed and accuracy in favour of the bigram-based text entry system compared with character-by-character entry. Participants preferred the new saccade-based selection method (selecting by borders) over the conventional and well-established dwell time method. Pie menus proved to be a practical and robust widget that can enable the efficient use of mobile eye tracking systems and displays even at low accuracy. Nevertheless, visual perception in mobile optical see-through displays must be examined in order to ensure that the findings above can be transferred to such devices.

    The goal of AR output devices is to enrich the real environment with virtual information. The underlying assumption is that the virtual information blends into the real environment, i.e. that observers integrate virtual and real information into a single image. From a psychological perspective it is plausible that information presented in spatio-temporal proximity is merged; on the other hand, complete integration can only occur if the presentation can be perceived as unified. Two fundamental points speak against this: the self-luminance of the virtual information, and its size and distance cues. The self-luminance of information displayed by an AR device is problematic because this property is hardly ever present in real objects. Accordingly, complete integration of information from the AR device with other information should at best be possible when the real information consists of stimuli on a computer monitor, which are likewise self-luminous; for other real objects, the luminance of the overlaid information alone should constitute an essential distinguishing feature. Another important distinguishing feature is size information, which contributes substantially to distance estimation: in the real world, objects moving away from the observer are projected onto increasingly smaller retinal areas, so objects of equal size must appear smaller with increasing viewing distance. With AR technology, however, objects are projected at a constant retinal size, as with an afterimage, regardless of where in depth they are currently localised. Since the objects are usually perceived on or in front of the nearest background, the size information of virtual objects cannot be used for distance perception; it can even produce contradictory depth cues when the distance to a background increases but the stimuli still occupy the same retinal areas and are therefore perceived as larger with growing distance. For this thesis, three experimental set-ups were developed, each examining in detail specific aspects of the simultaneous perception of information on an AR device and in the real environment. The results showed that the simultaneous perception of information from real and virtual media comes with costs in visual performance. Further investigations showed that, when virtual and real stimuli are presented simultaneously, the visual system must constantly readjust vergence and accommodation, which could explain the visual strain observed in numerous studies. The implications of these switching costs, and of further perceptual and technical factors, for mobile gaze-based interaction are discussed and solutions are proposed.
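
    As a rough illustration of how a gaze-controlled pie menu with up to six slices and border-based selection might be hit-tested, the sketch below maps the gaze position to a slice by angle and triggers a selection when the gaze crosses the menu's outer border. The geometry, slice count and names are illustrative assumptions, not the implementation evaluated in the thesis.

        import math

        N_SLICES = 6                      # up to six items per level, as reported above
        INNER_R, OUTER_R = 40.0, 160.0    # hypothetical menu radii in pixels

        def slice_under_gaze(gaze, center):
            """Return the index of the slice the gaze falls in, or None if outside the ring."""
            dx, dy = gaze[0] - center[0], gaze[1] - center[1]
            r = math.hypot(dx, dy)
            if not (INNER_R <= r <= OUTER_R):
                return None
            angle = math.atan2(dy, dx) % (2 * math.pi)
            return int(angle / (2 * math.pi / N_SLICES))

        def border_selection(prev_gaze, gaze, center):
            """Trigger the slice whose outer border the gaze has just crossed outward."""
            was_inside = math.hypot(prev_gaze[0] - center[0], prev_gaze[1] - center[1]) <= OUTER_R
            is_inside = math.hypot(gaze[0] - center[0], gaze[1] - center[1]) <= OUTER_R
            if was_inside and not is_inside:
                return slice_under_gaze(prev_gaze, center)   # select the slice being left
            return None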

    A Preliminary Review of Eye Tracking Research in Interpreting Studies: Retrospect and Prospects

    The field of Interpreting Studies (IS) has witnessed an exponential increase in the development of new data-gathering techniques aimed at investigating some of the underlying cognitive and psychological processes. The present article provides a preliminary look into research studies applying eye tracking technology in the field of IS over the past few decades. It also aims to explore the theoretical basis for different applications of eye tracking equipment in the investigation of the cognitive processes underlying interpreting, by analyzing empirical research studies related to cognitive aspects of translation. The sampled studies are analyzed in terms of the contribution they make to the joint development of eye tracking research in IS, the methodology used, and the way data are processed and presented. Finally, the article concludes with a discussion of future research, focusing on possible developments and applications of eye tracking in authentic interpreting situational contexts. The final section presents new challenges and opportunities for unexplored applications of eye tracking in the field of IS. It is argued that interdisciplinary approaches can show the full range of possibilities of eye tracking research in the field.

    Evaluation of tactile feedback on dwell time progression in eye typing

    Haptic feedback is known to be important in manual interfaces. However, gaze-based interactive systems usually do not involve haptic feedback. In this thesis, I investigated whether an eye typing system, which uses an eye tracker as an input device, can benefit from tactile feedback as an indication of dwell time progression. Dwell time is an effective selection method in eye typing systems: the user keeps his or her gaze on an element for a predetermined amount of time to activate it. The tactile feedback was given by a vibrotactile actuator to the participant's finger, which rested on top of the actuator. This thesis reports a comparison of three different tactile feedback conditions for dwell time progression during eye typing: "Ascending" feedback, "Warning" feedback and "No dwell" feedback (i.e. no feedback given for dwell). The conditions were compared in a within-participants experiment where each participant used the eye typing system with all feedback conditions in a counterbalanced order. Two sessions were conducted to observe learning effects. The comparison consisted of quantitative and qualitative measures. The quantitative data included text entry speed in words per minute (WPM), error rate, keystrokes per character (KSPC), read text events (RTE) and re-focus events (RFE). RTE referred to events in which the participant moved the gaze to the text input field, and RFE occurred when the participant moved the gaze away from a key too early, thus requiring a re-focus on the same key. The qualitative data were collected from the participants' answers to questionnaires. The quantitative results reflected a learning effect between the two sessions in all three conditions. KSPC showed a statistically significant difference between the feedback conditions: "No dwell" feedback was associated with lower KSPC than "Ascending" feedback, indicating that "Ascending" feedback led to more extra effort by the participants. The qualitative data did not indicate any statistically significant difference among the feedback conditions or between the sessions. However, more research with different types of haptic actuators is required to validate the results.
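
    For reference, the two headline metrics reported above are commonly computed as in the sketch below; it uses the conventional five-characters-per-word normalisation for WPM and the usual KSPC definition, which may differ in detail from the formulas used in the thesis.

        def words_per_minute(transcribed_text, seconds):
            """WPM with the conventional 5-characters-per-word normalisation."""
            return (len(transcribed_text) / 5.0) / (seconds / 60.0)

        def keystrokes_per_character(keystrokes, transcribed_text):
            """KSPC: total key selections divided by characters in the final text."""
            return keystrokes / max(len(transcribed_text), 1)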