
    Gaze Path Stimulation in Retrospective Think-Aloud

    For a long time, eye tracking has been thought of as a promising method for usability testing. During the last couple of years, eye tracking has finally started to live up to these expectations, at least in terms of its use in usability laboratories. We know that the user's gaze path can reveal usability issues that would otherwise go unnoticed, but a common understanding of how best to make use of eye movement data has not been reached. Many usability practitioners seem to have intuitively started to use gaze path replays to stimulate recall for a retrospective walkthrough of the usability test. We review the research on think-aloud protocols in usability testing and the use of eye tracking in the context of usability evaluation. We also report our own experiment in which we compared the standard, concurrent think-aloud method with the gaze path stimulated retrospective think-aloud method. Our results suggest that the gaze path stimulated retrospective think-aloud method produces more verbal data, and that the data are more informative and of better quality because the drawbacks of concurrent think-aloud have been avoided.

    Keeping an eye on the game: Eye gaze interaction with massively multiplayer online games and virtual communities for motor impaired users.

    Online virtual communities are becoming increasingly popular within both able-bodied and disabled user communities. These games assume the use of keyboard and mouse as standard input devices, which in some cases is not appropriate for users with a disability. This paper explores gaze-based interaction methods and highlights the problems associated with gaze control of online virtual worlds. The paper then presents a novel 'Snap Clutch' software tool that addresses these problems and enables gaze control. The tool is tested with an experiment showing that effective gaze control is possible, although task times are longer. Errors caused by gaze control are identified and potential methods for reducing these are discussed. Finally, the paper demonstrates that gaze-driven locomotion can potentially achieve parity with mouse-and-keyboard-driven locomotion, and shows that gaze is a viable modality for game-based locomotion for able-bodied and disabled users alike.
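
    The mechanics can be pictured as mapping the gaze position to emulated movement keystrokes. The sketch below is only an illustration of that general idea, not the Snap Clutch implementation: the screen-edge regions, the dwell threshold and the get_gaze() source are assumptions introduced for the example.

```python
# Minimal sketch (not the Snap Clutch implementation): emulating the arrow keys
# used for avatar locomotion from streamed gaze coordinates. The edge bands,
# dwell threshold and get_gaze() source are illustrative assumptions.
import time
import pyautogui  # emulates key presses for the game client

SCREEN_W, SCREEN_H = pyautogui.size()
DWELL_S = 0.3             # how long gaze must rest in a region before it "latches"
REGION_KEYS = {           # top/bottom/left/right bands of the screen -> movement keys
    "top": "up", "bottom": "down", "left": "left", "right": "right",
}

def region_of(x, y, band=0.15):
    """Classify a gaze point into an edge band or the neutral centre."""
    if y < SCREEN_H * band:
        return "top"
    if y > SCREEN_H * (1 - band):
        return "bottom"
    if x < SCREEN_W * band:
        return "left"
    if x > SCREEN_W * (1 - band):
        return "right"
    return "centre"

def locomotion_loop(get_gaze):
    """get_gaze() -> (x, y) in screen pixels, supplied by whatever tracker is in use."""
    held = None                   # key currently held down
    candidate, since = None, 0.0  # region the gaze is currently dwelling in
    while True:
        region = region_of(*get_gaze())
        if region != candidate:
            candidate, since = region, time.time()
        dwelled = time.time() - since >= DWELL_S
        want = REGION_KEYS.get(candidate) if dwelled else held
        if want != held:
            if held:
                pyautogui.keyUp(held)
            if want:
                pyautogui.keyDown(want)
            held = want
        time.sleep(0.02)
```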

    User performance of gaze-based interaction with on-line virtual communities.

    We present the results of an investigation into gaze-based interaction techniques with on-line virtual communities. The purpose of this study was to gain a better understanding of user performance with a gaze interaction technique developed for interacting with 3D graphical on-line communities and games. The study involved 12 participants, each of whom carried out 2 equivalent sets of 3 tasks in a world created in Second Life. One set was carried out using a keystroke and mouse emulator driven by gaze, and the other set was carried out with the normal keyboard and mouse. The study demonstrates that subjects were easily able to perform a set of tasks with eye gaze with only a minimal amount of training. It has also identified the causes of user errors and the amount of performance improvement that could be expected if the causes of these errors can be designed out.

    Framing or Gaming? Constructing a Study to Explore the Impact of Option Presentation on Consumers

    The manner in which choice is framed influences individuals' decision-making. This research examines the impact of different decision constructs on decision-making by focusing on the more problematic decision constructs: the un-selected and the pre-selected opt-out. The study employs eye-tracking with cued retrospective think-aloud (RTA) to combine quantitative and qualitative data. Eye-tracking will determine how long a user focuses on a decision construct before taking action. Cued RTA, where the user is shown a playback of their interaction, will be used to explore their attitudes towards a decision construct and to identify problematic designs. This pilot begins the second phase of a three-phase study, which ultimately aims to develop a research model containing the theoretical constructs along with hypothesised causal associations between them, to reveal the impact that measures such as decision construct type, default value type and question framing have on the perceived value of the website and on loyalty intentions.

    Gender and gaze gesture recognition for human-computer interaction

    © 2016 Elsevier Inc. The identification of visual cues in facial images has been widely explored in the broad area of computer vision. However, theoretical analyses are often not transformed into widespread assistive Human-Computer Interaction (HCI) systems, due to factors such as inconsistent robustness, low efficiency, large computational expense or strong dependence on complex hardware. We present a novel gender recognition algorithm, a modular eye centre localisation approach and a gaze gesture recognition method, aiming to escalate the intelligence, adaptability and interactivity of HCI systems by combining demographic data (gender) and behavioural data (gaze) to enable development of a range of real-world assistive-technology applications. The gender recognition algorithm utilises Fisher Vectors as facial features, which are encoded from low-level local features in facial images. We experimented with four types of low-level features: greyscale values, Local Binary Patterns (LBP), LBP histograms and Scale Invariant Feature Transform (SIFT). The corresponding Fisher Vectors were classified using a linear Support Vector Machine. The algorithm has been tested on the FERET database, the LFW database and the FRGCv2 database, yielding 97.7%, 92.5% and 96.7% accuracy respectively. The eye centre localisation algorithm has a modular approach, following a coarse-to-fine, global-to-regional scheme and utilising isophote and gradient features. A Selective Oriented Gradient filter has been specifically designed to detect and remove strong gradients from eyebrows, eye corners and self-shadows (which sabotage most eye centre localisation methods). The trajectories of the eye centres are then defined as gaze gestures for active HCI. The eye centre localisation algorithm has been compared with 10 other state-of-the-art algorithms with similar functionality and has outperformed them in terms of accuracy while maintaining excellent real-time performance. The above methods have been employed in the development of a data-recovery system that can be used to implement advanced assistive-technology tools. The high accuracy, reliability and real-time performance achieved for attention monitoring, gaze gesture control and recovery of demographic data can enable the advanced human-robot interaction that is needed for developing systems that provide assistance with everyday actions, thereby improving the quality of life for the elderly and/or disabled.
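
    As an illustration of the classification pipeline the abstract describes (local descriptors encoded as Fisher Vectors and classified with a linear SVM), the following sketch uses SIFT descriptors from OpenCV and a diagonal-covariance GMM from scikit-learn as the generative codebook. The number of mixture components and the training interface are assumptions, not the paper's settings.

```python
# Sketch of a Fisher Vector + linear SVM gender classifier (illustrative only).
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

K = 64  # number of GMM components (assumed, not from the paper)

def sift_descriptors(gray_img):
    """Extract 128-D SIFT descriptors from a greyscale face image."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray_img, None)
    return desc if desc is not None else np.zeros((1, 128), np.float32)

def fisher_vector(desc, gmm):
    """Encode one image's local descriptors as a Fisher Vector (mean + variance parts)."""
    desc = desc.astype(np.float64)
    T = desc.shape[0]
    gamma = gmm.predict_proba(desc)                                # (T, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_        # diag covariances
    diff = (desc[:, None, :] - mu[None]) / np.sqrt(var)[None]      # (T, K, D)
    g_mu = (gamma[..., None] * diff).sum(0) / (T * np.sqrt(w)[:, None])
    g_var = (gamma[..., None] * (diff**2 - 1)).sum(0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                         # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                       # L2 normalisation

def train(train_images, labels):
    """train_images: list of greyscale face crops; labels: binary gender labels."""
    all_desc = [sift_descriptors(im) for im in train_images]
    gmm = GaussianMixture(K, covariance_type="diag").fit(np.vstack(all_desc))
    X = np.array([fisher_vector(d, gmm) for d in all_desc])
    clf = LinearSVC().fit(X, labels)
    return gmm, clf
```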

    Defining brain–machine interface applications by matching interface performance with device requirements

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, again in terms of throughput and latency. Then device requirements are matched with the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications. © 2007 Elsevier B.V. All rights reserved.
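
    The matching step can be made concrete with a small sketch: each device is described by the throughput and latency it requires, each interface by the throughput and latency it delivers, and a pairing is feasible when the interface's performance covers the device's requirement. The numeric values below are illustrative placeholders, not figures from the paper.

```python
# Illustrative sketch of requirement/performance matching; values are placeholders.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    min_throughput_bps: float   # information rate the device needs (bits/s)
    max_latency_s: float        # slowest acceptable command latency (s)

@dataclass
class Interface:
    name: str
    throughput_bps: float       # information rate the interface delivers
    latency_s: float            # typical command latency

def feasible_pairs(devices, interfaces):
    """Return (device, interface) pairs where performance covers the requirement."""
    return [(d.name, i.name)
            for d in devices for i in interfaces
            if i.throughput_bps >= d.min_throughput_bps
            and i.latency_s <= d.max_latency_s]

# Hypothetical example values for demonstration only
devices = [Requirement("light switch (domotics)", 0.5, 5.0),
           Requirement("wheelchair steering", 5.0, 1.0),
           Requirement("robotic arm", 50.0, 0.2)]
interfaces = [Interface("P300 BCI", 1.0, 4.0),
              Interface("motor-imagery BCI", 3.0, 1.0),
              Interface("eye tracker", 30.0, 0.1)]

print(feasible_pairs(devices, interfaces))
```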

    Eyes in Attentive Interfaces: Experiences from Creating iDict, a Gaze-Aware Reading Aid

    When you converse with another person, what do you pay attention to besides the speech itself? Not only your conversation partner's spontaneous gestures and expressions but also the behaviour of their gaze helps you interpret the message being conveyed. In computer user interfaces, direct manipulation of objects with the mouse was once a big step towards easier-to-use interfaces, but our natural means of communication could nowadays be exploited far more extensively in human-computer communication as well. If the computer were aware of the current focus of the user's attention, we could develop applications that understand the user better and adapt to the user's actions. The most commonly used technique for tracking the point of gaze is based on analysing video images of the eye. Real-time gaze tracking is so far used very little, mainly in applications designed for people with disabilities. In certain situations gaze may even be the only means of communication: entire books have been written with the eyes. Gaze tracking can, however, also be used to improve more general-purpose user interfaces. For example, the target of a command can be inferred directly from the gaze without having to point at it with the mouse, or the program may try to direct the user's attention if it detects that the user is missing information that is essential in a given situation. This dissertation, examined at the University of Tampere, focuses on how information about the target of the user's gaze can be exploited in the human-computer interface. The topic is explored through an example application, iDict. iDict assists readers of foreign-language documents by tracking the reader's gaze while the text is being read and automatically providing help when the reader is assumed to need it. The work can be regarded as a demonstration that, with gaze tracking, applications intended for everyone can be made more efficient and more pleasant to use. Owing to the physiological properties of the human visual system, the accuracy with which the point of gaze can be measured is limited. The work shows, among other things, that when text is being read, errors in determining the point of gaze can be compensated for algorithmically. User tests showed that the concept is well worth developing further. More than half of the test participants would have been willing to adopt the gaze-based application rather than the mouse-operated one, even though using the mouse was familiar to everyone and using gaze-tracking data as program "input" was previously unknown to them.
    The mouse and keyboard currently serve as the predominant means of passing information from user to computer. Direct manipulation of objects via the mouse was a breakthrough in the design of more natural and intuitive user interfaces for computers. However, in real life we have a rich set of communication methods at our disposal; when interacting with others, we, for example, interpret their gestures, expressions, and eye movements. This information can also be used when moving human-computer interaction toward the more natural and effective. In particular, the focus of the user's attention could often be a valuable source of information. The focus of this work is on examining the benefits and limitations of using the information acquired from a user's eye movements in the human-computer interface. For this purpose, we developed an example application, iDict. The application assists the reader of an electronic document written in a foreign language by tracking the reader's eye movements and providing assistance automatically when the reader seems to be in need of help. The dissertation is divided into three parts. The first part presents the physiological and psychological basics behind the measurement of eye movements, and we also provide a survey of both the applications that make use of eye tracking and the relevant research into eye movements during reading. The second part introduces the iDict application, from both the user's and the implementer's point of view. Finally, the work presents the experiments that were performed either to inform design decisions or to test the performance of the application. This work is proof that gaze-aware applications can be more pleasing and effective than traditional application interfaces. The human visual system imposes limits on the accuracy of eye tracking, which is why we, for example, are unable to narrow down with certainty the reader's focus of gaze to a target word. This work demonstrates, however, that errors in interpreting the focus of visual attention can be algorithmically compensated. Additionally, we conclude that the total time spent on a word is a reasonably good indicator in judging comprehension difficulties. User tests with iDict were encouraging. More than half of the users preferred using eye movements to the option of using the application traditionally with the mouse. The result was obtained even though the test users were familiar with using a mouse but not with the concept of the eye as an input device.
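
    One way to picture the triggering logic described above is to accumulate the total gaze time on each word and offer help once it crosses a threshold. The sketch below is a hypothetical illustration, not the iDict implementation; the fixation format, word bounding boxes and 600 ms threshold are assumptions introduced for the example.

```python
# Minimal sketch (not the iDict implementation) of dwell-time-triggered assistance.
from collections import defaultdict

HELP_THRESHOLD_MS = 600   # assumed total gaze time indicating a comprehension difficulty

def word_at(x, y, word_boxes):
    """word_boxes: {word_id: (left, top, right, bottom)} in screen coordinates."""
    for word_id, (left, top, right, bottom) in word_boxes.items():
        if left <= x <= right and top <= y <= bottom:
            return word_id
    return None

def words_needing_help(fixations, word_boxes):
    """fixations: iterable of (x, y, duration_ms); returns word ids to assist with."""
    total_time = defaultdict(float)
    flagged = []
    for x, y, duration_ms in fixations:
        word_id = word_at(x, y, word_boxes)
        if word_id is None:
            continue
        total_time[word_id] += duration_ms
        if total_time[word_id] >= HELP_THRESHOLD_MS and word_id not in flagged:
            flagged.append(word_id)   # e.g. show a gloss or translation for this word
    return flagged
```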