8 research outputs found

    Selection strategies in gaze interaction

    This thesis deals with selection strategies in gaze interaction, specifically for contexts where gaze is the sole input modality for users with severe motor impairments. The goal has been to contribute to the subfield of assistive technology in which gaze interaction is necessary for the user to achieve autonomous communication and environmental control. From a theoretical point of view, research has been done on the physiology of the gaze and on eye tracking technology, and a taxonomy of existing selection strategies has been developed. Empirically, two overall approaches have been taken. First, end-user research has been conducted through interviews and observation, exploring the capabilities, requirements, and wants of the end users. Second, several applications have been developed to explore the selection strategy of single stroke gaze gestures (SSGG) and aspects of complex gaze gestures. The main finding is that single stroke gaze gestures can successfully be used as a selection strategy. Among the features of SSGG: horizontal strokes are faster than vertical strokes; there is a significant difference in completion time depending on gesture length; gestures can be completed without visual feedback; gaze tracking equipment has a significant effect on completion times and error rates; and there is no significantly greater chance of making selection errors with SSGG than with dwell selection. The overall conclusion is that the future of gaze interaction should focus on developing multi-modal interactions for mono-modal input.
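    As a minimal illustration of the kind of stroke classification that SSGG selection relies on, the sketch below labels a gaze stroke by its dominant axis. The direction names and the minimum stroke length are assumptions for illustration, not values from the thesis.

    ```python
    import math

    def classify_ssgg(start, end, min_length=200.0):
        """Classify a single stroke gaze gesture from its start and end
        gaze coordinates (pixels). Returns a direction label, or None
        if the stroke is too short to count as deliberate."""
        dx, dy = end[0] - start[0], end[1] - start[1]
        if math.hypot(dx, dy) < min_length:
            return None  # too short: likely noise or an ordinary fixation shift
        # dominant axis decides between horizontal and vertical strokes
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    print(classify_ssgg((100, 300), (500, 320)))  # long horizontal stroke -> "right"
    print(classify_ssgg((100, 300), (150, 310)))  # below min_length -> None
    ```

    A real system would also low-pass filter the gaze samples and check stroke duration, but the dominant-axis test captures the core idea.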

    Eye&Head: Synergetic Eye and Head Movement for Gaze Pointing and Selection

    Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.
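    The head-supported versus eyes-only distinction that these techniques build on can be sketched as a simple classifier over gaze-shift amplitudes. The amplitude-based criterion and the threshold below are illustrative assumptions, not the paper's actual detector.

    ```python
    def classify_gaze_shift(eye_deg, head_deg, head_threshold=2.0):
        """Label a gaze shift as head-supported or eyes-only from the
        angular amplitudes (degrees) contributed by eyes and head.
        Also returns the head's share of the total shift."""
        total = eye_deg + head_deg
        kind = "head-supported" if head_deg >= head_threshold else "eyes-only"
        return kind, head_deg / total if total else 0.0

    print(classify_gaze_shift(10.0, 0.5))   # small head motion -> eyes-only
    print(classify_gaze_shift(15.0, 20.0))  # large head motion -> head-supported
    ```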

    Comparing Dwell Time, Pursuits and Gaze Gestures for Gaze Interaction on Handheld Mobile Devices

    Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12 and 32 targets via gaze while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures, especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.
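    Pursuits-style selection is commonly implemented by correlating the user's gaze trajectory with the trajectory of each moving on-screen target and selecting the best-matching one. The sketch below correlates a single axis with an assumed threshold; it illustrates the general idea, not this study's implementation.

    ```python
    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    def pursuits_select(gaze_x, targets, threshold=0.8):
        """Return the name of the target whose trajectory correlates
        best with the gaze trajectory, if above the threshold."""
        best, best_r = None, threshold
        for name, traj in targets.items():
            r = pearson(gaze_x, traj)
            if r > best_r:
                best, best_r = name, r
        return best

    # gaze smoothly follows target "A", moves opposite to "B"
    gaze = [0, 1, 2, 3, 4, 5]
    print(pursuits_select(gaze, {"A": [0, 1, 2, 3, 4, 5], "B": [5, 4, 3, 2, 1, 0]}))
    ```

    Real implementations correlate both the x and y axes over a sliding window and require both correlations to exceed the threshold.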

    Gaze Gestures in Interaction with Problem-Solving

    This MSc thesis was carried out during a study stay at the University of Eastern Finland. The thesis focuses on analysing eye movements as a description of human intent. Eye movements and pupil dilation during the problem-solving game 8Puzzle were extracted and classified against button presses that expressed the user's intention to move a selected puzzle tile. The extracted sequences represent involuntary behaviour of the eye, so-called gaze gestures, and serve as the source of feature vectors. The extracted eye movement features describe both intentional and non-intentional gaze gestures. In addition, pupil dilation was employed as a source of information for distinguishing between desired and unwanted interaction. This machine learning task consists of binary classification using a predictive model based on support vector machines with an RBF kernel. The effect of the normalization type was also examined to reveal how various approaches influence overall classification performance. The results were measured using the Area under the Curve. The findings reveal significantly better performance for features based on fixations and saccades, whereas the performance of pupillary-response features remained at the level of a random classifier. The findings encourage further study of the relationship between intentional and non-intentional eye movements, even though real-time classification based on these features would not yet be fully reliable.
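    The Area under the Curve used to evaluate the classifier can be computed with the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counting half. This is a generic sketch of the metric, not code from the thesis; a random classifier scores around 0.5 on it.

    ```python
    def auc(labels, scores):
        """Area under the ROC curve via pairwise comparison of
        positive (label 1) and negative (label 0) example scores."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # perfect separation -> 1.0
    ```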

    Eye-gaze interaction techniques for use in online games and environments for users with severe physical disabilities.

    Multi-User Virtual Environments (MUVEs) and Massively Multi-player Online Games (MMOGs) are a popular, immersive genre of computer game. For some disabled users, eye-gaze offers the only input modality with the potential for sufficiently high bandwidth to support the range of time-critical interaction tasks required to play. Although there has been much research into gaze interaction techniques over the past twenty years, much of it has focused on 2D desktop application control. There has been some work investigating gaze as an additional input device for gaming, but very little on using gaze on its own. Further, configuration of these techniques usually requires expert knowledge, often beyond the capabilities of a parent, carer, or support worker. The work presented in this thesis addresses these issues through the investigation of novel gaze-only interaction techniques. These enable at least a beginner level of game play, together with a means of adapting the techniques to suit an individual. To achieve this, a collection of novel gaze-based interaction techniques has been evaluated through empirical studies. These have been encompassed within an extensible software architecture that has been made available for free download. Further, a metric of reliability is developed that, when used within a specially designed diagnostic test, allows an interaction technique to be adapted to suit an individual. Methods of selecting interaction techniques based upon game task are also explored, and a novel methodology based on expert task analysis is developed to aid selection.

    Coordinated Eye and Head Movements for Gaze Interaction in 3D Environments

    Gaze is attractive for interaction, as we naturally look at objects we are interested in. As a result, gaze has received significant attention within human-computer interaction as an input modality. However, gaze has been limited to eye movements alone in situations where head movements are not expected, or to head movements as an approximation of gaze when an eye tracker is unavailable. From these observations arise an opportunity and a challenge: we propose to consider gaze as multi-modal, in line with psychology and neuroscience research, to more accurately represent user movements. The natural coordination of eye and head movements could then enable the development of novel interaction techniques that further the possibilities of gaze as an input modality. However, knowledge of eye and head coordination in 3D environments, and of its use for interaction design, is limited. This thesis explores eye and head coordination and their potential for interaction in 3D environments by developing interaction techniques that aim to tackle established gaze-interaction issues. We study fundamental eye, head, and body movements in virtual reality during gaze shifts. From the study results, we design interaction techniques and applications that avoid the Midas touch issue, allow expressive gaze-based interaction, and handle eye tracking accuracy issues. We ground the evaluation of our interaction techniques in empirical studies. From the techniques and study results, we define three design principles for coordinated eye and head interaction: distinguishing between eye-only and head-supported gaze shifts, using eye-head alignment as input, and separating head movements made as gestures from head movements that occur naturally to support gaze. We showcase new directions for gaze-based interaction and present a new way to think about gaze, taking a more comprehensive approach to gaze interaction and showing that there is more to gaze than just the eyes.

    Analysis and extension of hierarchical temporal memory for multivariable time series

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, June 201

    Single gaze gestures
