
    Keeping an eye on the game: Eye gaze interaction with massively multiplayer online games and virtual communities for motor impaired users.

    Online virtual communities are becoming increasingly popular among both able-bodied and disabled users. These games assume a keyboard and mouse as the standard input devices, which is not appropriate for some users with a disability. This paper explores gaze-based interaction methods and highlights the problems associated with gaze control of online virtual worlds. It then presents a novel ‘Snap Clutch’ software tool that addresses these problems and enables gaze control. The tool is tested in an experiment showing that effective gaze control is possible, although task times are longer. Errors caused by gaze control are identified, and potential methods for reducing them are discussed. Finally, the paper demonstrates that gaze-driven locomotion can potentially achieve parity with mouse- and keyboard-driven locomotion, showing that gaze is a viable modality for game-based locomotion for able-bodied and disabled users alike.
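    The core idea behind a clutch mechanism of this kind is that deliberate glances off the screen edges switch between gaze-control modes, so that ordinary on-screen gaze is never misinterpreted as a command. A minimal sketch of that idea, with mode names, screen dimensions, and edge-to-mode mappings invented for illustration (not taken from the paper's implementation):

    ```python
    # Hypothetical sketch of a Snap Clutch-style mode switcher: a glance past
    # a screen edge acts as the "clutch" that changes interaction mode.

    SCREEN_W, SCREEN_H = 1920, 1080

    # Each off-screen edge maps to a different gaze-control mode (illustrative).
    EDGE_MODES = {"left": "locomotion", "right": "mouse", "top": "camera", "bottom": "off"}

    def classify_edge(x, y):
        """Return which screen edge a gaze sample has crossed, if any."""
        if x < 0:
            return "left"
        if x >= SCREEN_W:
            return "right"
        if y < 0:
            return "top"
        if y >= SCREEN_H:
            return "bottom"
        return None  # gaze is on-screen

    class SnapClutch:
        def __init__(self):
            self.mode = "off"

        def update(self, x, y):
            """Feed one gaze sample; off-screen glances switch the mode."""
            edge = classify_edge(x, y)
            if edge is not None:
                self.mode = EDGE_MODES[edge]
            return self.mode
    ```

    On-screen samples leave the mode untouched, so the user can look freely within the current mode; only the deliberate off-screen glance changes state.
    
    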

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers people with impaired motor ability opportunities to overcome the difficulties they often face with hand-held input devices. Computer games have become a major form of entertainment and provide opportunities for social interaction in multi-player environments. Games are also used increasingly in education to motivate and engage young people, and it is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period starting in the early 1990s that has investigated gaze-based interaction techniques intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed, and divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction; overcoming the inherent inaccuracy of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and on-line virtual worlds, and different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase then studied the applicability of the research findings to groups of people with motor impairments, and in particular the means of adapting the interaction techniques to individual abilities.
In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing tasks in specific types of games, and how these techniques can be adapted to the differing abilities of individuals with motor impairments.

    Human-Robot Interaction Based on Gaze Gestures for the Drone Teleoperation

    Teleoperation has been widely used to perform tasks in dangerous or unreachable environments by replacing humans with controlled agents, and human-robot interaction (HRI) is central to it. Conventional HRI input devices include the keyboard, mouse and joystick; however, these are not suitable for users with disabilities, and they increase the mental workload of other users, who must operate multiple input devices by hand simultaneously. Hence, HRI based on gaze tracking with an eye tracker is presented in this study. Object selection is of great importance and occurs at high frequency during HRI control, so this paper introduces gaze gestures as an object-selection strategy for drone teleoperation. To test and validate the performance of the gaze-gesture selection strategy, we evaluate objective and subjective measures. Drone control performance, including mean task completion time and mean error rate, provides the objective measures; the subjective measure is an analysis of participant perception. The results showed that the gaze-gesture selection strategy has great potential as an additional HRI modality for agent teleoperation.
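    A common way to implement gaze-gesture selection is to reduce successive fixations to directional strokes and match the stroke sequence against per-object templates. The sketch below illustrates that general approach; the stroke threshold, gesture templates, and object names are all assumptions, not the paper's actual design:

    ```python
    # Illustrative gaze-gesture matcher: fixations -> cardinal strokes -> object.

    def stroke_direction(p, q, min_dist=80):
        """Classify the movement from fixation p to fixation q as a stroke."""
        dx, dy = q[0] - p[0], q[1] - p[1]
        if max(abs(dx), abs(dy)) < min_dist:
            return None  # movement too small to count as a stroke
        if abs(dx) >= abs(dy):
            return "R" if dx > 0 else "L"
        return "D" if dy > 0 else "U"

    # Each stroke sequence selects one object (hypothetical mapping).
    GESTURE_TO_OBJECT = {("R", "D"): "drone_1", ("L", "D"): "drone_2", ("R", "U"): "camera"}

    def select_object(fixations):
        """Turn a list of (x, y) fixations into strokes and look up the object."""
        strokes = []
        for p, q in zip(fixations, fixations[1:]):
            s = stroke_direction(p, q)
            if s:
                strokes.append(s)
        return GESTURE_TO_OBJECT.get(tuple(strokes))
    ```

    Because a gesture requires large, deliberate eye movements, casual glances at an object do not trigger selection, which is what makes gestures attractive as a selection strategy.
    
    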

    Games technology: console architectures, game engines and invisible interaction

    This presentation looks at three core developments in games technology. First, we look at the architectural foundations on which consoles are built to deliver games performance: millions of consoles are sold, and console performance is improving in parallel. Next, we look at the cutting-edge features available in game engines. Middleware software, namely game engines, helps developers build games with rich features while simultaneously harnessing the power of the game consoles to satisfy gamers. The third part focuses on invisible game interaction. The Nintendo Wii console was an instant success because of the Wiimote, which old and young alike embraced; the Microsoft Kinect pushed the boundary even further, to the point where the interaction device is becoming invisible and the human body becomes the interface. Finally, we look at novel research developments that go beyond current game interaction devices.

    Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry

    Human beings naturally make use of multi-sensory modalities for effective communication and for efficiently executing day-to-day tasks: during verbal conversation, for instance, we use voice, eyes, and various body gestures. Effective human-computer interaction likewise involves hands, eyes, and voice, where available. By combining multiple sensory modalities, we can make the whole process more natural and ensure enhanced performance, including for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye tracker with a soft-switch, which may be considered as representing another modality. This multi-modal HCI is applied to text entry using a virtual keyboard designed in-house to facilitate enhanced performance. Our experimental results demonstrate that multi-modal text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it solves the Midas-touch problem inherent in an eye-tracker-based HCI in which only dwell time is used to select a character.
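    The contrast between the two selection schemes can be made concrete in a small sketch. In dwell-only mode, fixating a key long enough commits it (so any long look "types", which is the Midas-touch problem); in multi-modal mode, the soft-switch press commits the currently gazed key. The class, method names, and the 0.8 s threshold are invented for illustration:

    ```python
    # Sketch contrasting dwell-only selection with gaze + soft-switch selection.

    DWELL_THRESHOLD = 0.8  # seconds of continuous gaze needed in dwell-only mode

    class KeySelector:
        def __init__(self, use_switch):
            self.use_switch = use_switch  # True = multi-modal (gaze + switch)
            self.current_key = None
            self.dwell = 0.0

        def update(self, key, dt, switch_pressed=False):
            """Feed one gaze sample; return the selected key, or None."""
            if key != self.current_key:
                self.current_key, self.dwell = key, 0.0  # gaze moved: reset dwell
            self.dwell += dt
            if self.use_switch:
                # Multi-modal: only an explicit switch press commits a selection,
                # so merely looking at a key never types it (no Midas touch).
                return key if switch_pressed else None
            # Dwell-only: any sufficiently long fixation commits the key.
            return key if self.dwell >= DWELL_THRESHOLD else None
    ```

    In the dwell-only branch the threshold trades speed against accidental selections; the switch removes that trade-off at the cost of requiring one extra (but minimal) motor action.
    
    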

    User performance of gaze-based interaction with on-line virtual communities.

    We present the results of an investigation into gaze-based interaction techniques with on-line virtual communities. The purpose of this study was to gain a better understanding of user performance with a gaze interaction technique developed for interacting with 3D graphical on-line communities and games. The study involved 12 participants, each of whom carried out two equivalent sets of three tasks in a world created in Second Life: one set using a keystroke and mouse emulator driven by gaze, and the other using the normal keyboard and mouse. The study demonstrates that subjects were easily able to perform a set of tasks with eye gaze after only a minimal amount of training. It also identifies the causes of user errors and the performance improvement that could be expected if these causes can be designed out.

    Gaze+Hold: Eyes-only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye

    The eyes are coupled in their gaze function and are therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold, an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of the interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing, which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability and users' spontaneous choice of eye for modulating input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability or workload; this is significant for the utility of Gaze+Hold, as it affords flexibility for mapping either eye in different configurations.
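    One way to picture the technique is a drag interaction: closing one eye plays the role of "button down" (the hold), while the still-open eye's gaze continuously moves the held object. The sketch below illustrates that state logic; the per-eye openness inputs and the object-lookup callback are assumptions for illustration, not the paper's implementation:

    ```python
    # Speculative sketch of a Gaze+Hold-style drag: exactly one closed eye
    # holds the object under the gaze point; the open eye's gaze moves it.

    class GazeHoldDrag:
        def __init__(self):
            self.held = None  # the object currently being dragged, if any

        def update(self, gaze_xy, left_open, right_open, object_at):
            """Process one tracker sample; return the held object or None.

            gaze_xy    -- (x, y) gaze position from the open eye
            left_open  -- whether the left eye is open (bool)
            right_open -- whether the right eye is open (bool)
            object_at  -- callback mapping a gaze point to the object there
            """
            one_eye_closed = left_open != right_open  # exactly one eye closed
            if one_eye_closed and self.held is None:
                self.held = object_at(gaze_xy)   # hold starts at the gaze point
            elif not one_eye_closed:
                self.held = None                 # both eyes open: release
            if self.held is not None:
                self.held["pos"] = gaze_xy       # continuous gaze drags it
            return self.held
    ```

    Because either eye can serve as the modulator, the same logic covers both the dominant-eye and non-dominant-eye configurations the study compares.
    
    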