
    Games technology: console architectures, game engines and invisible interaction

    This presentation looks at three core developments in games technology. First, we examine the architectural foundations on which consoles are built to deliver gaming performance; millions of consoles are sold, and console performance improves with each generation. Next, we look at the cutting-edge features available in game engines: middleware that helps developers build feature-rich games while harnessing the power of the consoles to satisfy gamers. The third part focuses on invisible game interaction. The Nintendo Wii console was an instant success because of the Wiimote, which old and young alike embraced. The Microsoft Kinect pushed the boundary even further: the interaction device is slowly becoming invisible, and the human body becomes the interface. Finally, we look at novel research developments that go beyond current game interaction devices.

    Sensory System for Implementing a Human—Computer Interface Based on Electrooculography

    This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The acquired data are analysed in real time on a microcontroller-based platform running the Linux operating system. A continuous wavelet transform and a neural network are used to process and analyse the signals, yielding highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes.
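    The abstract describes a pipeline in which a continuous wavelet transform extracts features from the EOG signal before classification. As a minimal sketch of that first stage only, the following computes a Ricker-wavelet CWT of a synthetic EOG-like trace (the synthetic signal, sampling rate, and scales are assumptions for illustration, not details from the paper):

```python
import numpy as np

def ricker(points, scale):
    """Ricker (Mexican hat) wavelet sampled at `points` positions."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2 / (np.sqrt(3 * scale) * np.pi ** 0.25)
    return norm * (1 - (t / scale) ** 2) * np.exp(-(t ** 2) / (2 * scale ** 2))

def cwt(signal, scales):
    """Continuous wavelet transform via convolution with scaled Ricker wavelets."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        w = ricker(min(10 * s, len(signal)), s)
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Synthetic EOG trace: a baseline with a step change modelling a horizontal
# saccade at t = 1 s (illustrative; real EOG is noisier and drifts).
fs = 250  # Hz, an assumed sampling rate
t = np.arange(0, 2, 1 / fs)
eog = np.where(t > 1.0, 120.0, 0.0) + np.random.default_rng(0).normal(0, 5, t.size)

coeffs = cwt(eog, scales=[4, 8, 16, 32])
# The saccade onset appears as a large-magnitude coefficient near t = 1 s.
onset = np.argmax(np.abs(coeffs[2])) / fs
print(onset)
```

    In the paper's system, coefficients like these would feed the neural-network classifier; here the step edge simply stands out as the dominant CWT response.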

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions. To design the WEKIT capturing platform and the associated experience-capturing API, we use a systems-engineering methodology that is relevant to different domains (such as aviation, space, and medicine) and different professions (such as technicians, astronauts, and medical staff). Within this methodology, we explore the systems-engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow this one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. We also provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations for the sensors most relevant to the WEKIT project, taking into consideration the environmental, technical, and human factors described in other deliverables. We recommend the Microsoft Hololens (augmented reality glasses), the MyndBand with Neurosky chipset (EEG), the Microsoft Kinect and Lumo Lift (body posture tracking), and the Leapmotion, Intel RealSense, and Myo armband (hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience-capturing system, and recommend that the capturing system have sufficient storage or transmission capabilities.
    Finally, we highlight common issues associated with the use of the different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and experience-capturing API, expediting the selection of the combination of sensors to be used in the first prototype.
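    The deliverable's mapping of low-level capture functions to recommended sensors can be sketched as a simple lookup structure. The sensor names below come from the abstract itself; the dictionary layout and the function keys are illustrative, not the deliverable's actual data model:

```python
# Low-level capture functions mapped to the sensors recommended in the
# deliverable (structure illustrative; sensor names taken from the text).
SENSOR_RECOMMENDATIONS = {
    "augmented_reality_display": ["Microsoft Hololens"],
    "eeg": ["MyndBand (Neurosky chipset)"],
    "body_posture": ["Microsoft Kinect", "Lumo Lift"],
    "hand_gestures": ["Leapmotion", "Intel RealSense", "Myo armband"],
    "gaze": ["customised eye tracker complementing the AR glasses"],
    "voice": ["built-in microphone of the AR glasses"],
}

def sensors_for(function: str) -> list[str]:
    """Return the recommended sensors for a low-level capture function."""
    return SENSOR_RECOMMENDATIONS.get(function, [])

print(sensors_for("body_posture"))
```

    A modular capturing platform, as the deliverable proposes, could swap the entries per deployment without changing the code that consumes them.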

    Eye Gaze Tracking for Human Computer Interaction

    With a growing number of computer devices around us, and the increasing time we spend interacting with such devices, we are strongly interested in finding new interaction methods that ease the use of computers or increase interaction efficiency. Eye tracking is a promising technology for achieving this goal. This thesis researches interaction methods based on eye-tracking technology. After a discussion of the limitations of the eyes regarding accuracy and speed, including a general discussion of Fitts' law, the thesis follows three different approaches to utilising eye tracking for computer input. The first approach researches eye gaze as a pointing device in combination with a touch sensor for multimodal input and presents a method using a touch-sensitive mouse. The second approach examines people's ability to perform gestures with the eyes for computer input, and the separation of gaze gestures from natural eye movements. The third approach deals with the information inherent in the movement of the eyes and its application to assisting the user. The thesis presents a usability tool for recording interaction and gaze activity, and describes algorithms for reading detection. All approaches present results based on user studies conducted with prototypes developed for the purpose.
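    The Fitts' law discussion mentioned in the abstract can be illustrated with the widely used Shannon formulation, MT = a + b * log2(D/W + 1). The coefficients a and b below are placeholder values, not figures from the thesis; in practice they are fitted per device and per user study:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def movement_time(distance: float, width: float,
                  a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time MT = a + b * ID.

    a and b are device-specific regression coefficients; the defaults here
    are illustrative placeholders only.
    """
    return a + b * index_of_difficulty(distance, width)

# Doubling the distance at the same width raises difficulty only
# logarithmically, so movement time grows slowly with distance.
print(movement_time(256, 32))  # ID = log2(9) bits
```

    For eye gaze as a pointing device, such models help quantify the accuracy/speed trade-off the thesis analyses: the eye moves fast (small effective b) but fixates imprecisely (large effective W).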

    EyeMote – Towards Context-Aware Gaming Using Eye Movements Recorded from Wearable Electrooculography

    Physical activity has emerged as a novel input modality for so-called active video games. Input devices such as music instruments, dance mats or the Wii accessories allow for novel ways of interaction and a more immersive gaming experience. In this work we describe how eye movements recognised from electrooculographic (EOG) signals can be used for gaming purposes in three different scenarios. In contrast to common video-based systems, EOG can be implemented as a wearable and light-weight system which allows for long-term use with unconstrained simultaneous physical activity. In a stationary computer game we show that eye gestures of varying complexity can be recognised online with performance equal to a state-of-the-art video-based system. For pervasive gaming scenarios, we show how eye movements can be recognised in the presence of signal artefacts caused by physical activity such as walking. Finally, we describe possible future context-aware games which exploit unconscious eye movements, and show which possibilities this new input modality may open up.
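    A central point of the abstract is separating deliberate eye gestures from natural eye movements. As a toy sketch only (not the paper's method), the following assumes saccades have already been classified into the four directions L/R/U/D and matches the resulting sequence against gesture templates, treating anything else as natural viewing:

```python
# Toy gesture matcher: saccades are assumed to be pre-classified into the
# directions L, R, U, D (the EOG signal processing itself is out of scope).
GESTURES = {
    "RLRL": "pause_game",   # hypothetical gesture-to-command bindings,
    "UDUD": "open_menu",    # not taken from the paper
    "RDLU": "select",
}

def recognise(saccades: str):
    """Return the command for a saccade sequence, or None.

    Natural viewing produces irregular saccade sequences, so only exact
    template matches count as deliberate gestures -- a deliberately strict,
    illustrative rule, not the paper's recognition algorithm.
    """
    return GESTURES.get(saccades)

print(recognise("RLRL"))  # a deliberate gesture
print(recognise("RRUL"))  # likely natural eye movement
```

    Real systems add timing constraints and confidence scores on top of such matching, which is how the paper's approach remains robust to artefacts from walking.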

    Proceedings of the Fifth Mediterranean Conference on Information Systems: Professional Development Consortium

    Collection of position statements by doctoral students and junior faculty in the Professional Development Consortium at the Fifth Mediterranean Conference on Information Systems, Tel Aviv-Yafo.