
    Human gesture classification by brute-force machine learning for exergaming in physiotherapy

    In this paper, a novel approach to human gesture classification on skeletal data is proposed for the application of exergaming in physiotherapy. Unlike existing methods, we propose to use a general classifier, such as Random Forests, to recognize dynamic gestures. The temporal dimension is handled afterwards by majority voting in a sliding window over the consecutive predictions of the classifier. The gestures can have partially similar postures, such that the classifier decides on the dissimilar postures. This brute-force classification strategy is permissible because dynamic human gestures exhibit sufficiently dissimilar postures. Online continuous human gesture recognition can classify dynamic gestures at an early stage, which is a crucial advantage when controlling a game by automatic gesture recognition. Ground truth can also be easily obtained, since all postures in a gesture get the same label, without any discretization into consecutive postures. This way, new gestures can easily be added, which is advantageous in adaptive game development. We evaluate our strategy by leave-one-subject-out cross-validation on a self-captured stealth game gesture dataset and the publicly available Microsoft Research Cambridge-12 Kinect (MSRC-12) dataset. On the first dataset we achieve an excellent accuracy rate of 96.72%. Furthermore, we show that Random Forests perform better than Support Vector Machines. On the second dataset we achieve an accuracy rate of 98.37%, which is on average 3.57% better than existing methods.
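    The classify-then-vote scheme described above is straightforward to prototype. Below is a minimal sketch, assuming each skeletal frame is already encoded as a fixed-size feature vector; the synthetic training data, feature dimensions, and window size are illustrative stand-ins, not values from the paper.

```python
from collections import Counter, deque

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: every frame of a gesture carries the gesture's label.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 60))    # e.g. 20 joints x 3 coordinates per posture
y_train = rng.integers(0, 4, size=1000)  # four synthetic gesture labels

# Train a general-purpose classifier on individual postures (frames).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def classify_stream(frame_stream, window=15):
    """Yield a smoothed gesture label for each incoming skeletal frame."""
    recent = deque(maxlen=window)
    for frame in frame_stream:
        recent.append(clf.predict(frame.reshape(1, -1))[0])
        # Majority vote over the last `window` per-frame predictions
        # handles the temporal dimension after classification.
        yield Counter(recent).most_common(1)[0][0]
```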

    Hardware Interfaces for VR Applications: Evaluation on Prototypes

    Recent developments in VR, with the arrival of new Head-Mounted Displays (HMDs) such as the Oculus Rift and Morpheus, have opened new challenges in the already active research field of Human-Computer Interaction (HCI), exploring new means of communication supported by new hardware devices that respond to body movements and hand position. The paper explores hardware interactivity and VR HMDs through two games designed to use the latest Oculus Rift SDK with alternative methods of hardware communication. A usability evaluation study was conducted with 18 participants, and the results are presented and discussed.

    From ‘hands up’ to ‘hands on’: harnessing the kinaesthetic potential of educational gaming

    Traditional approaches to distance learning and the student learning journey have focused on closing the gap between the experience of off-campus students and their on-campus peers. While many initiatives have sought to embed a sense of community, create virtual learning environments and even build collaborative spaces for team-based assessment and presentations, they are limited by technological innovation in terms of the types of learning styles they support and develop. Mainstream gaming development – such as with the Xbox Kinect and Nintendo Wii – has a strong element of kinaesthetic learning, from early attempts to simulate impact, recoil, velocity and other environmental factors to more sophisticated movement-based games which create a sense of almost total immersion and allow untethered (in a technical sense) interaction with the games’ objects, characters and other players. Likewise, gamification of learning has become a critical focus for learner engagement and its commercialisation, especially through products such as the Wii Fit. As this technology matures, there are strong opportunities for universities to use gaming consoles to embed kinaesthetic learning in the student experience – a learning style which has been largely neglected in the distance education sector. This paper explores the potential impact of these technologies and broadly imagines the possibilities for future innovation in higher education.

    Accessible options for deaf people in e-Learning platforms: technology solutions for sign language translation

    This paper presents a study on potential technology solutions for enhancing the communication process for deaf people on e-learning platforms through translation of Sign Language (SL). Considering SL in its global scope as a spatial-visual language that encompasses not only gestures and hand/forearm movement but also other non-manual markers such as facial expressions, it is necessary to ascertain whether existing technology solutions can be effective options for SL integration on e-learning platforms. Thus, we aim to present a list of potential technology options for the recognition, translation and presentation of SL (and their potential problems) through an analysis of assistive technologies, methods and techniques, and ultimately to contribute to the development of the state of the art and ensure digital inclusion of deaf people in e-learning platforms. The analysis shows that some interesting technology solutions are under research and development for digital platforms in general, but some critical challenges must still be solved, and an effective integration of these technologies in e-learning platforms in particular is still missing.

    Semantic framework for interactive animation generation and its application in virtual shadow play performance.

    Designing and creating complex and interactive animation is still a challenge in the field of virtual reality, which has to handle various aspects of functional requirements (e.g. graphics, physics, AI, multimodal inputs and outputs, and massive data asset management). In this paper, a semantic framework is proposed to model the construction of interactive animation and promote animation asset reuse in a systematic and standardized way. As its ontological implementation, two domain-specific ontologies, for hand-gesture-based interaction and the animation data repository, have been developed in the context of the Chinese traditional shadow play art. Finally, a prototype of an interactive Chinese shadow play performance system using a depth motion-sensing device is presented as a usage example.
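    To make the ontological part concrete, here is a minimal sketch of how such a framework might link a recognized hand gesture to a reusable animation asset, written with rdflib; the vocabulary (sp:triggers, sp:hasClipFile) and asset names are invented for illustration and are not the paper's actual ontology.

```python
# Hypothetical mini-ontology linking hand gestures to animation assets;
# the sp: vocabulary is invented, not taken from the paper.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SP = Namespace("http://example.org/shadowplay#")
g = Graph()

# Assert that a hand-gesture concept triggers a reusable animation asset.
g.add((SP.WaveGesture, RDF.type, SP.HandGesture))
g.add((SP.WaveGesture, SP.triggers, SP.PuppetGreeting))
g.add((SP.PuppetGreeting, SP.hasClipFile, Literal("puppet_greeting.anim")))

# At runtime, resolve the animation clip for a recognized gesture via SPARQL.
query = """
    SELECT ?clip WHERE {
        sp:WaveGesture sp:triggers ?anim .
        ?anim sp:hasClipFile ?clip .
    }
"""
for row in g.query(query, initNs={"sp": SP}):
    print(row.clip)  # -> puppet_greeting.anim
```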

    CGAMES'2009

    Interactive Virtual Reality Fitness Game using Microsoft Kinect (KineFit)

    KineFit is an interactive Virtual Reality (VR) fitness training application that utilizes the Kinect sensor as its main input device. Kinect offers a controller-free gaming experience where users interact with the game environment using their whole body, making the experience more natural and immersive. The game incorporates a gesture-based interface that removes any use of mouse and keyboard as a means to navigate the game environment. Moreover, this project was undertaken to motivate exercise as well as to inculcate a healthy living culture among Malaysians, since obesity has become a critical issue in the country lately. The lack of exercise is one of the primary reasons leading to obesity, which in turn is caused by lack of time, lack of motivation and dull exercise routines. Furthermore, most video games are desktop-based environments that limit the users' movement, thus promoting a sedentary lifestyle. Hence, KineFit was undertaken to change the current attitude and move towards an active lifestyle. Currently, KineFit consists of two exercise modules, but is flexible for expansion, and uses only the Kinect sensor as the input device. The project implemented the incremental model, which offers flexibility in time, requirements, manpower and risks. A total of 14 users participated in two rounds of prototype testing to assess the usability of the system and the users' perception of the game's effect on their motivation, engagement and enjoyment in performing the exercises. The result of the first prototype test is encouraging, as users felt motivated to complete the exercises while having fun at the same time. The users suggested that the modules be refined for better user engagement. The second prototype test improved in terms of engagement, as the exercise modules were changed based on data gathered in the first test. Suggested future work includes adding a jogging module and implementing Artificial Intelligence (AI) for a character that can interact with the user, to give the game more depth, motivation, engagement and, importantly, enjoyment of performing exercise in the comfort of the user's living room.
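    As a rough illustration of how a Kinect-based exercise module can recognize a movement, the sketch below counts jumping-jack-style repetitions from skeleton joint positions; the joint naming, data layout and rising-edge logic are assumptions for illustration, not KineFit's actual implementation.

```python
# Hypothetical repetition counter over a stream of Kinect skeleton frames.
# Each frame maps joint names to (x, y, z) positions; Kinect's y axis points up.
def hands_above_head(joints):
    return (joints["hand_left"][1] > joints["head"][1] and
            joints["hand_right"][1] > joints["head"][1])

def count_reps(frames):
    reps, raised = 0, False
    for joints in frames:
        up = hands_above_head(joints)
        if up and not raised:   # rising edge: hands just moved above the head
            reps += 1
        raised = up
    return reps

# Usage with synthetic frames: arms down, then arms up -> one repetition.
down = {"head": (0, 1.6, 2), "hand_left": (-0.3, 1.0, 2), "hand_right": (0.3, 1.0, 2)}
up_ = {"head": (0, 1.6, 2), "hand_left": (-0.3, 1.9, 2), "hand_right": (0.3, 1.9, 2)}
print(count_reps([down, up_, down]))  # -> 1
```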

    Framework of controlling 3D virtual human emotional walking using BCI

    A Brain-Computer Interface (BCI) is a device that can read and acquire brain activity. The human body is controlled by brain signals, which act as its main controller; human emotions and thoughts are translated by the brain through these signals and expressed as mood. Brain signals are the key component of the electroencephalogram (EEG). Using signal processing, the features representing human mood (behaviour) can be extracted, with emotion as a major feature. This paper proposes a new framework for recognizing human inner emotions on the basis of EEG signals using a BCI device as controller. The framework goes through five steps: reading the brain signal, classifying it to obtain the emotion, mapping the emotion, synchronizing the animation of the 3D virtual human, and testing and evaluating the work. To the best of our knowledge, there is no existing framework for controlling the 3D virtual human in this way. Implementing our framework will benefit games by enabling control of the emotional walking of 3D virtual humans and bringing more realism. Commercial games and Augmented Reality systems are possible beneficiaries of this technique. © 2015 Penerbit UTM Press. All rights reserved.
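    The five-step flow lends itself to a thin software pipeline. The sketch below is a minimal, assumed implementation: band-power features from an EEG window, a pre-trained classifier standing in for the emotion-recognition step, and an invented emotion-to-gait mapping for the 3D character; none of these names or parameters come from the paper.

```python
import numpy as np

# Invented mapping from recognized emotion to walking-style parameters.
EMOTION_TO_GAIT = {
    "happy": {"speed": 1.3, "posture": "upright"},
    "sad":   {"speed": 0.7, "posture": "slumped"},
}

def band_power(eeg_window, fs=128):
    """Crude alpha/beta band-power features from an EEG window of shape
    (channels, samples); an assumed feature set, not the paper's."""
    spectrum = np.abs(np.fft.rfft(eeg_window, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(eeg_window.shape[-1], d=1.0 / fs)
    alpha = spectrum[..., (freqs >= 8) & (freqs < 13)].mean(axis=-1)
    beta = spectrum[..., (freqs >= 13) & (freqs < 30)].mean(axis=-1)
    return np.stack([alpha, beta], axis=-1).ravel()

def drive_walk(eeg_window, classifier):
    """Steps 1-4: read and classify the signal, map the emotion,
    and return gait parameters for the animation system."""
    features = band_power(eeg_window).reshape(1, -1)
    emotion = classifier.predict(features)[0]
    return EMOTION_TO_GAIT[emotion]
```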

    Hand features extractor using hand contour – a case study

    Hand gesture recognition is an important topic in natural user interfaces (NUI). Hand feature extraction is the first step of hand gesture recognition. This work proposes a novel real-time method for hand feature extraction. In our framework we use three cameras, and the hand region is extracted with a background subtraction method. Features such as arm angle and finger positions are calculated using Y variations in the vertical contour image. Wrist detection is obtained by calculating the largest distance between a baseline and the hand contour, giving the main features for hand gesture recognition. Experiments on our own dataset of about 1800 images show that our method performs well and is highly efficient.
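    A minimal OpenCV sketch of this kind of contour pipeline is shown below: background subtraction, extraction of the largest contour as the hand region, and selection of the contour point farthest from a baseline. The baseline choice (the bottom image row) and the single-camera setup are simplifying assumptions, not the paper's three-camera method.

```python
import cv2
import numpy as np

# Background subtractor standing in for the paper's subtraction step.
subtractor = cv2.createBackgroundSubtractorMOG2()

def hand_features(frame):
    """Return the hand contour and the contour point farthest from the
    baseline (assumed here to be the bottom row of the image)."""
    mask = subtractor.apply(frame)                 # foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)      # largest blob = hand region
    pts = hand.reshape(-1, 2)                      # (x, y) contour points
    baseline_y = frame.shape[0] - 1
    distances = baseline_y - pts[:, 1]             # vertical distance to baseline
    far_point = pts[np.argmax(distances)]          # candidate feature point
    return {"contour": pts, "far_point": tuple(far_point)}
```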