
    Using a serious game to assess spatial memory in children and adults

    Short-term spatial memory has traditionally been assessed using visual stimuli rather than auditory stimuli. In this paper, we design and test a serious game with auditory stimuli for assessing short-term spatial memory. Interaction is achieved through gestures (raising the arms), and the auditory stimuli are emitted by smart devices placed at different locations. A total of 70 participants (32 children and 38 adults) took part in the study, and the outcomes obtained with our game were compared with traditional methods. The adults scored significantly higher in the game than the children, which is consistent with the assumption that this ability increases continuously during maturation. Correlations were found between our game and traditional methods, suggesting its validity for assessing spatial memory. Both groups easily learned how to perform the task and were good at recalling the locations of sounds emitted from different positions. With regard to satisfaction with our game, the children's mean scores were higher for nearly all of the questions, and the mean scores for all but one of the questions were greater than 4 on a scale from 1 to 5, indicating that the participants were satisfied with the game. These results suggest that our game promotes engagement and allows spatial memory to be assessed in an ecological way.
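
    The abstract gives no implementation details, but the task it describes maps onto a Corsi-style sequence-recall procedure. Below is a minimal, hypothetical Python sketch of that scoring logic; the function names, the serial-position scoring rule, and the span criterion are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: a sequence of sound locations is presented, the
    # participant reproduces it by gesturing at the devices, and the score
    # is the longest sequence reproduced without error.
    from typing import List, Tuple

    def score_trial(presented: List[int], recalled: List[int]) -> int:
        """Count locations recalled in the correct serial position."""
        return sum(p == r for p, r in zip(presented, recalled))

    def spatial_span(trials: List[Tuple[List[int], List[int]]]) -> int:
        """Longest sequence reproduced without error (Corsi-style span)."""
        best = 0
        for presented, recalled in trials:
            if score_trial(presented, recalled) == len(presented):
                best = max(best, len(presented))
        return best

    if __name__ == "__main__":
        trials = [([2, 5], [2, 5]),
                  ([1, 4, 3], [1, 4, 3]),
                  ([0, 2, 5, 1], [0, 2, 1, 5])]
        print(spatial_span(trials))  # -> 3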

    3D Hand gesture recognition using a ZCam and an SVM-SMO classifier

    The increasing number of new and complex computer-based applications has created a need for a more natural interface between human users and computers. Hand gestures, one of the most natural means of communication between human beings, can meet this need. The difficulty of deploying a computer vision-based gesture application in a non-controlled environment can be overcome by using new hardware that captures 3D information; however, researchers still need complete solutions for reliable gesture recognition in such an environment. This paper presents a complete solution to the one-hand 3D gesture recognition problem, implements it, and demonstrates its reliability. The solution is complete because it addresses both the 3D gesture recognition itself and the understanding of the scene being presented, so the user does not need to inform the system that he or she is about to initiate a new gesture. The selected approach models a gesture as a sequence of hand poses, which reduces the problem to recognizing the series of hand poses and building the gesture from this information. Additionally, the need to perform gesture recognition in real time led to a simple feature set that keeps the required processing as streamlined as possible. Finally, the hand gesture recognition system proposed here was successfully implemented in two applications, one developed by a completely independent team and one developed as part of this research. The latter effort resulted in a device driver that adds 3D gestures to Sparsh-UI, an open-source, platform-independent multi-touch framework.
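
    Since the abstract describes gestures modelled as sequences of per-frame hand poses, here is a minimal sketch of that pipeline using scikit-learn's SVC, whose underlying libsvm solver is SMO-based, matching the SVM-SMO classifier named in the title. The feature vectors, pose labels, and gesture lexicon are illustrative assumptions; the paper's actual depth-based features from the ZCam are not reproduced here.

    # Minimal sketch: an SVM labels the hand pose in each frame, and a
    # gesture is read off as the ordered sequence of distinct pose labels.
    # All data and names here are toy assumptions for illustration.
    from itertools import groupby
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(300, 16))        # stand-in per-frame features
    y_train = rng.integers(0, 4, size=300)      # stand-in pose ids

    pose_clf = SVC(kernel="rbf")                # libsvm's SMO-based solver
    pose_clf.fit(X_train, y_train)

    def recognize_gesture(frame_features: np.ndarray) -> tuple:
        """Classify each frame's pose, then collapse repeats into a sequence."""
        poses = pose_clf.predict(frame_features)
        return tuple(p for p, _ in groupby(poses))

    # A gesture lexicon would map pose sequences to gesture names, e.g.:
    GESTURES = {(2, 0, 3): "swipe-left"}
    sequence = recognize_gesture(rng.normal(size=(25, 16)))
    print(GESTURES.get(sequence, "unknown"))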

    Hand gesture modelling and recognition involving changing shapes and trajectories, using a Predictive EigenTracker

    We present a novel eigenspace-based framework for modelling a dynamic hand gesture that incorporates both hand shape and trajectory information. We address the problem of choosing a gesture set that models an upper bound on gesture recognition efficiency, and we show encouraging experimental results on such a representative set.
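
    As a rough illustration of the eigenspace idea underlying such a framework, the sketch below builds a PCA basis ("eigen-hands") from hand-shape patches and uses reconstruction error to judge how well a new shape fits the learned appearance model. This is a generic PCA demo on assumed data, not the Predictive EigenTracker itself, which additionally predicts the tracked region's parameters from frame to frame.

    # Generic eigenspace sketch: project a hand-shape image onto a PCA
    # basis; low reconstruction error means the shape is well explained
    # by the learned appearance model. Data here are random stand-ins.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    hand_images = rng.random((200, 32 * 32))    # stand-in cropped hand patches

    eigenspace = PCA(n_components=10).fit(hand_images)

    def reconstruction_error(image: np.ndarray) -> float:
        """Distance between an image and its projection into the eigenspace."""
        coeffs = eigenspace.transform(image[None, :])
        recon = eigenspace.inverse_transform(coeffs)
        return float(np.linalg.norm(image - recon[0]))

    print(reconstruction_error(hand_images[0]))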

    Computational Models for the Automatic Learning and Recognition of Irish Sign Language

    This thesis presents a framework for the automatic recognition of Sign Language sentences. Previous sign language recognition work has not fully addressed the issues of user-independent recognition, movement epenthesis modeling, and automatic or weakly supervised training within a single recognition framework. This work presents three main contributions to address these issues. The first contribution is a technique for user-independent hand posture recognition: we present a novel eigenspace Size Function feature, which is used to perform user-independent recognition of sign language hand postures. The second contribution is a framework for the classification and spotting of the spatiotemporal gestures that appear in sign language: we propose a Gesture Threshold Hidden Markov Model (GT-HMM) to classify gestures and to identify movement epenthesis (the transitional movement between signs) without the need for explicit epenthesis training. The third contribution is a framework for training the hand posture and spatiotemporal models using only the weak supervision of sign language videos and their corresponding text translations. This is achieved through our proposed Multiple Instance Learning Density Matrix algorithm, which automatically extracts isolated signs from full sentences using the weak and noisy supervision of text translations; the automatically extracted isolated samples are then used to train our spatiotemporal gesture and hand posture classifiers. The work presented in this thesis is a significant contribution to the area of natural sign language recognition, as it proposes a robust framework for training a recognition system without the need for manual labeling.
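
    To make the threshold-model idea behind the GT-HMM concrete, here is a hedged Python sketch using hmmlearn: one HMM is trained per gesture plus a catch-all "threshold" model, and a sequence is spotted as a gesture only if some gesture model's log-likelihood beats the threshold model's; otherwise it is treated as movement epenthesis. The toy data and the way the threshold model is trained here are simplifying assumptions; the thesis constructs its threshold model differently.

    # Threshold-model sketch: accept the best-scoring gesture HMM only if
    # it outscores a generic "threshold" model of arbitrary movement.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(2)

    def train_hmm(sequences):
        """Fit a small Gaussian HMM on a list of (T, D) observation sequences."""
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        model = GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
        model.fit(X, lengths)
        return model

    # Toy data: two gesture classes plus broad "any movement" sequences.
    gesture_models = {g: train_hmm([rng.normal(loc=g, size=(30, 2)) for _ in range(5)])
                      for g in range(2)}
    threshold_model = train_hmm([rng.normal(scale=3.0, size=(30, 2)) for _ in range(10)])

    def spot(sequence: np.ndarray):
        """Return the best gesture label, or None for movement epenthesis."""
        scores = {g: m.score(sequence) for g, m in gesture_models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > threshold_model.score(sequence) else None

    print(spot(rng.normal(loc=1, size=(30, 2))))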