3 research outputs found

    GEMINI: A Generic Multi-Modal Natural Interface Framework for Videogames

    In recent years, videogame companies have recognized player engagement as a major factor in user experience and enjoyment. This has encouraged greater investment in new types of game controllers, such as the Wiimote, Rock Band instruments and the Kinect. However, the native software of these controllers was not originally designed to be used in other game applications. This work addresses the issue by building a middleware framework that maps body poses or voice commands to actions in any game. This not only affords a more natural and customized user experience but also defines an interoperable virtual controller. In this version of the framework, body poses and voice commands are recognized through the Kinect's built-in cameras and microphones, respectively. The acquired data is then translated into the game's native interaction scheme in real time using a lightweight method based on spatial restrictions. The system can also use Nintendo's Wiimote as an auxiliary, unobtrusive gamepad for commands that are impractical to express physically or verbally. System validation was performed by analyzing the performance of certain tasks and examining user reports; both confirmed this approach as a practical and appealing alternative to the game's native interaction scheme. In sum, this framework provides a game-controlling tool that is fully customizable and highly flexible, thus broadening the market of game consumers.
    Comment: WorldCIST'13 International Conference
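
    The mapping from recognized poses to native game actions can be pictured as a set of spatial restrictions, i.e., constraints on where one skeleton joint lies relative to another. The Python sketch below illustrates that idea; the joint names, the SpatialRule class and the "jump" rule are hypothetical illustrations, not the GEMINI framework's actual API or the Kinect SDK.

    ```python
    # Minimal sketch of pose-to-action mapping via spatial restrictions.
    # All names here are illustrative assumptions, not GEMINI's API.
    from dataclasses import dataclass

    Vec3 = tuple[float, float, float]  # (x, y, z) in meters; y assumed up

    @dataclass
    class SpatialRule:
        """A pose matches when a joint lies in a box relative to a reference joint."""
        joint: str
        reference: str
        min_offset: Vec3
        max_offset: Vec3
        action: str  # native game command to emit

        def matches(self, skeleton: dict[str, Vec3]) -> bool:
            jx, jy, jz = skeleton[self.joint]
            rx, ry, rz = skeleton[self.reference]
            offsets = (jx - rx, jy - ry, jz - rz)
            return all(lo <= d <= hi for d, lo, hi in
                       zip(offsets, self.min_offset, self.max_offset))

    # Hypothetical rule: right hand raised 0.3-1.0 m above the head -> "jump"
    RULES = [
        SpatialRule("hand_right", "head",
                    min_offset=(-0.5, 0.3, -0.5),
                    max_offset=(0.5, 1.0, 0.5),
                    action="jump"),
    ]

    def translate(skeleton: dict[str, Vec3]) -> list[str]:
        """Map the current skeleton frame to the native game actions it triggers."""
        return [rule.action for rule in RULES if rule.matches(skeleton)]

    skeleton = {"head": (0.0, 1.7, 2.0), "hand_right": (0.1, 2.1, 2.0)}
    print(translate(skeleton))  # -> ['jump']
    ```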

    A LITERATURE STUDY ON HUMAN MOTION ANALYSIS USING DEPTH IMAGERY

    Analysis of human behavior through visual information is a highly active research topic in the computer vision community. In the literature, this analysis has traditionally been performed on images from conventional cameras; recently, however, depth sensors have been used to obtain a new type of image known as the depth image. Human motion analysis can be applied to a wide range of domains, such as security surveillance in public spaces, shopping centers and airports. Home care for elderly people and children can use live video streaming from an integrated home monitoring system to prompt timely assistance. Moreover, automatic human motion analysis can be used in Human-Computer/Robot Interaction (HCI/HRI), video retrieval, virtual reality, computer gaming and many other fields. Human motion analysis using a depth sensor is still a new research area; most work has focused on motion capture of articulated body skeletons, although the research community is showing growing interest in higher-level, action-related research. This report explains the advantages of depth imagery and then describes the new category of depth sensors, such as the Microsoft Kinect, that are available to obtain depth images; thanks to such devices, high-resolution real-time depth images are cheaply available. The main published research on the use of depth imagery for analyzing human activity is reviewed. A growing research area is the recognition of human actions, and hence the existing work focuses mainly on body part detection and pose estimation. The publicly available datasets that include depth imagery are listed in this report, and the software libraries available for depth sensors are described. With the development of depth sensors, an increasing number of algorithms have employed depth data in vision-based human action recognition, and the increasing availability of these sensors is broadening the scope for future research. This report provides an overview of this emerging field, followed by the various vision-based algorithms used for human motion analysis.
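
    To make the appeal of depth imagery concrete, the sketch below shows one elementary building block that much of the surveyed work relies on: segmenting a moving person from a depth frame by thresholding its difference against a static depth background. The motion_mask function, the frame shapes, the millimeter units and the synthetic data are assumptions for illustration, not any specific paper's method.

    ```python
    # A minimal sketch of depth-based foreground segmentation, assuming
    # Kinect-style HxW depth frames in millimeters with 0 = invalid pixel.
    import numpy as np

    def motion_mask(depth: np.ndarray, background: np.ndarray,
                    min_diff_mm: float = 100.0) -> np.ndarray:
        """Return a boolean mask of pixels whose depth changed by > min_diff_mm."""
        valid = (depth > 0) & (background > 0)
        diff = np.abs(depth.astype(np.float32) - background.astype(np.float32))
        return valid & (diff > min_diff_mm)

    # Usage with synthetic data standing in for real sensor frames:
    bg = np.full((480, 640), 3000, dtype=np.uint16)   # empty room at 3 m
    frame = bg.copy()
    frame[100:400, 200:300] = 1500                    # person at 1.5 m
    mask = motion_mask(frame, bg)
    print(mask.sum(), "foreground pixels")            # -> 30000 foreground pixels
    ```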

    Pedestrian dead reckoning employing simultaneous activity recognition cues

    We consider the human localization problem using body-worn inertial/magnetic sensor units. Inertial sensors are characterized by a drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensors are reliable over only short periods of time, so position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user can provide information about his position. In particular, switches in the activity context correspond to discrete locations on the map. By performing localization simultaneously with activity recognition, we detect the activity-context switches and use the corresponding position information as position updates in a localization filter. The localization filter also involves a smoother that combines the two estimates obtained by running the zero-velocity update algorithm both forward and backward in time. We performed experiments with eight subjects in indoor and outdoor environments involving walking, turning and standing activities. Using a spatial error criterion, we show that position errors can be decreased by about 85% on average. We also present the results of two 3D experiments performed in realistic indoor environments and demonstrate that it is possible to achieve over 90% error reduction in position by performing localization simultaneously with activity recognition.
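
    The forward-backward smoother mentioned above can be sketched as follows: the zero-velocity update (ZUPT) algorithm is run over a segment between two position updates once forward and once backward in time, and the two position tracks are blended so that drift is anchored at both ends. The linear blending weight in this Python sketch is an illustrative assumption; the paper's smoother may combine the passes differently.

    ```python
    # Minimal sketch of blending forward- and backward-in-time ZUPT tracks.
    # The linear weight is an assumption, not the paper's exact scheme.
    import numpy as np

    def blend_forward_backward(p_fwd: np.ndarray, p_bwd: np.ndarray) -> np.ndarray:
        """Combine forward and backward position tracks over one segment.

        p_fwd, p_bwd: (N, 3) position estimates between two position updates
        (e.g., two activity-context switches). Early samples trust the forward
        pass, where its drift has not yet accumulated; late samples trust the
        backward pass, for the symmetric reason.
        """
        n = len(p_fwd)
        w = np.linspace(0.0, 1.0, n)[:, None]  # 0 -> forward, 1 -> backward
        return (1.0 - w) * p_fwd + w * p_bwd

    # Toy segment: the backward track disagrees early, where it has drifted.
    n = 5
    p_f = np.cumsum(np.full((n, 3), 0.1), axis=0)    # forward track
    p_b = p_f + np.linspace(0.2, 0.0, n)[:, None]    # backward track
    print(blend_forward_backward(p_f, p_b))
    ```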