
    Recognizing complex gestures via natural interfaces

    Natural interfaces have revolutionized the way we interact with computers. In many fields they provide a comfortable and efficient mechanism that requires neither computer knowledge nor artificial controlling devices, but instead allows us to interact via natural gestures. Diverse fields such as entertainment, remote control, medicine and fitness are seeing improvements with the introduction of this technology. However, most of these sensorial interfaces only provide support for basic gestures. In this work we show how it is possible to construct your own complex gestures using the underlying capabilities of these sensor devices. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tec
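    The abstract's idea of building complex gestures on top of a sensor's basic capabilities can be sketched as composing simple per-frame predicates into a small state machine. This is an illustrative sketch, not the paper's implementation; the joint names, coordinate convention (larger y = higher) and thresholds are assumptions.

```python
# Minimal sketch: a "raise both hands, then clap" gesture composed from
# two basic predicates over Kinect-style per-frame joint positions.
# Joint names and thresholds are illustrative assumptions.

def hands_above_head(frame):
    # frame maps joint name -> (x, y, z); larger y means higher up
    return (frame["hand_left"][1] > frame["head"][1] and
            frame["hand_right"][1] > frame["head"][1])

def hands_together(frame, max_dist=0.15):
    lx, ly, lz = frame["hand_left"]
    rx, ry, rz = frame["hand_right"]
    return ((lx - rx) ** 2 + (ly - ry) ** 2 + (lz - rz) ** 2) ** 0.5 < max_dist

def detect_raise_then_clap(frames):
    """A complex gesture as a sequence of simple predicates over frames."""
    state = "idle"
    for frame in frames:
        if state == "idle" and hands_above_head(frame):
            state = "raised"
        elif state == "raised" and hands_together(frame):
            return True
    return False
```

    More elaborate gestures follow the same pattern: each basic capability the sensor exposes becomes a predicate, and sequencing them yields the complex gesture.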

    Full-body motion-based game interaction for older adults

    Older adults in nursing homes often lead sedentary lifestyles, which reduces their life expectancy. Full-body motion-control games provide an opportunity for these adults to remain active and engaged; however, these games are not designed with age-related impairments in mind, which prevents them from being leveraged to increase the activity levels of older adults. In this paper, we present two studies aimed at developing game design guidelines for full-body motion controls for older adults experiencing age-related changes and impairments. Our studies also demonstrate how full-body motion-control games can accommodate a variety of user abilities and have a positive effect on mood and, by extension, the emotional well-being of older adults. Based on our studies, we present seven guidelines for the design of full-body interaction in games. The guidelines are designed to foster safe physical activity among older adults, thereby increasing their quality of life. Copyright 2012 ACM

    A natural user interface architecture using gestures to facilitate the detection of fundamental movement skills

    Fundamental movement skills (FMSs) are considered one of the essential phases of motor skill development. The proper development of FMSs allows children to participate in more advanced forms of movement and sport. To perform an FMS correctly, children need to learn the right way of performing it. By making use of technology, a system can be developed to help facilitate the learning of FMSs. The objective of the research was to propose an effective natural user interface (NUI) architecture for detecting FMSs using the Kinect. To achieve this objective, an investigation into FMSs and the challenges faced when teaching them was presented. An investigation into NUIs was also presented, including the merits of the Kinect as the most appropriate device to facilitate the detection of an FMS. An NUI architecture was proposed that uses the Kinect to facilitate the detection of an FMS, and a framework was implemented from the design of the architecture. The successful implementation of the framework provides evidence that the design of the proposed architecture is feasible. An instance of the framework incorporating the jump FMS was used as a case study in the development of a prototype that detects the correct and incorrect performance of a jump. The evaluation of the prototype proved the following:
    - the developed prototype was effective in detecting the correct and incorrect performance of the jump FMS; and
    - the implemented framework was robust for the incorporation of an FMS.
    The successful implementation of the prototype shows that an effective NUI architecture using the Kinect can be used to facilitate the detection of FMSs. The proposed architecture provides a structured way of developing a system using the Kinect to detect FMSs, allowing developers to add future FMSs to the system.
    This dissertation therefore makes the following contributions:
    - an experimental design to evaluate the effectiveness of a prototype that detects FMSs;
    - a robust framework that incorporates FMSs; and
    - an effective NUI architecture to facilitate the detection of fundamental movement skills using the Kinect.
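    The jump case study can be illustrated with a toy classifier over skeletal data. This is a hedged sketch under assumptions of our own, not the dissertation's detection logic: we reduce a jump to the vertical trajectory of a single spine joint, with an arbitrary rise threshold.

```python
# Illustrative sketch (assumed, not from the dissertation): classify a jump
# attempt from the height of a tracked spine joint over time.

def detect_jump(heights, rise_threshold=0.12):
    """heights: spine-joint y-coordinate per frame, in metres.
    A 'correct' jump here simply means the body rose at least
    rise_threshold above its starting height and came back down."""
    baseline = heights[0]
    peak = max(heights)
    landed = heights[-1] < baseline + rise_threshold / 2
    return (peak - baseline) >= rise_threshold and landed
```

    A real FMS detector would also check limb coordination (arm swing, knee bend), which is what makes an architecture that structures such per-skill rules worthwhile.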

    RGBD Datasets: Past, Present and Future

    Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. Comment: 8 pages excluding references (CVPR style)

    From ‘hands up’ to ‘hands on’: harnessing the kinaesthetic potential of educational gaming

    Traditional approaches to distance learning and the student learning journey have focused on closing the gap between the experience of off-campus students and their on-campus peers. While many initiatives have sought to embed a sense of community, create virtual learning environments and even build collaborative spaces for team-based assessment and presentations, they are limited by technological innovation in terms of the types of learning styles they support and develop. Mainstream gaming development, such as with the Xbox Kinect and Nintendo Wii, has a strong element of kinaesthetic learning, from early attempts to simulate impact, recoil, velocity and other environmental factors to the more sophisticated movement-based games which create a sense of almost total immersion and allow untethered (in a technical sense) interaction with the games’ objects, characters and other players. Likewise, the gamification of learning and its commercialisation, especially through products such as the Wii Fit, has become a critical focus for the engagement of learners. As this technology matures, there are strong opportunities for universities to use gaming consoles to embed levels of kinaesthetic learning into the student experience, a learning style which has been largely neglected in the distance education sector. This paper explores the potential impact of these technologies and broadly imagines the possibilities for future innovation in higher education.

    Real Time Interactive Presentation Apparatus based on Depth Image Recognition

    This research concerns human-computer interaction, where the aim is to move beyond conventional interaction towards natural interaction. The Kinect is one of the tools able to provide users with a Natural User Interface (NUI). It can track hand gestures and interpret actions according to the depth data stream; the human hand is tracked as a point cloud and synchronized simultaneously. The method starts by collecting the depth image, which is analyzed by a random decision forest algorithm. The algorithm chooses a set of thresholds and split features, then provides body-skeleton information. In this project, hand gestures are divided into several actions; for example, waving to the right or left of the head position is interpreted as moving to the next or previous slide, with the wave measured as a degree value using the head as the center point. Moreover, a pushing action triggers a pop-up window for a specific slide containing more detailed information. The result of the implementation is quite promising: users can control PowerPoint and are even able to design the presentation in different ways. Furthermore, we also present a new style of presentation using a WPF form connected to a database as a dynamic presentation tool.
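    The mapping from a wave, measured as an angle about the head, to a slide command can be sketched as follows. This is a hypothetical illustration of the abstract's idea; the angle convention, the 45-degree threshold and the function names are assumptions, not the paper's values.

```python
import math

# Sketch (assumed): measure the hand's angle about the head in the frontal
# plane and map it to slide-navigation commands.

def hand_angle_deg(head, hand):
    """Angle of the hand around the head in the (x, y) plane:
    0 degrees points straight up, positive toward the user's right."""
    dx = hand[0] - head[0]
    dy = hand[1] - head[1]
    return math.degrees(math.atan2(dx, dy))

def slide_command(head, hand, threshold=45.0):
    """Return 'next' for a wave to the right of the head, 'previous'
    for a wave to the left, or None when the hand is near vertical."""
    angle = hand_angle_deg(head, hand)
    if angle > threshold:
        return "next"
    if angle < -threshold:
        return "previous"
    return None
```

    In practice the command would only fire once per wave, e.g. by debouncing over consecutive frames, so one gesture does not skip several slides.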

    Using Motion Controllers in Virtual Conferencing

    At the end of 2010 Microsoft released a new controller for the Xbox 360 called Kinect. Unlike ordinary video game controllers, the Kinect works by detecting the positions and movements of a user’s entire body, using data from a sophisticated camera that can measure the distance between itself and each point in the image it captures. The Kinect is essentially a low-cost, widely available motion capture system. Because of this, almost immediately many individuals put the device to use in a wide variety of applications beyond video games. This thesis investigates one such use: virtual meetings. Virtual meetings are a means of holding a meeting between multiple individuals in multiple locations over the internet, akin to teleconferencing or video conferencing. Their defining factor is that they take place in a virtual world rendered with 3D graphics, with each participant controlling a virtual representation of themselves called an avatar. Previous research into virtual reality in general has shown the potential for people to feel highly immersed, experiencing a feeling of really ‘being there’. However, previous work on virtual meetings has found that existing interfaces for interacting with virtual meeting software can interfere with this experience of ‘being there’, and has identified other shortcomings of existing virtual meeting solutions. This thesis investigates how the Kinect can be used to overcome the limitations of existing virtual meeting software and interfaces. It includes a detailed description of the design and development of a piece of software created to demonstrate possible uses of the Kinect in this area, and discusses the results of real-world testing with that software, evaluating the usefulness of the Kinect when applied to virtual meetings.

    Presentation Trainer, your Public Speaking Multimodal Coach

    A paper describing an experiment on the Presentation Trainer. The Presentation Trainer is a multimodal tool designed to support the practice of public speaking skills by giving the user real-time feedback about different aspects of her nonverbal communication. It tracks the user’s voice and body to interpret her current performance. Based on this performance, the Presentation Trainer selects the type of intervention that will be presented as feedback to the user. This feedback mechanism has been designed taking into consideration the results of previous studies that show how difficult it is for learners to perceive and correctly interpret real-time feedback while practicing their speeches. In this paper we present the user experience evaluation of participants who used the Presentation Trainer to practice for an elevator pitch, showing that the feedback provided by the Presentation Trainer has a significant influence on learning. The underlying research project is partly funded by the METALOGUE project. METALOGUE is a Seventh Framework Programme collaborative project funded by the European Commission, grant agreement number: 611073 (http://www.metalogue.eu)
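    The kind of intervention selection the abstract describes, where at most one piece of feedback is shown so the speaker is not overloaded, can be sketched as a prioritized rule list. The rule names, thresholds and performance-snapshot fields below are illustrative assumptions, not the Presentation Trainer's actual rules.

```python
# Hedged sketch of rule-based intervention selection: evaluate prioritized
# rules against a snapshot of the speaker's performance and return at most
# one feedback cue. All rules and thresholds here are invented for illustration.

RULES = [  # (intervention name, trigger predicate), highest priority first
    ("reset_posture", lambda p: p["arms_crossed"]),
    ("speak_louder",  lambda p: p["volume"] < 0.3),
    ("slow_down",     lambda p: p["words_per_minute"] > 170),
]

def select_intervention(performance):
    """Return the highest-priority triggered intervention, or None,
    so the user never receives several real-time cues at once."""
    for name, triggered in RULES:
        if triggered(performance):
            return name
    return None
```

    Returning a single cue per interval reflects the cited finding that learners struggle to interpret simultaneous real-time feedback while speaking.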