
    Teaching Introductory Programming Concepts through a Gesture-Based Interface

    Computer programming is an integral part of a technology-driven society, so there is a tremendous need to teach programming to a wider audience. One of the challenges in meeting this demand for programmers is that most traditional computer programming classes are targeted at university/college students with strong math backgrounds. To expand the computer programming workforce, we need to encourage a wider range of students to learn about programming. The goal of this research is to design and implement a gesture-driven interface to teach computer programming to young and non-traditional students. We designed our user interface based on feedback from students attending the College of Engineering summer camps at the University of Arkansas. Our system uses the Microsoft Xbox Kinect to capture the movements of new programmers as they use our system. Our software then tracks and interprets student hand movements in order to recognize specific gestures which correspond to different programming constructs, and uses this information to create and execute programs using the Google Blockly visual programming framework. We focus on various gesture recognition algorithms to interpret user data as specific gestures, including template matching, sector quantization, and supervised machine learning clustering algorithms.
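
    The abstract names template matching among the recognition approaches. As a rough illustration (not the authors' implementation), a minimal template matcher over 2D hand trajectories could look like the sketch below; the gesture templates, resampling size, threshold, and Blockly mapping are all hypothetical:

        import numpy as np

        def resample(points, n=32):
            """Resample a 2D hand trajectory to n evenly spaced points along its arc length."""
            points = np.asarray(points, dtype=float)
            seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
            t = np.concatenate([[0.0], np.cumsum(seg)])
            t /= t[-1]
            u = np.linspace(0.0, 1.0, n)
            return np.stack([np.interp(u, t, points[:, k]) for k in range(2)], axis=1)

        def normalize(points):
            """Translate to the centroid and scale to unit size for position/scale invariance."""
            p = points - points.mean(axis=0)
            return p / (np.abs(p).max() + 1e-9)

        def match_gesture(trajectory, templates, threshold=0.25):
            """Return the best-matching template name, or None if nothing is close enough."""
            probe = normalize(resample(trajectory))
            best_name, best_dist = None, np.inf
            for name, tmpl in templates.items():
                dist = np.linalg.norm(probe - normalize(resample(tmpl)), axis=1).mean()
                if dist < best_dist:
                    best_name, best_dist = name, dist
            return best_name if best_dist < threshold else None

        # Hypothetical gesture-to-construct mapping, e.g. onto Google Blockly blocks.
        templates = {
            "loop": [(0, 0), (1, 0)],          # horizontal swipe -> repeat block
            "if":   [(0, 1), (0, 0), (1, 0)],  # "L" shape -> conditional block
        }
        print(match_gesture([(0.1, 0.0), (0.5, 0.02), (0.9, -0.01)], templates))  # "loop"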

    A kinect game in the VirtualSign project: training and learning with gestures

    Paper presented at the 7th International Conference on Digital Art, held in Óbidos, 19-20 March 2015. This paper presents the development of a game aimed at making the process of learning sign language enjoyable and interactive, using the VirtualSign Translator. In this game the player controls a character that interacts with various objects and non-player characters with the aim of collecting several gestures from the Portuguese Sign Language. Through the connection with the VirtualSign Translator, the data gloves and Kinect support this interaction, and the character can then represent the gestures. This allows the user to visualize and learn or train the various existing configurations of gestures. To improve interactivity and to make the game more interesting and motivating, several checkpoints were placed along the game levels. These give players a chance to test the knowledge they have acquired so far by performing the signs using the Kinect at each checkpoint. A High Scores system was also created, as well as a History option, to ensure that the game is a continuous and motivating learning process.

    Natural User Interfaces for Virtual Character Full Body and Facial Animation in Immersive Virtual Worlds

    In recent years, networked virtual environments have steadily grown to become a frontier in social computing. Such virtual cyberspaces are usually accessed by multiple users through their 3D avatars. Recent scientific activity has resulted in the release of both hardware and software components that enable users at home to interact with their virtual persona through natural body and facial activity performance. Based on 3D computer graphics methods and vision-based motion tracking algorithms, these techniques aspire to reinforce the sense of autonomy and telepresence within the virtual world. In this paper we present two distinct frameworks for avatar animation through the user's natural motion input. We specifically target the full-body avatar control case using a Kinect sensor via a simple, networked skeletal joint retargeting pipeline, as well as an intuitive 3D reconstruction pipeline for user facial animation that renders highly realistic facial puppets. Furthermore, we present a common networked architecture that enables multiple remote clients to capture and render any number of 3D animated characters within a shared virtual environment.
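
    The full-body pipeline described here retargets Kinect skeletal joints onto an avatar and streams the result to networked clients. A hedged sketch of what such a step can look like follows; the joint map, JSON wire format, and port are illustrative assumptions, not the paper's actual protocol:

        import json
        import socket

        # Hypothetical Kinect-joint -> avatar-bone map; a real rig needs a per-model table.
        JOINT_MAP = {"SpineBase": "Hips", "ShoulderLeft": "LeftArm", "ElbowLeft": "LeftForeArm"}

        def retarget(kinect_joints):
            """Rename tracked joints to avatar bone names, keeping (x, y, z, w) quaternions."""
            return {JOINT_MAP[j]: q for j, q in kinect_joints.items() if j in JOINT_MAP}

        def send_pose(sock, addr, kinect_joints):
            """Serialize one retargeted pose as JSON and send it as a UDP datagram."""
            sock.sendto(json.dumps(retarget(kinect_joints)).encode("utf-8"), addr)

        # One captured frame (identity rotations, for brevity).
        frame = {"SpineBase": [0, 0, 0, 1], "ShoulderLeft": [0, 0, 0, 1]}
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_pose(sock, ("127.0.0.1", 9000), frame)

    Streaming per-frame joint rotations over UDP keeps latency low, which matters more than occasional packet loss for live avatar puppeteering.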

    Using games to make the process of learning sign language enjoyable and interactive

    Conference held in Wellington, New Zealand, 8-10 December 2014. The work presented in this paper consists of the development of a game that makes the process of learning sign language enjoyable and interactive. In this game the player controls a character that interacts with various objects and non-player characters with the aim of collecting several gestures from the Portuguese Sign Language. This interaction is supported by data gloves and the Kinect. These gestures can then be represented by the character, which allows the user to visualize and learn or train the various existing gestures. To improve interactivity and to make the game more interesting and motivating, several checkpoints were placed along the game levels. These give players a chance to test the knowledge they have acquired so far by performing the signs using the Kinect. A High Scores system was also created, as well as a history, to ensure that the game is a continuous, motivating learning process.

    Developing a Workflow for Cross-platform 3D Apps using Game Engines

    Cross-platform development is not a new approach. However, considering that it is common for developers to release an application exclusively for a single platform, the use of cross-platform development is noticeably low. In this master's thesis, aspects of developing a cross-platform 3D application are examined and discussed. A comparison between different motion capture systems used for character animation is presented, along with a pipeline for creating a character to be used in a mobile application. This thesis also provides guidelines and recommendations for independent game developers.

    Real-time gaze estimation using a Kinect and a HD webcam

    In human-computer interaction, gaze orientation is an important and promising source of information about the attention and focus of users. Gaze detection can also be an extremely useful metric for analysing human mood and affect. Furthermore, gaze can be used as an input method for human-computer interaction. However, accurate real-time gaze estimation is still an open problem. In this paper, we propose a simple and novel model for estimating the real-time gaze direction of a user on a computer screen. This method utilises cheap capture devices: an HD webcam and a Microsoft Kinect. We consider the gaze motion of a user facing forwards to be composed of the local gaze motion shifted by eye motion and the global gaze motion driven by face motion. We validate our proposed gaze estimation model and provide an experimental evaluation of the reliability and precision of the method.
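
    The decomposition the authors describe, local gaze from eye motion combined with global gaze from face motion, suggests a simple ray model. The sketch below assumes a camera-space coordinate frame and a flat screen at z = 0, neither of which is specified in the abstract:

        import numpy as np

        def gaze_point_on_screen(head_R, eye_pos, eye_dir_local, screen_z=0.0):
            """Rotate the local (webcam-derived) eye-gaze direction by the global
            (Kinect-derived) head rotation and intersect the ray with the screen plane."""
            d = head_R @ np.asarray(eye_dir_local, dtype=float)  # gaze direction in camera space
            if abs(d[2]) < 1e-9:
                return None                       # ray parallel to the screen
            s = (screen_z - eye_pos[2]) / d[2]
            if s < 0:
                return None                       # looking away from the screen
            return (eye_pos + s * d)[:2]          # (x, y) hit point on the screen plane

        # Eyes 0.6 m in front of the screen, head upright, eyes rotated slightly left.
        eye_dir = np.array([-0.1, 0.0, -1.0])
        eye_dir /= np.linalg.norm(eye_dir)
        print(gaze_point_on_screen(np.eye(3), np.array([0.0, 0.0, 0.6]), eye_dir))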

    Game design and the gamification of content : assessing a project for learning sign language

    Paper presented at EDULEARN 2015, held in Barcelona, 6-8 July 2015. This paper discusses the concepts of game design and gamification of content, based on the development of a serious game aimed at making the process of learning sign language enjoyable and interactive. In this game the player controls a character that interacts with various objects and non-player characters, with the aim of collecting several gestures from the Portuguese Sign Language corpus. The learning model used pushes forward the concept of gamification as a learning process valued by students and teachers alike, and illustrates how it may be used as a personalized device for amplifying learning. Our goal is to provide a new methodology to involve students and the general public in learning specific subjects using a ludic, participatory and interactive approach supported by ICT-based tools. Thus, in this paper we argue that perhaps some education processes could be improved by adding the gaming factor through technologies that involve students in a more physical way (e.g. using the Kinect and sensor gloves), so that learning becomes more intense and memorable.

    Motion capture based on RGBD data from multiple sensors for avatar animation

    With recent advances in technology and the emergence of affordable RGB-D sensors for a wider range of users, markerless motion capture has become an active field of research in both computer vision and computer graphics. In this thesis, we designed a proof of concept (POC) for a new tool that performs motion capture using a variable number of commodity RGB-D sensors of different brands and technical specifications in constraint-free layout environments. The main goal of this work is to provide a tool with motion capture capabilities using a handful of RGB-D sensors, without imposing strong requirements in terms of lighting, background or extension of the motion capture area. Of course, the number of RGB-D sensors needed is inversely proportional to their resolution and directly proportional to the size of the area to track. Built on top of the OpenNI 2 library, this POC is compatible with most of the non-high-end RGB-D sensors currently available on the market. Because a single computer lacks the resources to support more than a couple of sensors working simultaneously, we use a setup composed of multiple computers. To keep data coherent and synchronized across sensors and computers, our tool makes use of a semi-automatic calibration method and a message-oriented network protocol. From the color and depth data given by a sensor, we can also obtain a 3D pointcloud representation of the environment. By combining pointclouds from multiple sensors, we can produce a complete, animated 3D pointcloud that can be visualized from any viewpoint. Given a 3D avatar model and its attached skeleton, we can use an iterative optimization method (e.g. Simplex) to find a fit between each pointcloud frame and a skeleton configuration, resulting in a 3D avatar animation when such skeleton configurations are used as key frames.
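
    The skeleton-fitting step, posed as an iterative optimization such as Simplex, can be illustrated with a toy two-bone planar example; the cost function, skeleton, and synthetic "pointcloud" below are stand-ins for the thesis's actual 3D pipeline:

        import numpy as np
        from scipy.optimize import minimize

        def forward_kinematics(angles):
            """Joint positions of a toy 2-bone planar skeleton with unit-length bones."""
            a1, a2 = angles
            j0 = np.array([0.0, 0.0])
            j1 = j0 + np.array([np.cos(a1), np.sin(a1)])
            j2 = j1 + np.array([np.cos(a1 + a2), np.sin(a1 + a2)])
            return np.stack([j0, j1, j2])

        def fit_error(angles, cloud):
            """Sum of distances from each cloud point to its nearest skeleton joint."""
            joints = forward_kinematics(angles)
            d = np.linalg.norm(cloud[:, None, :] - joints[None, :, :], axis=2)
            return d.min(axis=1).sum()

        # Synthetic pointcloud sampled around a ground-truth pose.
        true_pose = np.array([np.pi / 4, -np.pi / 6])
        rng = np.random.default_rng(0)
        cloud = forward_kinematics(true_pose) + 0.02 * rng.standard_normal((3, 2))

        # Nelder-Mead is the derivative-free Simplex method the thesis gives as an example.
        result = minimize(fit_error, x0=np.zeros(2), args=(cloud,), method="Nelder-Mead")
        print(result.x, "vs", true_pose)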

    MoveBox: Democratizing MoCap for the Microsoft Rocketbox Avatar Library

    This paper presents MoveBox, an open-source toolbox for animating motion-captured (MoCap) movements onto the Microsoft Rocketbox library of avatars. Motion capture is performed in real-time using a single depth sensor, such as the Azure Kinect or Windows Kinect V2, or extracted offline from existing RGB videos by leveraging deep-learning computer vision techniques. Our toolbox enables real-time animation of the user's avatar by converting the transformations between systems that have different joints and hierarchies. Additional features of the toolbox include recording, playback and looping of animations, basic audio lip sync, blinking, resizing of avatars, and finger and hand animations. Our main contributions are the creation of this open-source tool, its validation on different devices, and a discussion of MoveBox's capabilities by end users.
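
    The core conversion MoveBox performs, mapping transformations between skeletons with different joints and hierarchies, can be sketched as turning per-joint global rotations into local ones under the target rig's parent table. The three-joint hierarchy here is a hypothetical stand-in for the full Rocketbox rig:

        from scipy.spatial.transform import Rotation as R

        # Hypothetical child -> parent table for the target avatar hierarchy.
        PARENT = {"Hips": None, "Spine": "Hips", "LeftArm": "Spine"}

        def globals_to_locals(global_rots):
            """Convert per-joint global rotations (as a sensor skeleton delivers them)
            into local rotations under the target hierarchy: local = parent^-1 * global."""
            out = {}
            for joint, g in global_rots.items():
                parent = PARENT[joint]
                out[joint] = g if parent is None else global_rots[parent].inv() * g
            return out

        # One captured frame of global joint rotations.
        frame = {
            "Hips":    R.identity(),
            "Spine":   R.from_euler("z", 10, degrees=True),
            "LeftArm": R.from_euler("z", 40, degrees=True),
        }
        local = globals_to_locals(frame)
        print(local["LeftArm"].as_euler("xyz", degrees=True))  # ~[0, 0, 30]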

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.