    Framework for embedding physical systems into virtual experiences

    We present an immersive Virtual Reality (VR) experience built from a combination of technologies: a physical rig, a game-like experience, and a refined physics model with control. At its heart, the core technology introduces the concept of physics-based communication, which allows force-driven interaction to be shared between the player and game entities in the virtual world. Because the framework is generic and extendable, the application supports a myriad of interaction modes, constrained only by the limitations of the physical rig (see Figure 1). To showcase the technology, we demonstrate a locomoting robot placed in an immersive, game-like setting.
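
    The abstract does not specify an implementation, but the force-exchange idea can be sketched as a shared channel through which the physical rig and virtual entities trade forces each simulation tick. The minimal Python sketch below is illustrative only; all class and method names (Force, ForceChannel, tick) are hypothetical, not from the paper.

```python
# Hypothetical sketch of a bidirectional force channel between a physical
# rig and a virtual world; names and structure are assumptions.
from dataclasses import dataclass, field

@dataclass
class Force:
    point: str                              # named attachment point
    vector: tuple[float, float, float]      # 3D force vector

@dataclass
class ForceChannel:
    """Two force queues linking the physical rig and the virtual world."""
    to_rig: list[Force] = field(default_factory=list)    # virtual -> player
    to_world: list[Force] = field(default_factory=list)  # player -> virtual

    def push_from_world(self, f: Force) -> None:
        self.to_rig.append(f)

    def push_from_player(self, f: Force) -> None:
        self.to_world.append(f)

def tick(channel: ForceChannel, dt: float) -> None:
    # Drain both queues once per physics step; a real system would hand
    # these to the rig's actuators and the game's rigid-body solver.
    for f in channel.to_rig:
        print(f"rig actuator at {f.point}: apply {f.vector} for {dt}s")
    for f in channel.to_world:
        print(f"world solver at {f.point}: apply {f.vector} for {dt}s")
    channel.to_rig.clear()
    channel.to_world.clear()
```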

    A Sampling Approach to Generating Closely Interacting 3D Pose-pairs from 2D Annotations

    We introduce a data-driven method to generate a large number of plausible, closely interacting 3D human pose-pairs for a given motion category, e.g., wrestling or salsa dance. Since close interactions are difficult to acquire with 3D sensors, our approach exploits abundant existing video data covering many human activities. Instead of treating the data generation problem as one of reconstruction, either through 3D acquisition or direct 2D-to-3D lifting of video annotations, we present a solution based on Markov Chain Monte Carlo (MCMC) sampling. To sample efficiently over the space of close interactions, rather than over pose spaces, we develop a novel representation called interaction coordinates (IC), which encode both the poses and their interaction in an integrated manner. The plausibility of a 3D pose-pair is then defined in terms of its ICs and with respect to the annotated 2D pose-pairs from video. We show that our sampling-based approach efficiently synthesizes a large volume of plausible, closely interacting 3D pose-pairs that provide good coverage of the input 2D pose-pairs.
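
    The sampling loop itself is standard Metropolis-Hastings; the paper's contributions are the interaction-coordinate encoding and the plausibility score. Below is a minimal Python sketch with a stand-in plausibility function (an assumption) and a pose-pair reduced to a flat vector of interaction coordinates:

```python
# Minimal Metropolis-Hastings sketch of the sampling loop; `plausibility`
# is a placeholder, not the paper's score (which is defined w.r.t. the
# annotated 2D pose-pairs from video).
import numpy as np

def plausibility(ic: np.ndarray) -> float:
    # Stand-in score: an isotropic Gaussian over interaction coordinates.
    return float(np.exp(-0.5 * np.sum(ic ** 2)))

def sample_pose_pairs(ic0: np.ndarray, n_samples: int, step: float = 0.05,
                      seed: int = 0) -> list[np.ndarray]:
    rng = np.random.default_rng(seed)
    samples, current, p_cur = [], ic0.copy(), plausibility(ic0)
    while len(samples) < n_samples:
        # Gaussian random-walk proposal in interaction-coordinate space.
        proposal = current + rng.normal(scale=step, size=current.shape)
        p_new = plausibility(proposal)
        # Metropolis acceptance: always accept uphill, sometimes downhill.
        if rng.random() < min(1.0, p_new / max(p_cur, 1e-12)):
            current, p_cur = proposal, p_new
        samples.append(current.copy())
    return samples

pairs = sample_pose_pairs(np.zeros(30), n_samples=1000)
```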

    Markerless Reconstruction of Human Motion

    Doctoral dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2017. Advisor: Jehee Lee.

    Markerless human pose recognition using a single depth camera plays an important role in interactive graphics applications and user-interface design. Recent pose recognition algorithms have adopted machine learning techniques that exploit large collections of motion capture data, and their effectiveness is greatly influenced by the diversity and variability of the training data. Many applications use the human body as a controller on top of these pose recognition systems, and handheld props often make such control more immersive. Nevertheless, combined human pose and prop recognition is not yet sufficiently robust, and occluded body parts degrade the quality of pose estimation from a single depth camera because no observed data is available for them. In this thesis, we present techniques for manipulating human motion data to enable pose estimation from a single depth camera. First, we develop a method that resamples a collection of human motion data to improve pose variability and to achieve an arbitrary size and density in the space of human poses. This space is high-dimensional, so brute-force uniform sampling is intractable; we exploit dimensionality reduction and locally stratified sampling to generate either uniform or application-specifically biased distributions of poses. Trained on a remarkably small amount of data, our algorithm recognizes challenging poses such as sitting, kneeling, stretching, and yoga, and it can be steered to maximize performance on a specific domain of human poses. We demonstrate that it performs much better than the Kinect SDK on challenging acrobatic poses while performing comparably on easy upright standing poses. Second, we address environmental objects that people interact with. We propose a new prop recognition system that can be layered on an existing human pose estimation algorithm, enabling robust prop estimation together with human poses at the same time. This work is widely applicable to controller systems that must handle the human pose and additional items simultaneously. Finally, we enhance the pose estimation result itself. Not every body part can be estimated from a single depth image: some parts are occluded by others, and the estimator sometimes fails. To solve this problem, we construct an autoencoder neural network trained on a large corpus of natural pose data, which reconstructs missing human pose joints as new, corrected joints.
    It can be applied to many different human pose estimation systems to improve their performance.
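
    A minimal sketch of the autoencoder-based joint completion described in the final contribution: occluded joint coordinates are masked out, a network trained on a large pose corpus reconstructs the full pose, and only the missing entries are filled in from the reconstruction. The layer sizes, the skeleton size, and the PyTorch framing are assumptions, not the thesis's exact model.

```python
# Assumed architecture; the thesis does not specify layer sizes here.
import torch
import torch.nn as nn

N_JOINTS = 20                      # assumed skeleton size
D = N_JOINTS * 3                   # pose as a flat (x, y, z) vector

autoencoder = nn.Sequential(       # trained offline on a large pose corpus
    nn.Linear(D, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(), # bottleneck enforces a pose prior
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, D),
)

def complete_pose(pose: torch.Tensor, missing: torch.Tensor) -> torch.Tensor:
    """pose: (D,) estimated joints; missing: (D,) bool mask of occluded entries."""
    masked = pose.clone()
    masked[missing] = 0.0              # hide unreliable joints
    with torch.no_grad():
        recon = autoencoder(masked)    # reconstruct the full pose
    out = pose.clone()
    out[missing] = recon[missing]      # keep observed joints as-is
    return out
```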

    Direct Animation Interfaces: An Interaction Approach to Computer Animation

    Creativity tools for digital media have been largely democratised, offering a range from beginner to expert tools. Yet computer animation, the art of instilling life into believable characters and fantastic worlds, is still a highly sophisticated process restricted to the sphere of expert users. This is largely due to the methods employed: in keyframe animation, dynamics are specified indirectly through abstract descriptions, while performance animation suffers from inflexibility due to a high technological overhead. The countervailing trend in human-computer interaction towards interfaces that are more direct, intuitive, and natural to use has so far hardly touched the animation world: decades of interaction research have scarcely been linked to research and development of animation techniques. The hypothesis of this work is that an interaction approach to computer animation can inform the design and development of novel animation techniques. Three goals are formulated to demonstrate the validity of this thesis: computer animation methods and interfaces must be embedded in an interaction context; the insights this brings for designing next-generation animation tools must be examined and formalised; and the practical consequences for the development of motion creation and editing tools must be demonstrated with prototypes that are more direct, efficient, easy to learn, and flexible to use. The foundation of the procedure is a conceptual framework in the form of a comprehensive discussion of the state of the art, a design space of interfaces for time-based visual media, and a taxonomy of mappings between user and medium space-time. Based on this, an interaction-centred analysis of computer animation culminates in the concept of direct animation interfaces and guidelines for their design. These guidelines are tested in two point designs for direct input devices. The first, a surface-based performance animation tool, takes a system approach to design, implementation, and testing, addressing interaction design issues as well as challenges in extending current software architectures to support novel forms of animation control. The second, a performance timing technique, shows how concepts from video browsing can be applied to motion editing for more direct and efficient animation timing.
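
    The performance timing technique can be illustrated with a hedged sketch: the user performs the timing by scrubbing through the motion, and the recorded scrub trajectory becomes a time warp applied to the original keyframes. The function name and the linear-interpolation warp below are assumptions, not the thesis's exact method.

```python
# Hypothetical retiming by a recorded scrub gesture; assumes the scrub
# trajectory is monotone in media time.
import numpy as np

def retime(key_times: np.ndarray, scrub_wall: np.ndarray,
           scrub_media: np.ndarray) -> np.ndarray:
    """Map each original key time to the wall-clock moment at which the
    user scrubbed past it. scrub_wall/scrub_media are the recorded
    (wall-clock, media-time) samples of the scrub gesture."""
    # Invert the monotone scrub curve media(t_wall) by interpolation.
    return np.interp(key_times, scrub_media, scrub_wall)

# Example: keys at 0, 1, 2 s; the user lingered over the first half,
# so it plays back slower while the second half is compressed.
keys = np.array([0.0, 1.0, 2.0])
wall = np.array([0.0, 1.5, 2.0])    # wall-clock samples of the gesture
media = np.array([0.0, 1.0, 2.0])   # media time reached at each sample
print(retime(keys, wall, media))    # -> [0.  1.5 2. ]
```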