12 research outputs found

    Egocentric Mapping of Body Surface Constraints

    The relative location of human body parts often materializes the semantics of on-going actions, intentions and even emotions expressed, or performed, by a human being. However, traditional performance-animation methods fail to correctly and automatically map the semantics of performer postures involving self-body contacts onto characters with different sizes and proportions. Our method proposes an egocentric normalization of body-part relative distances that preserves the consistency of self-contacts for a large variety of human-like target characters. Egocentric coordinates are character-independent and encode the whole posture space, i.e., they ensure the continuity of the motion with and without self-contacts. We can transfer classes of complex postures involving multiple interacting limb segments by preserving their spatial order, without depending on temporal coherence. The mapping process exploits a low-cost constraint-relaxation technique relying on analytic inverse kinematics; thus, we can achieve online performance animation. We demonstrate our approach on a variety of characters and compare it with the state of the art in online retargeting through a user study. Overall, our method performs better than the state of the art, especially when the proportions of the animated character deviate from those of the performer.
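To illustrate the general idea (a minimal sketch under our own assumptions, not the paper's exact formulation), a hand position can be encoded as offsets to a set of body reference points, normalized by a per-character scale, so that decoding with a different scale preserves the relative posture; all names here are hypothetical:

```python
# Hypothetical sketch of egocentric normalization: encode a hand position
# as per-reference offsets normalized by a character-specific scale, so the
# encoding is character-independent.

def encode(hand, refs, scale):
    # normalized offset from each body reference point to the hand
    return [[(h - r) / scale for h, r in zip(hand, ref)] for ref in refs]

def decode(feats, refs, scale):
    # average the per-reference reconstructions on the target character
    n = len(refs)
    out = [0.0, 0.0, 0.0]
    for f, ref in zip(feats, refs):
        for i in range(3):
            out[i] += (ref[i] + f[i] * scale) / n
    return out
```

Decoding with a target character's own reference points and scale places the hand at the corresponding relative location on that character, which is the property the normalization is after.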

    One Step from the Locomotion to the Stepping Pattern

    The locomotion pattern is characterized by a translational displacement occurring mostly along the forward frontal body direction, whereas local repositioning with large re-orientations, i.e. stepping, may induce translations along both the frontal and the lateral body directions (holonomy). We consider here a stepping pattern with initial and final null speeds, within a radius of 40% of the body height and with re-orientations of up to 180°. We propose a robust step-detection method for this context and identify a consistent intra-subject behavior in terms of the choice of starting foot and the number of steps.
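A common baseline for this kind of step detection (an illustrative sketch, not the authors' method; the threshold value is arbitrary) segments steps as maximal runs of samples where the foot speed exceeds a threshold, which works precisely because the pattern starts and ends at null speed:

```python
# Toy step detector: a step is a maximal run of samples whose foot speed
# exceeds a threshold. Returns (start, end) sample indices per step.

def detect_steps(speeds, threshold=0.1):
    steps, start = [], None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i                      # step onset
        elif s <= threshold and start is not None:
            steps.append((start, i - 1))   # step offset
            start = None
    if start is not None:                  # step still open at end of signal
        steps.append((start, len(speeds) - 1))
    return steps
```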

    Characterizing Embodied Interaction in First and Third Person Perspective Viewpoints

    Third Person Perspective (3PP) viewpoints have the potential to expand how one perceives and acts in a virtual environment. They offer increased awareness of the posture and of the surroundings of the virtual body compared to First Person Perspective (1PP). From another standpoint, however, 3PP can be considered less effective for inducing a strong sense of embodiment in a virtual body. Following an experimental paradigm based on full-body motion capture and immersive interaction, this study investigates the effect of perspective and of visuomotor synchrony on the sense of embodiment. It provides evidence supporting a high sense of embodiment in both 1PP and 3PP during engaging motor tasks, as well as guidelines for choosing the optimal perspective depending on the location of targets.

    SPA: Sparse Photorealistic Animation using a single RGB-D camera

    Photorealistic animation is a desirable technique for computer games and movie production. We propose a new method to synthesize plausible videos of human actors with new motions using a single, inexpensive RGB-D camera. A small database is captured in an ordinary office environment; this capture happens only once and is reused for synthesizing different motions. We propose a markerless performance-capture method using sparse deformation to obtain the geometry and pose of the actor at each time instant in the database. We then synthesize an animation video of the actor performing a new, user-defined motion. An adaptive, model-guided texture-synthesis method based on weighted low-rank matrix completion is proposed to reduce sensitivity to noise and outliers, which enables us to easily create photorealistic animation videos with motions that differ from those in the database. Experimental results on a public dataset and on our captured dataset verify the effectiveness of the proposed method.
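To give a feel for low-rank matrix completion (a toy rank-1 sketch with binary weights, loosely in the spirit of the weighted formulation but not the paper's algorithm), one can alternately fit rank-1 factors and re-impute the missing entries:

```python
# Toy rank-1 matrix completion: observed entries keep their values, missing
# entries are iteratively replaced by the current rank-1 estimate u v^T.

def rank1_complete(M, mask, iters=1000):
    rows, cols = len(M), len(M[0])
    u, v = [1.0] * rows, [1.0] * cols
    X = [[M[i][j] if mask[i][j] else 0.0 for j in range(cols)]
         for i in range(rows)]
    for _ in range(iters):
        # alternating least-squares fit of the rank-1 factors to X
        su = sum(ui * ui for ui in u)
        v = [sum(u[i] * X[i][j] for i in range(rows)) / su for j in range(cols)]
        sv = sum(vj * vj for vj in v)
        u = [sum(v[j] * X[i][j] for j in range(cols)) / sv for i in range(rows)]
        # re-impute only the unobserved entries
        for i in range(rows):
            for j in range(cols):
                if not mask[i][j]:
                    X[i][j] = u[i] * v[j]
    return X
```

On a matrix whose observed entries are consistent with rank 1, the missing entry converges to the value that completes the rank-1 pattern.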

    Precise and Responsive Performance Animation for Embodied Immersive Interactions

    The recent advances in immersive display technologies offer a unique opportunity for providing intuitive interaction techniques that ease the user's involvement in the Virtual Environment. This trend underlines the great potential of Performance Animation, especially for applications with real-time interactions. Capturing the motion of the performer and mapping it onto an avatar are not new problems, but several research challenges remain to be addressed. Ensuring the highest fidelity of the reconstructed postures while maintaining the responsiveness of the interaction is a difficult problem, in particular when embodying an avatar with different body size and proportions. In line with these requirements, we introduce several posture tracking, reconstruction, representation and retargeting techniques. First, we introduce a semi-automated calibration technique to precisely track the performer's body. Besides computing a digital representation of the performer's skeletal structure, our technique registers the orientation of the body parts and identifies an approximation of the body surface. We then propose a novel parametrization of human limbs that addresses the ill-conditioned cases of analytical inverse kinematics algorithms. We also integrate it within a Jacobian-based linearized inverse kinematics framework to obtain faster convergence. We exploit a low-cost posture-reconstruction technique for full-body real-time control of an avatar with the same body size and proportions as the performer. We present its usability in two types of applications: Virtual Reality based rehabilitation and immersive motion analysis. Finally, we propose an egocentric normalization of body-part relative distances to preserve the consistency of self-contacts for a large variety of human-like target characters. Egocentric coordinates are character-independent and encode the whole posture space, i.e., they ensure the continuity of the motion with and without self-contacts. We can transfer classes of complex postures involving multiple interacting limb segments by preserving their relative spatial order, without depending on temporal coherence.

    Singularity Free Parametrization of Human Limbs

    In this paper we propose the Middle-Axis-Rotation (MAR) parametrization of human limbs, which addresses the ill-conditioned cases of analytical Inverse Kinematics (IK) algorithms. The MAR parametrization is singularity-free within the reach space of the human limbs. Unlike the swivel representation, it does not rely on the projection of an additional fixed vector. In addition, we express the joint limits of each joint of the limb in terms of the redundancy of the new decomposition. In the specific case of the upper limb, we analyse the contribution of the clavicle to producing biomechanically meaningful postures. We illustrate various real-time applications of this approach.
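For context on the analytic IK setting the paper operates in (a minimal two-bone sketch using the law of cosines, not the MAR parametrization itself), the elbow and shoulder angles of a limb can be computed in closed form from the target distance and the two segment lengths:

```python
# Minimal two-bone analytic IK: given the distance to the target and the
# two limb segment lengths, return the shoulder offset angle and the elbow
# interior angle (pi = fully extended).
import math

def two_bone_ik(target_dist, l1, l2):
    # clamp the target into the reachable annulus [|l1-l2|, l1+l2]
    d = max(abs(l1 - l2), min(l1 + l2, target_dist))
    # elbow interior angle from the law of cosines
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # shoulder offset so the chain end lands at distance d
    cos_shoulder = (l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d)
    shoulder = math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow
```

The remaining degree of freedom (the rotation of the elbow about the shoulder-to-target axis) is exactly where the classical swivel angle, and the singularity issues MAR addresses, come in.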

    A two-arm coordination model for phantom limb pain rehabilitation

    Following limb loss, patients usually continue having sensations in their missing limbs as if they were still present. A significant amount of such sensations are painful and are referred to as Phantom Limb Pain (PLP). Previous research has shown that providing the patient with visual feedback of a limb in place of the missing one in Virtual Reality (VR) can reduce PLP. In this paper we introduce a model that coordinates the arms, allowing the exercising of a much broader range of reach tasks and thereby alleviating PLP more efficiently. Our Two-Arm Coordination Model (TACM) synthesizes the missing limb's pose from the instantaneous variations of the intact opposite limb for a given reach task. Moreover, we propose a setup that uses a virtual mirror to enhance the patient's full-body awareness in the virtual space.
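A simplified version of the idea (an illustrative sketch, not the authors' exact model) updates the synthesized limb from the mirrored instantaneous variation of the intact limb, reflecting the lateral component across the sagittal plane:

```python
# Toy TACM-style update: the missing limb's position is advanced by the
# intact limb's frame-to-frame variation, mirrored across the sagittal
# plane (here assumed to be x = 0, so the x component flips sign).

def mirror_delta(prev_missing, intact_prev, intact_curr):
    delta = [c - p for c, p in zip(intact_curr, intact_prev)]
    delta[0] = -delta[0]  # reflect the lateral component
    return tuple(m + d for m, d in zip(prev_missing, delta))
```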

    Realistic modeling of spectator behavior for soccer videogames with CUDA

    Soccer has always been one of the most popular videogame genres. When designing a soccer game, designers tend to focus on the field and the gameplay due to limited computational resources, so the modeling of virtual spectators receives less attention. In this study we present a novel approach to modeling spectator behavior that treats each spectator as a unique individual. We also propose an independent software layer for sport-based games that obtains the game status from the game engine via a simple messaging protocol and computes the spectator behavior accordingly. The result is returned to the game engine to be used in animating and rendering the spectators. Additionally, we offer a customizable spectator knowledge base in well-structured XML to minimize coding effort while generating individualized behavior. The employed AI is based on fuzzy inference. To meet the additional computational demand of realistic spectator behavior, we use GPU parallel computing with CUDA.
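To sketch how fuzzy inference can drive such behavior (a toy example with made-up membership functions and rules, not the paper's knowledge base), triangular memberships can be combined by min (fuzzy AND) and defuzzified by a weighted average:

```python
# Toy fuzzy-inference sketch for a spectator "excitement" output.

def tri(x, a, b, c):
    # triangular membership function peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def excitement(goal_proximity, team_support):
    # each rule: (firing strength via min-AND, crisp output level)
    rules = [
        (min(tri(goal_proximity, 0.5, 1.0, 1.5), team_support), 1.0),      # cheer
        (min(tri(goal_proximity, 0.0, 0.5, 1.0), 1.0 - team_support), 0.2),  # idle
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because each spectator can carry its own membership parameters and rule weights, the same inference loop produces individualized behavior, and it parallelizes naturally (one thread per spectator in CUDA).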

    The Critical Role of Self-Contact for Embodiment in Virtual Reality

    With the broad range of motion capture devices available on the market, it is now commonplace to directly control the limb movement of an avatar during immersion in a virtual environment. Here, we study how the subjective experience of embodying a full-body-controlled avatar is influenced by motor alterations and self-contact mismatches. Self-contact is in particular a strong source of passive haptic feedback, and we expect it to bring a clear benefit in terms of embodiment. To evaluate this hypothesis, we experimentally manipulate self-contacts and the displacement of the virtual hand relative to the body. We introduce these body-posture transformations to reproduce the imperfect or incorrect mapping between real and virtual bodies, with the goal of quantifying the limits of acceptance of a distorted mapping on reported body ownership and agency. We first describe how we exploit egocentric coordinate representations to perform motion capture that ensures the real and virtual hands coincide whenever the real hand is in contact with the body. Then, we present a pilot study that quantifies our sensitivity to visuo-tactile mismatches. The results are used to design our main study with two factors: offset (for self-contact) and amplitude (for movement amplification). Our main result shows that subjects' embodiment remains strong even when an artificially amplified hand movement is performed, provided that correct self-contacts are ensured.
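The amplitude manipulation with self-contact preservation can be sketched as follows (a hypothetical simplification of the described mapping, with a single body reference point standing in for the egocentric representation):

```python
# Hypothetical sketch: the virtual hand amplifies the real hand's
# displacement relative to a body reference point, except during
# self-contact, where real and virtual hands are forced to coincide
# at the contact point on the body.

def virtual_hand(real_hand, body_point, amplitude, in_contact):
    if in_contact:
        return body_point  # preserve the self-contact exactly
    return tuple(b + amplitude * (h - b)
                 for h, b in zip(real_hand, body_point))
```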