
    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
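To make the idea concrete, here is a minimal, hedged sketch of a software-only fuzzy emotion estimate driven by in-game statistics. The statistics, membership functions, and rules below are illustrative assumptions, not the FLAME-based model the abstract describes.

```python
# Minimal sketch of a software-only, fuzzy-logic emotion estimate.
# Event names, membership functions, and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(deaths_per_min, hit_ratio):
    """Fuzzy rules mapping in-game statistics to a frustration level in [0, 1]."""
    dying_often   = tri(deaths_per_min, 1.0, 3.0, 6.0)
    missing_a_lot = tri(1.0 - hit_ratio, 0.3, 0.6, 1.0)
    doing_well    = tri(hit_ratio, 0.5, 0.8, 1.0)

    # Rule firing strengths: min acts as a fuzzy AND, max combines rules.
    frustrated = max(min(dying_often, missing_a_lot), dying_often * 0.5)
    calm       = doing_well
    # Normalise the competing rule activations into a single score in [0, 1].
    total = frustrated + calm
    return frustrated / total if total > 0 else 0.0

print(estimate_frustration(deaths_per_min=4.0, hit_ratio=0.35))
```

The point of the sketch is that everything the estimator consumes is already available to the game software, which is the property that makes physiological sensors unnecessary in the paper's approach.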

    Interactive Assembly and Animation of 3D Digital Garments

    We present a novel real-time tool for sewing together 2D patterns, enabling quick assembly of visually plausible, interactively animated garments for virtual characters. The process is assisted by ad-hoc visual hints and allows designers to import 2D patterns from any CAD tool, connect them with seams around a 3D character of any body type, and assess the overall quality during character animation. The cloth is numerically simulated, including robust modeling of the cloth's contact with itself and with the character's body. Overall, our tool allows for fast prototyping of virtual garments, providing immediate feedback on their behaviour and visual quality on an animated character, in effect speeding up the content production pipeline for visual effects applications involving clothed characters.
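As a rough illustration of how pattern pieces can be joined and simulated, the sketch below treats seams as stiff zero-rest-length springs between matched boundary vertices and uses a sphere as a stand-in for the character's body. The spring constants, explicit Euler integration, and projection-based collision handling are simplifying assumptions, not the paper's simulator.

```python
# Toy mass-spring sketch: "sew" two pattern edges with zero-rest-length seam
# springs and push vertices out of a spherical body proxy. Illustrative only.
import numpy as np

def step(x, v, springs, rest, seams, body_c, body_r,
         dt=1e-3, k=500.0, k_seam=800.0):
    f = np.zeros_like(x)
    f[:, 1] -= 9.81                          # gravity on unit-mass vertices
    for (i, j), r0 in zip(springs, rest):    # structural springs within each pattern
        d = x[j] - x[i]
        L = np.linalg.norm(d) + 1e-9
        fij = k * (L - r0) * d / L
        f[i] += fij
        f[j] -= fij
    for i, j in seams:                       # seams pull matched boundary vertices together
        f[i] += k_seam * (x[j] - x[i])
        f[j] += k_seam * (x[i] - x[j])
    v += dt * f
    x += dt * v
    d = x - body_c                           # project penetrating points out of the body sphere
    L = np.linalg.norm(d, axis=1)
    inside = L < body_r
    x[inside] = body_c + d[inside] / L[inside, None] * body_r
    v[inside] = 0.0
    return x, v
```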

    Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

    Despite recent progress in developing animatable full-body avatars, realistic modeling of clothing - one of the core aspects of human self-expression - remains an open challenge. State-of-the-art physical simulation methods can generate realistically behaving clothing geometry at interactive rates. Modeling photorealistic appearance, however, usually requires physically-based rendering which is too expensive for interactive applications. On the other hand, data-driven deep appearance models are capable of efficiently producing realistic appearance, but struggle at synthesizing geometry of highly dynamic clothing and handling challenging body-clothing configurations. To this end, we introduce pose-driven avatars with explicit modeling of clothing that exhibit both photorealistic appearance learned from real-world data and realistic clothing dynamics. The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry. Our core contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations. We conduct a thorough evaluation of our model and demonstrate diverse animation results on several subjects and different types of clothing. Unlike previous work on photorealistic full-body avatars, our approach can produce much richer dynamics and more realistic deformations even for many examples of loose clothing. We also demonstrate that our formulation naturally allows clothing to be used with avatars of different people while staying fully animatable, thus enabling, for the first time, photorealistic avatars with novel clothing.
    Comment: SIGGRAPH Asia 2022 (ACM ToG) camera ready. The supplementary video can be found at https://research.facebook.com/publications/dressing-avatars-deep-photorealistic-appearance-for-physically-simulated-clothing
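A schematic sketch of the core idea, an appearance network that operates on top of explicit, simulated geometry, is given below. The input features, layer sizes, and output heads are assumptions chosen for illustration; the paper's physically-inspired appearance network is considerably more involved.

```python
# Rough sketch of a pose- and view-conditioned clothing appearance network
# that sits on top of explicitly simulated geometry. Feature choices and
# layer sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ClothingAppearanceMLP(nn.Module):
    """Maps per-texel geometry features + view direction to RGB and a shadow factor."""
    def __init__(self, feat_dim=9, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rgb = nn.Linear(hidden, 3)      # view-dependent colour
        self.shadow = nn.Linear(hidden, 1)   # dynamic shadowing factor

    def forward(self, geom_feat, view_dir):
        h = self.trunk(torch.cat([geom_feat, view_dir], dim=-1))
        return torch.sigmoid(self.rgb(h)), torch.sigmoid(self.shadow(h))

# geom_feat could hold simulated positions, normals, and local stretch;
# at animation time these would come from the physics simulator rather than tracking.
net = ClothingAppearanceMLP()
rgb, shadow = net(torch.randn(1024, 9), torch.randn(1024, 3))
```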

    Human Factors Simulation Research at the University of Pennsylvania

    Jack is a Silicon Graphics Iris 4D workstation-based system for the definition, manipulation, animation, and human factors performance analysis of simulated human figures. Built on a powerful representation for articulated figures, Jack offers the interactive user a simple, intuitive, and yet extremely capable interface into any 3-D articulated world. Jack incorporates sophisticated systems for anthropometric human figure generation, multiple limb positioning under constraints, view assessment, and strength-model-based performance simulation of human figures. Geometric workplace models may be easily imported into Jack. Various body geometries may be used, from simple polyhedral volumes to contour-scanned real figures. High-quality graphics of environments and clothed figures are easily obtained. Descriptions of some work in progress are also included.
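The sketch below illustrates, in a deliberately simplified planar form, the kind of articulated-figure representation such a system builds on: a chain of segments whose joint angles are accumulated by forward kinematics. The segment lengths and 2D simplification are illustrative assumptions; Jack's actual representation handles full 3-D figures with positioning under constraints.

```python
# Tiny 2D forward-kinematics sketch of an articulated limb chain.
# Segment lengths and angles are illustrative values only.
import numpy as np

def rot(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def forward_kinematics(lengths, angles):
    """Accumulate joint rotations from the root outward; returns joint positions."""
    pos, R = np.zeros(2), np.eye(2)
    points = [pos.copy()]
    for length, theta in zip(lengths, angles):
        R = R @ rot(theta)                     # compose this joint's rotation
        pos = pos + R @ np.array([length, 0.0])
        points.append(pos.copy())
    return np.array(points)

# e.g. upper arm, forearm, and hand with shoulder, elbow, and wrist bends
print(forward_kinematics([0.30, 0.25, 0.08], [0.3, 0.8, 0.1]))
```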

    Populating 3D Scenes by Learning Human-Scene Interaction

    Humans live within a 3D space and constantly interact with it to perform tasks. Such interactions involve physical contact between surfaces that is semantically meaningful. Our goal is to learn how humans interact with scenes and leverage this to enable virtual characters to do the same. To that end, we introduce a novel Human-Scene Interaction (HSI) model that encodes proximal relationships, called POSA for "Pose with prOximitieS and contActs". The representation of interaction is body-centric, which enables it to generalize to new scenes. Specifically, POSA augments the SMPL-X parametric human body model such that, for every mesh vertex, it encodes (a) the contact probability with the scene surface and (b) the corresponding semantic scene label. We learn POSA with a VAE conditioned on the SMPL-X vertices, and train on the PROX dataset, which contains SMPL-X meshes of people interacting with 3D scenes, and the corresponding scene semantics from the PROX-E dataset. We demonstrate the value of POSA with two applications. First, we automatically place 3D scans of people in scenes. We use a SMPL-X model fit to the scan as a proxy and then find its most likely placement in 3D. POSA provides an effective representation to search for "affordances" in the scene that match the likely contact relationships for that pose. We perform a perceptual study that shows significant improvement over the state of the art on this task. Second, we show that POSA's learned representation of body-scene interaction supports monocular human pose estimation that is consistent with a 3D scene, improving on the state of the art. Our model and code are available for research purposes at https://posa.is.tue.mpg.de
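The following sketch shows one way a body-centric interaction encoding in the spirit of POSA could be structured: a conditional VAE that, given posed body vertices, produces a per-vertex contact probability and semantic-label logits. The vertex count, latent size, and flat MLP encoder/decoder are toy assumptions, not the paper's architecture.

```python
# Schematic conditional-VAE sketch of a per-vertex interaction encoding:
# (a) contact probability and (b) semantic scene label per body vertex,
# conditioned on posed vertex positions. Dimensions are toy assumptions.
import torch
import torch.nn as nn

N_VERTS, N_CLASSES, Z = 655, 8, 32   # toy sizes; SMPL-X has far more vertices

class InteractionCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        feat = N_VERTS * (1 + N_CLASSES)   # per-vertex contact + class logits
        cond = N_VERTS * 3                 # posed vertex positions as condition
        self.enc = nn.Sequential(nn.Linear(feat + cond, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * Z))
        self.dec = nn.Sequential(nn.Linear(Z + cond, 256), nn.ReLU(),
                                 nn.Linear(256, feat))

    def forward(self, interaction, vertices):
        c = vertices.flatten(1)
        mu, logvar = self.enc(torch.cat([interaction.flatten(1), c], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        out = self.dec(torch.cat([z, c], dim=1)).view(-1, N_VERTS, 1 + N_CLASSES)
        contact_prob = torch.sigmoid(out[..., :1])              # (a) contact probability
        sem_logits = out[..., 1:]                                # (b) semantic label logits
        return contact_prob, sem_logits, mu, logvar
```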