
    Volumetric intelligence: A framework for the creation of interactive volumetric captured characters

    Virtual simulation of human faces and facial movements has challenged media artists and computer scientists since Fred Parke's first realistic 3D renderings of a human face in 1972. Today, a range of software and techniques is available for modelling virtual characters and their facial behavior in immersive environments such as computer games or storyworlds. However, applying these techniques often requires large teams with multidisciplinary expertise, an extensive amount of manual labor, and financial resources that are not typically available to individual media artists. This thesis demonstrates how an individual artist may create humanlike virtual characters – specifically their facial behavior – in a relatively fast and automated manner. The method is based on volumetric capturing, or photogrammetry, of a set of facial expressions from a real person using a multi-camera setup, followed by open-source, accessible 3D reconstruction and retopology techniques and software. Furthermore, the study discusses how contemporary game engines and applications can be used to build settings that allow real-time interaction between the user and virtual characters. The thesis documents an innovative framework for the creation of a virtual character captured from a real person that can be presented and driven in real time, without the need for a specialized team, a high budget, or intensive manual labor. This workflow is suitable for research groups, independent teams, and individuals seeking to create immersive, real-time experiences and experiments using virtual humanlike characters.

    Controlling liquids using meshes

    We present an approach for artist-directed animation of liquids using multiple levels of control over the simulation, ranging from overall tracking of desired shapes to highly detailed secondary effects such as dripping streams, separating sheets of fluid, surface waves, and ripples. The first part of our technique is a volume-preserving morph that allows the animator to produce plausible fluid-like motion from a sparse set of control meshes. By rasterizing the resulting control meshes onto the simulation grid, the mesh velocities act as boundary conditions during the projection step of the fluid simulation. We can then blend this motion with uncontrolled fluid velocities to achieve a more relaxed control over the fluid that captures natural inertial effects. Our method can produce highly detailed liquid surfaces with control over sub-grid details by using a mesh-based surface tracker on top of a coarse grid-based fluid simulation. We can create ripples and waves on the fluid surface by attracting the surface mesh to the control mesh with spring-like forces, and also by running a wave simulation over the surface mesh. Our video results demonstrate how our control scheme can be used to create animated characters and shapes that are made of water.
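The blending of controlled and free fluid motion described above can be sketched as a per-cell weighted combination of the two velocity fields. This is a minimal illustration, not the paper's implementation; the function name, the scalar `alpha`, and the binary `control_mask` are all assumptions standing in for the rasterized control-mesh region.

```python
import numpy as np

def blend_velocities(v_fluid, v_mesh, control_mask, alpha=0.7):
    """Blend simulated fluid velocities with rasterized control-mesh
    velocities. `alpha` in [0, 1] is the control strength; cells outside
    the control region (control_mask == 0) keep the free fluid motion.
    All names here are illustrative, not the paper's API."""
    v_fluid = np.asarray(v_fluid, dtype=float)
    v_mesh = np.asarray(v_mesh, dtype=float)
    w = alpha * np.asarray(control_mask, dtype=float)  # per-cell weight
    return w[..., None] * v_mesh + (1.0 - w[..., None]) * v_fluid

# Toy row of three 2D velocity samples: control only the middle cell.
v_free = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
v_ctrl = np.array([[0.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
mask = np.array([0.0, 1.0, 0.0])
print(blend_velocities(v_free, v_ctrl, mask, alpha=0.5))
```

With `alpha=0.5` the middle cell averages the two fields while the outer cells keep their free motion, which is the "relaxed control" effect the abstract describes.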

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We further address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
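A common runtime LoD-selection criterion mentioned in surveys like this one is the character's projected screen-space size. The sketch below picks a discrete level from the pixel height of a bounding sphere; the three-level scheme, the pixel thresholds, and the function name are illustrative assumptions (a production renderer would tune thresholds and typically add hysteresis to avoid popping).

```python
import math

def select_lod(distance, radius, fov_y=math.radians(60), screen_h=1080,
               thresholds=(200.0, 60.0, 15.0)):
    """Pick a level of detail from a character's approximate on-screen
    size. `thresholds` are minimum pixel heights for each LoD; anything
    smaller falls through to the coarsest representation."""
    # Projected height in pixels of a bounding sphere of `radius` at `distance`.
    pixels = (2.0 * radius / distance) * (screen_h / (2.0 * math.tan(fov_y / 2.0)))
    for lod, min_px in enumerate(thresholds):
        if pixels >= min_px:
            return lod          # 0 = full mesh, 1/2 = simplified meshes
    return len(thresholds)      # smallest representation (e.g. an impostor)

print(select_lod(distance=2.0, radius=1.0))    # near character: LoD 0
print(select_lod(distance=200.0, radius=1.0))  # distant character: impostor
```

The same projected-size test generalizes to the point-based and image-based representations the survey discusses: each threshold simply switches to a cheaper representation.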

    The design and engineering of variable character morphology

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September 2001. Includes bibliographical references (p. 79-82). This thesis explores the technical challenges and the creative possibilities afforded by a computational system that allows behavioral control over the appearance of a character's morphology. Working within the framework of the Synthetic Characters behavior architecture, a system has been implemented that allows a character's internal state to drive changes in its morphology. The system allows real-time, multi-target blending between body geometries, skeletons, and animations. The results reflect qualitative changes in the character's appearance and state. Throughout the thesis, character sketches are used to demonstrate the potential of this integrated approach to behavior and character morphology. Scott Michael Eaton. S.M.

    Animation, Simulation, and Control of Soft Characters using Layered Representations and Simplified Physics-based Methods

    Realistic behavior of computer-generated characters is key to bringing virtual environments, computer games, and other interactive applications to life. The plausibility of a virtual scene is strongly influenced by the way objects move around and interact with each other. Traditionally, actions are limited to motion-capture-driven or pre-scripted motion of the characters. Physics enhances the sense of realism: physical simulation is required to make objects act as expected in real life. To make gaming and virtual environments truly immersive, it is crucial to simulate the response of characters to collisions and to produce secondary effects such as skin wrinkling and muscle bulging. Unfortunately, existing techniques cannot generally achieve these effects in real time, do not address the coupled response of a character's skeleton and skin to collisions, nor do they support artistic control. In this dissertation, I present interactive algorithms that enable physical simulation of deformable characters with high surface detail and support for intuitive deformation control. I propose a novel unified framework for real-time modeling of soft objects with skeletal deformations and surface deformation due to contact, and their interplay, for object surfaces with up to tens of thousands of degrees of freedom. I make use of layered models to reduce computational complexity. I introduce dynamic deformation textures, which map three-dimensional deformations in the deformable skin layer to a two-dimensional domain for extremely efficient parallel computation of the dynamic elasticity equations and optimized hierarchical collision detection. I also enhance layered models with responsive contact handling, to support the interplay between skeletal motion and surface contact and the resulting two-way coupling effects. Finally, I present dynamic morph targets, which enable intuitive control of dynamic skin deformations at run time by simply sculpting pose-specific surface shapes. The resulting framework enables real-time and directable simulation of soft articulated characters with frictional contact response, capturing the interplay between skeletal dynamics and complex, non-linear skin deformations.
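The idea of pose-specific sculpted shapes blended in at run time can be illustrated with a toy pose-driven correction: each sculpted target is weighted by how close the current joint angle is to the pose it was sculpted for. This is a loose sketch of the general concept, not the dissertation's dynamic-morph-target formulation; the linear falloff, the 90-degree support, and all names are assumptions.

```python
import numpy as np

def apply_pose_corrections(base_verts, joint_angle, targets):
    """Blend pose-specific sculpted shapes into a base mesh. Each target
    is a (pose_angle_degrees, sculpted_verts) pair whose influence falls
    off linearly to zero at 90 degrees from the current joint angle.
    Illustrative only; real systems use richer pose-space weighting."""
    base = np.asarray(base_verts, dtype=float)
    result = base.copy()
    for pose_angle, sculpt in targets:
        w = max(0.0, 1.0 - abs(joint_angle - pose_angle) / 90.0)  # falloff
        result += w * (np.asarray(sculpt, dtype=float) - base)
    return result

# One vertex, one target sculpted at a 90-degree bend that lifts it in y.
base = [[0.0, 0.0, 0.0]]
bent = [(90.0, [[0.0, 1.0, 0.0]])]
print(apply_pose_corrections(base, 45.0, bent))  # halfway to the sculpt
```

At 45 degrees the correction applies at half strength, so the sculpted offset fades in smoothly as the joint approaches the sculpted pose.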

    Design Fiction Diegetic Prototyping: A Research Framework for Visualizing Service Innovations

    Purpose: This paper presents a design fiction diegetic prototyping methodology and research framework for investigating service innovations that reflect future uses of new and emerging technologies. Design/methodology/approach: Drawing on speculative fiction, we propose a methodology that positions service innovations within a six-stage research development framework. We begin by reviewing and critiquing designerly approaches that have traditionally been associated with service innovations and futures literature. In presenting our framework, we provide an example of its application to the Internet of Things (IoT), illustrating the central tenets proposed and key issues identified. Findings: The research framework advances a methodology for visualizing future experiential service innovations, considering how realism may be integrated into a designerly approach. Research limitations/implications: Design fiction diegetic prototyping enables researchers to express a range of 'what if' or 'what can it be' research questions within service innovation contexts. However, the process encompasses degrees of subjectivity and relies on knowledge, judgment, and projection. Practical implications: The paper presents an approach to devising future service scenarios incorporating new and emergent technologies in service contexts. The proposed framework may be used as part of a range of research designs, including qualitative, quantitative, and mixed-method investigations. Originality: Operationalizing an approach that generates and visualizes service futures from an experiential perspective contributes to the advancement of techniques that enable the exploration of new possibilities for service innovation research.

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to engage in deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (i.e., an avatar) capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are rigged onto a neck mechanism that can move like a human head. Since the animation is projected onto a mask, the robotic face is a highly flexible research tool that is mechanically simple and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and in perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive, empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and obtained results significantly better than, or comparable to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends heavily on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces, of which 440,000 are manually annotated with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict valence and arousal values. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems. We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, empathy, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot in which the user's affect is recognized by a human (Wizard-of-Oz). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot, or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.

    PHYSICS-BASED SHAPE MORPHING AND PACKING FOR LAYOUT DESIGN

    The packing problem, also named layout design, has found wide application in the mechanical engineering field. In most cases, the shapes of the objects do not change during the packing process. However, in some applications such as vehicle layout design, shape morphing may be required for specific components (such as water and fuel reservoirs). The challenge is to fit a component of sufficient size into the available space of a crowded environment (such as the vehicle under-hood) while optimizing the overall performance objectives of the vehicle and improving design efficiency. This work focuses on incorporating component shape design into the layout design process, i.e., finding the optimal locations and orientations of all the components within a specified volume, as well as the suitable shapes of selected ones. The first major research issue is how to efficiently and accurately morph the shapes of components while respecting the functional constraints. Morphing methods depend on the geometrical representation of the components. The traditional parametric representation may lend itself easily to modification, but it relies on the assumption that the final approximate shape of the object is known, and therefore the morphing freedom is very limited. To morph objects whose shape can change arbitrarily in layout design, a mesh-based morphing method built on a mass-spring physical model is developed. With this method, there is no need to explicitly specify the deformations, and the shape morphing freedom is not confined. The second research issue is how to incorporate component shape design into a layout design process. Handling the complete problem at once may be beyond our reach; therefore, decomposition and multilevel approaches are used. At the system level, a genetic algorithm (GA) is applied to find the positions and orientations of the objects, while at the sub-system or component level, morphing is accomplished for selected components. Although different packing applications may have different objectives and constraints, they all share some common issues. These include CAD model preprocessing for packing purposes, data format translation during the packing process when performance evaluation and morphing use different representation methods, and the efficiency of collision detection methods. These common issues are all brought together under the framework of a general methodology for layout design with shape morphing. Finally, practical examples of vehicle under-hood/underbody layout design with mass-spring-based shape morphing are demonstrated to illustrate the proposed approach before concluding and proposing continuing work.
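The mass-spring physical model underlying the morphing method can be sketched as a network of vertices connected by Hookean springs, advanced with a damped semi-implicit Euler step. This is a generic sketch of such a model under unit masses and per-step velocity damping, not the thesis's implementation; all names and constants are assumptions.

```python
import numpy as np

def mass_spring_step(pos, vel, rest_len, edges, k=50.0, damping=0.9, dt=0.01):
    """One semi-implicit Euler step of a mass-spring mesh (unit masses).
    `edges` lists (i, j) vertex pairs; each spring pushes its endpoints
    toward its rest length, so the mesh relaxes toward a target shape."""
    forces = np.zeros_like(pos)
    for (i, j), L0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        if L > 1e-9:
            f = k * (L - L0) * (d / L)   # Hooke's law along the edge
            forces[i] += f
            forces[j] -= f
    vel = damping * (vel + dt * forces)   # damp, then integrate
    return pos + dt * vel, vel

# Two points 2.0 apart on a spring with rest length 1.0 pull together.
pos = np.array([[0.0, 0.0], [2.0, 0.0]])
vel = np.zeros_like(pos)
for _ in range(200):
    pos, vel = mass_spring_step(pos, vel, [1.0], [(0, 1)])
print(np.linalg.norm(pos[1] - pos[0]))  # settles near the rest length 1.0
```

Morphing then amounts to setting the springs' rest lengths (or adding external forces) from a target configuration and letting the mesh relax, which is why no explicit deformation needs to be specified.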

    Creative tools for producing realistic 3D facial expressions and animation

    Creative exploration of realistic 3D facial animation is a popular but very challenging task due to the high-level knowledge and skills required. This forms a barrier for creative individuals who have limited technical skills but wish to explore their creativity in this area. This paper proposes a new technique that facilitates users' creative exploration by hiding the technical complexities of producing facial expressions and animation. The proposed technique draws on research from psychology and anatomy, and employs Autodesk Maya as a use case by developing a creative tool that extends Maya's Blend Shape Editor. User testing revealed that novice users in the creative media, using the proposed tool, can produce rich and realistic facial expressions that portray new and interesting emotions. It reduced production time by 25% when compared to Maya and by 40% when compared to the equivalent 3DS Max tools.
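The blend-shape mechanism that tools like Maya's Blend Shape Editor expose can be summarized as adding weighted offsets of each sculpted target from the neutral face. The sketch below shows that combination on toy vertex data; it illustrates the general technique, not this paper's tool or Maya's internals.

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    """Classic blend-shape (morph-target) combination: start from the
    neutral face and add each target's offset scaled by its weight.
    Vertex data here is toy-sized and purely illustrative."""
    neutral = np.asarray(neutral, dtype=float)
    out = neutral.copy()
    for tgt, w in zip(targets, weights):
        out += w * (np.asarray(tgt, dtype=float) - neutral)
    return out

# One vertex; a "smile" target raises it in y, a "jaw open" target lowers it.
neutral = [[0.0, 0.0, 0.0]]
smile   = [[0.0, 1.0, 0.0]]
jaw     = [[0.0, -2.0, 0.0]]
print(blend_shapes(neutral, [smile, jaw], [0.5, 0.25]))  # offsets cancel here
```

Hiding the weight bookkeeping behind higher-level emotion controls is essentially what lets a tool shield novice users from this arithmetic.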