13 research outputs found

    Doctor of Philosophy in Computing

    Physics-based animation has proven to be a powerful tool for creating compelling animations for film and games. Most techniques in graphics are based on methods developed for predictive simulation in engineering; however, the goals of graphics applications differ dramatically from those of engineering applications. As a result, most physics-based animation tools are difficult for artists to work with, providing little direct control over simulation results. In this thesis, we describe tools for physics-based animation designed with artist needs and expertise in mind.

    Most materials can be modeled as elastoplastic: they recover from small deformations, but large deformations permanently alter their rest shape. Unfortunately, large plastic deformations, common in graphical applications, cause simulation instabilities if not addressed. Most elastoplastic simulation techniques in graphics rely on a finite-element approach in which objects are discretized into a tetrahedral mesh. With these approaches, maintaining simulation stability during large plastic flows requires remeshing, a complex and computationally expensive process. We introduce a new point-based approach that does not rely on an explicit mesh and avoids the expense of remeshing. Our approach produces comparable results with much lower implementation complexity. Points are a ubiquitous primitive for many effects, so our approach also integrates well with existing artist pipelines.

    Next, we introduce a new technique for animating stylized images, which we call Dynamic Sprites. Artists can use our tool to create digital assets that interact in a natural, but stylized, way in virtual environments. To support the kinds of nonphysical, exaggerated motions often desired by artists, our approach relies on a heavily modified deformable body simulator, equipped with a set of new intuitive controls and an example-based deformation model. Our approach allows artists to specify how the shape of the object should change as it moves and collides in interactive virtual environments.

    Finally, we introduce a new technique for animating destructive scenes. Our approach is built on the insight that the most important visual aspects of destruction are plastic deformation and fracture. As with Dynamic Sprites, we use an example-based model of deformation for intuitive artist control. Our simulator treats objects as rigid when computing dynamics but allows them to deform plastically and fracture between timesteps based on interactions with other objects. We demonstrate that our approach can efficiently animate the kinds of destructive scenes common in film and games.

    These animation techniques are designed to exploit artist expertise to ease the creation of complex animations. By using artist-friendly primitives and allowing artists to provide characteristic deformations as input, our techniques enable artists to create more compelling animations more easily.
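    As a rough illustration of the example-based deformation idea mentioned above, the sketch below blends artist-provided example shapes with a rest shape. The quad geometry, example offsets, and blend weights are made-up placeholders, not the thesis' data or formulation.

```python
import numpy as np

# Illustrative sketch of an example-based deformation model: the deformed
# shape is the rest shape plus a weighted blend of artist-provided example
# deformations. Shapes and weights are hypothetical placeholders.

rest = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # a quad
examples = [
    rest + np.array([[0.0, 0.0], [0.2, 0.0], [0.2, -0.1], [0.0, 0.0]]),  # "squash"
    rest + np.array([[0.0, 0.0], [0.0, 0.0], [-0.1, 0.2], [0.1, 0.2]]),  # "stretch"
]
offsets = np.stack([e - rest for e in examples])  # (num_examples, verts, 2)

def deform(weights):
    """Blend example deformations into the rest shape."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
    return rest + np.tensordot(w, offsets, axes=1)

print(deform([0.5, 0.0]))  # halfway toward the "squash" example
```

    In an actual pipeline the blend weights would presumably be driven by the simulator, for example by speed or collision impulses, rather than set by hand.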

    Long-Term Memory Motion-Compensated Prediction

    Long-term memory motion-compensated prediction extends the spatial displacement vector used in block-based hybrid video coding by a variable time delay, permitting the use of more frames than just the previously decoded one for motion-compensated prediction. The long-term memory covers several seconds of decoded frames at both the encoder and decoder. In most cases, the use of multiple frames for motion compensation provides significantly improved prediction gain. The variable time delay must be transmitted as side information, requiring additional bit rate that may become prohibitive when the long-term memory grows too large. Therefore, we control the bit rate of the motion information by employing rate-constrained motion estimation. Simulation results are obtained by integrating long-term memory prediction into an H.263 codec. Reconstruction PSNR improvements of up to 2 dB for the Foreman sequence and 1.5 dB for the Mother–Daughter sequence are demonstrated in comparison to the TMN-2.0 H.263 coder. These PSNR improvements correspond to bit-rate savings of up to 34% and 30%, respectively. Mathematical inequalities are used to speed up motion estimation while achieving the full prediction gain.
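    To make the rate-constrained, multi-frame search concrete, here is a minimal sketch of block matching over a long-term memory of reference frames using a Lagrangian cost of the form SAD + lambda * rate. The rate model, search range, and lambda are invented placeholders, not the paper's actual H.263/TMN settings.

```python
import numpy as np

def rate_bits(dx, dy, ref_idx):
    # Crude stand-in rate model: larger vectors and older reference frames
    # are assumed to cost more bits of side information.
    return abs(dx) + abs(dy) + 2 * ref_idx + 1

def best_match(block, refs, bx, by, search=4, lam=4.0):
    """Return (dx, dy, ref_idx, cost) minimizing SAD + lam * rate over all refs."""
    h, w = block.shape
    best = (0, 0, 0, np.inf)
    for ref_idx, ref in enumerate(refs):  # long-term memory of decoded frames
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y0, x0 = by + dy, bx + dx
                if y0 < 0 or x0 < 0 or y0 + h > ref.shape[0] or x0 + w > ref.shape[1]:
                    continue
                cand = ref[y0:y0 + h, x0:x0 + w]
                cost = np.abs(block - cand).sum() + lam * rate_bits(dx, dy, ref_idx)
                if cost < best[3]:
                    best = (dx, dy, ref_idx, cost)
    return best

# Tiny synthetic example: three reference frames, one 8x8 block.
rng = np.random.default_rng(0)
refs = [rng.integers(0, 255, (32, 32)).astype(float) for _ in range(3)]
block = refs[1][8:16, 8:16]
print(best_match(block, refs, bx=8, by=8))  # picks reference frame 1 at zero motion
```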

    Robust and fast global motion estimation for arbitrarily shaped video objects in MPEG-4

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering. Refereed conference paper (2004-2005), Version of Record, published.

    Projective Dynamics: Fusing Constraint Projections for Fast Simulation

    We present a new method for implicit time integration of physical systems. Our approach builds a bridge between nodal Finite Element methods and Position Based Dynamics, leading to a simple, efficient, robust, yet accurate solver that supports many different types of constraints. We propose specially designed energy potentials that can be solved efficiently using an alternating optimization approach. Inspired by continuum mechanics, we derive a set of continuum-based potentials that can be efficiently incorporated within our solver. We demonstrate the generality and robustness of our approach in many different applications ranging from the simulation of solids, cloth, and shells, to example-based simulation. Comparisons to Newton-based and Position Based Dynamics solvers highlight the benefits of our formulation.
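    The alternating local/global structure described above can be sketched for a simple mass-spring system as follows. The constraint projection, matrix assembly, and parameters are a generic illustration of this class of solver, not the paper's exact potentials or implementation.

```python
import numpy as np

# Minimal local/global (projective-dynamics-style) step for a mass-spring chain.
n, dim = 4, 2
h, k, m = 1.0 / 60.0, 100.0, 1.0          # time step, stiffness, particle mass
g = np.array([0.0, -9.81])

x = np.stack([np.arange(n, dtype=float), np.zeros(n)], axis=1)  # rest positions
v = np.zeros_like(x)
springs = [(i, i + 1) for i in range(n - 1)]
rest = [np.linalg.norm(x[a] - x[b]) for a, b in springs]

# Prefactor the constant global system: M/h^2 + weighted graph Laplacian.
M = m * np.eye(n)
L = np.zeros((n, n))
J = np.zeros((n, len(springs)))
for i, (a, b) in enumerate(springs):
    L[a, a] += k; L[b, b] += k
    L[a, b] -= k; L[b, a] -= k
    J[a, i] += k; J[b, i] -= k
A_inv = np.linalg.inv(M / h**2 + L)       # small dense example; use sparse Cholesky in practice

def step(x, v, iters=10):
    y = x + h * v + h**2 * g              # inertia + external-force prediction
    x_new = y.copy()
    for _ in range(iters):
        # Local step: project each spring onto its rest length independently.
        d = np.zeros((len(springs), dim))
        for i, (a, b) in enumerate(springs):
            e = x_new[a] - x_new[b]
            d[i] = rest[i] * e / (np.linalg.norm(e) + 1e-12)
        # Global step: solve the prefactored linear system for all positions.
        x_new = A_inv @ ((M / h**2) @ y + J @ d)
    return x_new, (x_new - x) / h

for _ in range(120):
    x, v = step(x, v)
print(x)   # the chain falls under gravity while springs keep their rest lengths
```

    The point of the pattern is that the system matrix stays constant across frames, so it can be prefactored once, while the cheap per-constraint projections run independently (and in parallel) each iteration.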

    Fast gradient methods based on global motion estimation for video compression

    Video Game Design in the MBA Curriculum: An Experiential Learning Approach for Teaching Design Thinking

    In the spirit of design thinking, we have developed a “hands-on” video game design workshop intended to be used for an MBA course on design thinking. This novel approach to teaching complex concepts and skills to business students has been received with enthusiasm, and it provides a unique and memorable experience for students to draw on as they encounter situations in which they will apply design thinking in the future. Additionally, student-produced games and student reflections on the workshops provide initial evidence of the value of teaching design thinking through this type of experiential method. In this article we review key design thinking concepts, report on our continuing efforts to incorporate these principles into video game design workshops in the MBA curriculum, and conclude with reflections on improvements for future iterations in hopes that these lesson plans will be shared and will add value to other institutions teaching design thinking. Workshop lesson plans and student projects can be found online at http://www.kolobkreations.com/GDWweb/GDWHome.html

    Efficient and flexible deformation representation for data-driven surface modeling

    Effectively characterizing the behavior of deformable objects has wide applicability but remains challenging. We present a new rotation-invariant deformation representation and a novel reconstruction algorithm that accurately reconstructs positions and local rotations simultaneously. Meshes can be reconstructed from our representation very efficiently via matrix pre-decomposition, while hard or soft constraints can be flexibly specified using only the positions of handles. Our approach is thus particularly suitable for constrained deformations guided by examples, providing significant benefits over state-of-the-art methods. Building on this, we further propose novel data-driven approaches to mesh deformation and non-rigid registration of deformable objects. Both problems are formulated consistently as finding an optimized model in the shape space that satisfies boundary constraints, either specified by the user or derived from the scan. By effectively exploiting the knowledge in the shape space, our method produces realistic deformation results in real time and produces high-quality registrations from a template model to a single noisy scan captured with a low-quality depth camera, outperforming state-of-the-art methods.
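    As a toy illustration of deforming in a shape space under handle constraints, the sketch below uses a plain linear basis built from example offsets and soft handle constraints; the paper's rotation-invariant representation and reconstruction algorithm are more sophisticated, and all names and sizes here are hypothetical.

```python
import numpy as np

# Illustrative only: a linear shape space stands in for the rotation-invariant
# representation; handle indices, targets, and regularization are made up.
np.random.seed(0)
n_verts, n_examples = 50, 6
V0 = np.random.rand(n_verts, 3)                                   # template vertices
examples = [V0 + 0.1 * np.random.randn(n_verts, 3) for _ in range(n_examples)]
B = np.stack([(E - V0).reshape(-1) for E in examples], axis=1)    # shape basis (3n x k)

handles = np.array([0, 10, 20])                                   # constrained vertices
targets = V0[handles] + np.array([0.2, 0.0, 0.1])                 # desired handle positions

# Selection matrix that picks the handle coordinates out of the flattened mesh.
rows = np.repeat(handles * 3, 3) + np.tile(np.arange(3), len(handles))
S = np.zeros((len(rows), 3 * n_verts))
S[np.arange(len(rows)), rows] = 1.0

# Solve for shape-space weights w minimizing
#   || S (v0 + B w) - t ||^2 + lam ||w||^2   (soft handle constraints + regularizer)
lam = 1e-3
SB = S @ B
rhs = SB.T @ (targets.reshape(-1) - S @ V0.reshape(-1))
w = np.linalg.solve(SB.T @ SB + lam * np.eye(n_examples), rhs)

V_deformed = (V0.reshape(-1) + B @ w).reshape(n_verts, 3)
print(V_deformed[handles])   # approximately matches the requested handle positions
```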

    El Secreto de la Pirámide: An Interactive Audiovisual Experience Centered on Intelligent Mechanics

    The goal of this undergraduate final project is to create a video game from scratch and take it all the way to a fully playable product whose difficulty adapts to the skill of any player. The game will be developed in a retro-style 2D environment in order to cover as many creative facets as possible.

    Realtime Face Tracking and Animation

    Capturing and processing human geometry, appearance, and motion is at the core of computer graphics, computer vision, and human-computer interaction. The high complexity of human geometry and motion dynamics, and the high sensitivity of the human visual system to variations and subtleties in faces and bodies, make the 3D acquisition and reconstruction of humans in motion a challenging task. Digital humans are often created through a combination of 3D scanning, appearance acquisition, and motion capture, leading to stunning results in recent feature films. However, these methods typically require complex acquisition systems and substantial manual post-processing. As a result, creating and animating high-quality digital avatars entails long turn-around times and substantial production costs.

    Recent technological advances in RGB-D devices, such as the Microsoft Kinect, have brought new hope for realtime, portable, and affordable systems that capture facial expressions as well as hand and body motions. RGB-D devices typically capture an image and a depth map, which makes it possible to formulate the motion tracking problem as a 2D/3D non-rigid registration of a deformable model to the input data. We introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization, leading to unprecedented face tracking quality on a low-cost consumer-level device. The main drawback of this approach in the context of consumer applications is the need for offline, user-specific training: robust and efficient tracking is achieved by building an accurate 3D expression model of the user's face, scanned in a predefined set of facial expressions. We extended this approach to remove the need for user-specific training or calibration, or any other form of manual assistance, by building a user-specific dynamic 3D face model online.

    To complement the realtime face tracking and modeling algorithm, we developed a novel system for animation retargeting that learns a high-quality mapping between motion capture data and arbitrary target characters. We addressed one of the main challenges of existing example-based retargeting methods: the need for a large number of accurate training examples to define the correspondence between source and target expression spaces. We showed that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e. facial expressions in the source or target space without corresponding poses.

    Finally, we present a novel realtime physics-based animation technique for simulating a wide range of deformable materials such as fat, flesh, hair, or muscles. This approach can be used to produce more lifelike animations by enhancing animated avatars with secondary effects.

    We believe that the realtime face tracking and animation pipeline presented in this thesis has the potential to inspire future research in computer-generated animation. Several ideas presented in this thesis have already been used successfully in industry, and this work gave birth to the startup company faceshift AG.
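    A highly simplified sketch of the "single optimization" idea, fitting blendshape weights to observed geometry while a prior term pulls the solution toward predicted motion, is given below. The linear model, synthetic data, and weighting are stand-ins and not the thesis' actual tracker, which also includes texture registration and learned animation priors.

```python
import numpy as np

# Hypothetical per-frame blendshape fit: geometric data term + prior term,
# solved as one least-squares problem. Model and data are synthetic.
np.random.seed(1)
n_verts, n_shapes = 200, 20
neutral = np.random.rand(n_verts, 3)
B = 0.05 * np.random.randn(3 * n_verts, n_shapes)   # blendshape displacement basis

def track_frame(depth_targets, prior_mean, lam_prior=1.0):
    """Minimize ||(neutral + B w) - depth_targets||^2 + lam_prior * ||w - prior_mean||^2.

    The prior pulls the solution toward motion predicted from past frames,
    playing the role of a regularizing animation prior in the optimization.
    """
    b = depth_targets.reshape(-1) - neutral.reshape(-1)
    A = B.T @ B + lam_prior * np.eye(n_shapes)
    return np.linalg.solve(A, B.T @ b + lam_prior * prior_mean)

# One synthetic frame: observed geometry generated from ground-truth weights.
w_true = np.random.rand(n_shapes) * 0.5
observed = (neutral.reshape(-1) + B @ w_true).reshape(n_verts, 3)
w = track_frame(observed, prior_mean=np.zeros(n_shapes), lam_prior=0.1)
print(np.round(w - w_true, 2))   # small residual: the prior biases slightly toward zero
```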