
    A Positive-definite Cut-cell Method for Strong Two-way Coupling Between Fluids and Deformable Bodies

    © ACM, 2017. This is the author's version of the work, posted here by permission of ACM for your personal use; not for redistribution. The definitive version was published as: Zarifi, O., & Batty, C. (2017). A Positive-definite Cut-cell Method for Strong Two-way Coupling Between Fluids and Deformable Bodies. In Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation (pp. 7:1–7:11). New York, NY, USA: ACM. https://doi.org/10.1145/3099564.3099572

    We present a new approach to simulating two-way coupling between inviscid free-surface fluids and deformable bodies that exhibits several notable advantages over previous techniques. By fully incorporating the dynamics of the solid into the pressure projection, we simultaneously handle fluid incompressibility and solid elasticity and damping. Thanks to this strong coupling, our method does not suffer from instability, even in very taxing scenarios. Furthermore, a cut-cell discretization allows us to apply proper free-slip boundary conditions at the exact solid-fluid interface. Consequently, our method correctly simulates inviscid tangential flow, free of grid artefacts or artificial sticking. Lastly, we present an efficient algebraic transformation that converts the indefinite coupled pressure projection system into a positive-definite form. We demonstrate the efficacy of the proposed method on several interesting scenarios, including a light bath toy colliding with a collapsing column of water, liquid dropped onto a deformable platform, and a partially liquid-filled deformable elastic sphere bouncing.

    Funding: Natural Sciences and Engineering Research Council of Canada.
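
    The abstract does not spell out the algebraic transformation itself. As a hedged orientation only (the block names A, J, C, v, p are illustrative, not the paper's notation): strongly coupled pressure projection typically produces a symmetric indefinite saddle-point system, and the textbook route to a positive-definite solve is a Schur complement in the pressure unknowns.

        % Coupled velocity-pressure system (schematic): A is SPD (mass plus
        % implicit elasticity/damping terms), J couples velocities to
        % pressures, C is symmetric positive semi-definite.
        \begin{bmatrix} A & J^{\mathsf{T}} \\ J & -C \end{bmatrix}
        \begin{bmatrix} v \\ p \end{bmatrix}
        =
        \begin{bmatrix} f \\ g \end{bmatrix}
        % Eliminating v = A^{-1}(f - J^{\mathsf{T}} p) gives a symmetric
        % positive-definite system, amenable to conjugate gradients:
        \left( C + J A^{-1} J^{\mathsf{T}} \right) p = J A^{-1} f - g

    Whether the paper's transformation coincides with this Schur-complement form is not stated in the abstract; the point is only that such indefinite coupled systems can be recast so that standard SPD solvers apply.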

    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline with a Blinn-Phong lighting model, real-time leaf transparency, and post-processing lighting effects. The result is a system that balances high natural realism with visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation.

    Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
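
    As a toy illustration of the L-system idea mentioned above (the rule below is a classic branching grammar chosen for illustration, not the paper's specialised grammar), a bracketed L-system expands a seed string by parallel rule rewriting, and a renderer then interprets the result as turtle-graphics branch geometry:

        # Minimal bracketed L-system expansion (illustrative rules only).
        def expand(axiom: str, rules: dict[str, str], iterations: int) -> str:
            s = axiom
            for _ in range(iterations):
                # Rewrite every symbol in parallel; symbols without a rule are kept.
                s = "".join(rules.get(c, c) for c in s)
            return s

        # F = step forward, [ / ] = push/pop turtle state, + / - = rotate.
        # A renderer would walk the expanded string to emit branch geometry.
        rules = {"F": "F[+F]F[-F]F"}
        print(expand("F", rules, 2))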

    Physically-Based Droplet Interaction

    In this paper we present a physically-based model for simulating realistic interactions between liquid droplets in an efficient manner. Our particle-based system recreates the coalescence, separation and fragmentation interactions that occur between colliding liquid droplets and allows systems of droplets to be meaningfully represented by an equivalent number of simulated particles. By considering the interactions specific to liquid droplet phenomena directly, we display novel levels of detail that cannot be captured using other interaction models at a similar scale. Our work combines experimentally validated components, originating in engineering, with a collection of novel modifications to create a particle-based interaction model for use in the development of mid-to-large scale droplet-based liquid spray effects. We demonstrate this model, alongside a size-dependent drag force, as an extension to a commonly-used ballistic particle system and show how the introduction of these interactions improves the quality and variety of results possible in recreating liquid droplets and sprays, even using these otherwise simple systems.
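
    The size-dependent drag force is not specified in the abstract. A minimal sketch, assuming standard sphere drag with a Reynolds-number-dependent coefficient (the Schiller-Naumann correlation, an engineering staple and an assumption here, not necessarily the paper's choice):

        import math

        def drag_force(radius, rel_velocity, rho_air=1.2, mu_air=1.8e-5):
            """Quadratic sphere drag with a Reynolds-dependent coefficient.

            Uses the Schiller-Naumann correlation for Cd (a standard fit
            for Re < ~1000); the correlation is an assumption, not taken
            from the paper.
            """
            speed = math.sqrt(sum(v * v for v in rel_velocity))
            if speed == 0.0:
                return (0.0, 0.0, 0.0)
            re = rho_air * speed * 2.0 * radius / mu_air       # Reynolds number
            cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44
            area = math.pi * radius * radius                   # cross-section
            mag = 0.5 * rho_air * cd * area * speed            # |F| / |v_rel|
            return tuple(-mag * v for v in rel_velocity)       # opposes motion

    Because Cd grows sharply at low Reynolds number, small droplets decelerate much faster than large ones, which is what makes the drag size-dependent.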

    FDLS: A Deep Learning Approach to Production Quality, Controllable, and Retargetable Facial Performances

    Visual effects work commonly requires both the creation of realistic synthetic humans and the retargeting of actors' performances to humanoid characters such as aliens and monsters. Achieving the expressive performances demanded in entertainment requires manipulating complex models with hundreds of parameters. Full creative control requires the freedom to make edits at any stage of the production, which prohibits the use of a fully automatic "black box" solution with uninterpretable parameters. On the other hand, producing realistic animation with these sophisticated models is difficult and laborious. This paper describes FDLS (Facial Deep Learning Solver), Weta Digital's solution to these challenges. FDLS adopts a coarse-to-fine, human-in-the-loop strategy, allowing a solved performance to be verified and edited at several stages of the solving process. To train FDLS, we first transform the raw motion-captured data into robust graph features. Second, based on the observation that artists typically finalize the jaw-pass animation before proceeding to finer detail, we solve for the jaw motion first and predict fine expressions with region-based networks conditioned on the jaw position. Finally, artists can optionally invoke a non-linear finetuning process on top of the FDLS solution to follow the motion-captured virtual markers as closely as possible. FDLS supports editing where needed to improve the results of the deep learning solution, and it can handle small daily changes in the actor's face shape. FDLS permits reliable, production-quality performance solving with minimal training and little or no manual effort in many cases, while also allowing the solve to be guided and edited in unusual and difficult cases. The system has been under development for several years and has been used in major movies.

    Comment: DigiPro '22: The Digital Production Symposium
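
    As a schematic of the jaw-first, region-conditioned solve described above (layer sizes, region names, feature dimensions, and the PyTorch framing are all assumptions for illustration; the paper's actual networks and graph features are not given in the abstract):

        import torch
        import torch.nn as nn

        def mlp(inp, out, hidden=256):
            return nn.Sequential(nn.Linear(inp, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out))

        class CoarseToFineFaceSolver(nn.Module):
            """Illustrative sketch: solve the jaw first, then condition
            per-region expression networks on it. The two-stage split
            follows the abstract; everything else is assumed."""
            def __init__(self, feat_dim=128, jaw_dim=3,
                         regions=("brows", "eyes", "mouth"), region_params=40):
                super().__init__()
                self.jaw_net = mlp(feat_dim, jaw_dim)      # stage 1: jaw pass
                self.region_nets = nn.ModuleDict({         # stage 2: fine detail
                    r: mlp(feat_dim + jaw_dim, region_params) for r in regions
                })

            def forward(self, graph_features):
                jaw = self.jaw_net(graph_features)         # editable intermediate
                cond = torch.cat([graph_features, jaw], dim=-1)
                fine = {r: net(cond) for r, net in self.region_nets.items()}
                return jaw, fine

    Keeping the jaw solve as an explicit intermediate is what allows an artist to verify or edit it before the fine-expression pass, matching the human-in-the-loop workflow.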

    Rectangular Selection of Components in Large 3D Models on the Web

    We introduce a novel method for rectangular selection of components in large 3D models on the web. Our technique provides an easy-to-use solution developed for renderers with partial fragment shader support, such as embedded systems running WebGL. This method was implemented using the Unity 3D game engine within the 3D Repo open-source framework running on a web browser. A case study with industrial 3D models of varying complexity and object count shows that such a solution performs within reasonable rendering expectations even on underpowered devices without a dedicated graphics card.
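
    The abstract does not detail the selection test itself. One standard CPU-side approach (an assumption here, not necessarily the paper's method, though it fits a renderer with limited fragment-shader support) is to project each component's bounding-box corners to screen space and test them against the drag rectangle:

        def project(point, view_proj, width, height):
            """Project a 3D point (x, y, z) to pixel coordinates with a
            4x4 row-major view-projection matrix; None behind the camera."""
            clip = [sum(view_proj[r][c] * v
                        for c, v in enumerate((*point, 1.0)))
                    for r in range(4)]
            if clip[3] <= 0.0:
                return None
            ndc_x, ndc_y = clip[0] / clip[3], clip[1] / clip[3]
            return ((ndc_x * 0.5 + 0.5) * width,
                    (1.0 - (ndc_y * 0.5 + 0.5)) * height)

        def rect_select(components, view_proj, rect, width, height):
            """Return ids of components whose projected bounding-box
            corners all fall inside the rectangle (x0, y0, x1, y1)."""
            x0, y0, x1, y1 = rect
            selected = []
            for comp_id, corners in components.items():   # 8 AABB corners each
                pts = [project(c, view_proj, width, height) for c in corners]
                if all(p and x0 <= p[0] <= x1 and y0 <= p[1] <= y1 for p in pts):
                    selected.append(comp_id)
            return selected

    Per-corner containment is conservative (a box can project outside its corners' hull only for degenerate cases), and the test is pure CPU arithmetic, so it runs even where GPU picking is unavailable.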

    Towards Interactive Photorealistic Rendering


    Emotion Transfer for 3D Hand and Full Body Motion using StarGAN

    In this paper, we propose a new data-driven framework for 3D hand and full-body motion emotion transfer. Specifically, we formulate the motion synthesis task as an image-to-image translation problem: by representing a motion sequence as an image, our framework can transfer its emotion using StarGAN. To evaluate the proposed method's effectiveness, we first conducted a user study to validate the emotion perceived from the captured and synthesized hand motions. We further evaluate the synthesized hand and full-body motions qualitatively and quantitatively. Experimental results show that our synthesized motions are comparable to the captured motions, and to those created by an existing method, in terms of naturalness and visual quality.
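
    The exact image encoding is not given in the abstract. A common convention in GAN-based motion work (an assumption here, not the paper's documented layout) maps joints to image height, frames to width, and the x/y/z coordinates to the three colour channels:

        import numpy as np

        def motion_to_image(motion: np.ndarray) -> np.ndarray:
            """Pack a motion clip of shape (frames, joints, 3) into an
            HxWx3 'image' (joints x frames x xyz), normalised to [0, 1]
            per channel. The layout is an illustrative assumption."""
            img = np.transpose(motion, (1, 0, 2)).astype(np.float32)
            lo = img.min(axis=(0, 1), keepdims=True)
            hi = img.max(axis=(0, 1), keepdims=True)
            return (img - lo) / np.maximum(hi - lo, 1e-8)

        clip = np.random.randn(64, 21, 3)   # 64 frames, 21 hand joints
        image = motion_to_image(clip)       # (21, 64, 3), ready for StarGAN

    Once motion is in this form, an off-the-shelf image-to-image translator such as StarGAN can treat each emotion as a domain label; inverting the normalisation recovers joint trajectories.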

    Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures

    The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects, such as optimisations, adapting existing offline methods to real-time constraints, and adding effects that were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real time.

    Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation which trades off some quality for a 2–3× improvement in rendering time. The tracing of all the photons, especially when long paths are needed, has become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach leveraging ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.

    Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics, but not as efficient for direct lighting estimation. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast, but at the cost of introducing bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it in a two-level system, making it possible to update only the parts containing moving lights, and to do so more efficiently.

    We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
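
    The light-picking structure is described only at a high level. A minimal flat sketch of importance-based stochastic light selection (tree traversal omitted; the distance-based importance heuristic is an assumption, not the thesis's exact metric): pick one light per shading point with probability proportional to its estimated contribution, and divide by that probability to keep the estimator unbiased.

        import random

        def importance(light, point):
            """Assumed heuristic: emitted power over squared distance."""
            d2 = sum((l - p) ** 2 for l, p in zip(light["pos"], point))
            return light["power"] / max(d2, 1e-6)

        def sample_light(lights, point):
            """Pick one light with probability proportional to importance
            and return (light, pdf). Dividing the sampled light's
            contribution by pdf keeps the estimate unbiased, unlike
            fixed ad-hoc culling of distant lights."""
            weights = [importance(l, point) for l in lights]
            total = sum(weights)
            u = random.random() * total
            for light, w in zip(lights, weights):
                u -= w
                if u <= 0.0:
                    return light, w / total
            return lights[-1], weights[-1] / total

    A light BVH replaces the linear weight scan with a top-down traversal, choosing a child at each node with probability proportional to its cluster importance, which is what makes thousands of lights tractable per frame.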