
    On Prism-based Motion Blur and Locking-proof Tetrahedra

    Motion blur is an important visual effect in computer graphics for real-time, interactive, and offline applications. Current methods offer either slow and accurate solutions for offline ray tracing applications, or fast and inaccurate solutions for real-time applications. This thesis is a collection of three papers, two of which address the need for motion blur solutions that cater to applications that need to be both accurate and interactive, and a third that addresses the problem of locking in standard FEM simulations. In short, this thesis deals with the problem of representing continuous motion in a discrete setting. In Paper I, we implement a GPU-based fast analytical motion blur renderer. Using ray/triangular prism intersections to determine triangle visibility and shading, we achieve interactive frame rates. In Paper II, we show and address the limitations of using prisms as approximations of the triangle swept volume. A hybrid method of prism intersections and time-dependent edge equations is used to overcome the limitations of Paper I. In Paper III, we provide a solution that alleviates volumetric locking in standard Neo-Hookean FEM simulations without resorting to higher-order interpolation.
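
    As a rough illustration of the prism representation used in Papers I and II, the sketch below builds a prism from a triangle's screen-space positions at the start and end of a frame and computes a conservative bounding box of the swept area. It is a minimal example under the assumption of linear per-vertex motion in screen space; the structure and function names are illustrative and not taken from the thesis.

    ```cpp
    #include <algorithm>
    #include <array>

    struct Vec2 { float x, y; };

    // A triangle moving linearly during one frame, represented as a prism:
    // the triangle at t = 0, the triangle at t = 1, and the three bilinear
    // side patches implied by corresponding pairs of edges.
    struct Prism {
        std::array<Vec2, 3> p0;  // projected vertices at frame start (t = 0)
        std::array<Vec2, 3> p1;  // projected vertices at frame end   (t = 1)
    };

    struct AABB { Vec2 min, max; };

    // Conservative screen-space bounds of the swept triangle: with linear
    // per-vertex motion, the six end-point vertices bound the whole prism.
    AABB sweptBounds(const Prism& prism) {
        AABB box{{ 1e30f,  1e30f}, {-1e30f, -1e30f}};
        auto grow = [&](const Vec2& v) {
            box.min.x = std::min(box.min.x, v.x);
            box.min.y = std::min(box.min.y, v.y);
            box.max.x = std::max(box.max.x, v.x);
            box.max.y = std::max(box.max.y, v.y);
        };
        for (const Vec2& v : prism.p0) grow(v);
        for (const Vec2& v : prism.p1) grow(v);
        return box;
    }
    ```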

    Implementation of a Reconstruction Filter to Produce a Motion Blur Effect in the Urho3D Game Engine

    Motion blur is a camera effect that arises from the camera's exposure time. It is commonly simulated in film and video games to make images look more realistic, and in recent years it has become a standard feature of modern game engines. Because of various limitations, however, some open-source game engines still do not provide motion blur; Urho3D is one of them. This final project implements a reconstruction filter to produce a motion blur effect in Urho3D. The reconstruction filter is a gather algorithm that simulates plausible motion blur in real time. The results of this implementation show that the reconstruction filter requires parameter fine-tuning to produce results that suit a given scene and target hardware.
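
    The gather idea behind such a reconstruction filter can be sketched as follows: each output pixel collects color samples along a screen-space velocity and averages them. This is a simplified sketch only; the full reconstruction filter additionally uses per-tile maximum velocities and depth/velocity-based sample weights, and the buffer layout and function names here are assumptions rather than Urho3D or project code.

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Color { float r, g, b; };
    struct Vec2  { float x, y; };

    // Simplified gather blur: each pixel averages color samples taken along
    // its own screen-space velocity (in pixels per frame).
    void gatherMotionBlur(const std::vector<Color>& color,
                          const std::vector<Vec2>&  velocity,
                          std::vector<Color>&       out,
                          int width, int height, int numTaps)
    {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const int idx = y * width + x;
                const Vec2 v = velocity[idx];
                Color sum{0, 0, 0};
                for (int i = 0; i < numTaps; ++i) {
                    // Sample positions spread over [-0.5, 0.5] of the velocity.
                    const float t = (i + 0.5f) / numTaps - 0.5f;
                    const int sx = std::clamp(int(std::round(x + v.x * t)), 0, width  - 1);
                    const int sy = std::clamp(int(std::round(y + v.y * t)), 0, height - 1);
                    const Color& c = color[sy * width + sx];
                    sum.r += c.r; sum.g += c.g; sum.b += c.b;
                }
                out[idx] = { sum.r / numTaps, sum.g / numTaps, sum.b / numTaps };
            }
        }
    }
    ```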

    Fast Analytical Motion Blur with Transparency

    We introduce a practical parallel technique to achieve real-time motion blur for textured and semi-transparent triangles with high accuracy using modern commodity GPUs. In our approach, moving triangles are represented as prisms. Each prism is bounded by the initial and final position of the triangle during one animation frame and three bilinear patches on the sides. Each prism covers a number of pixels for a certain amount of time according to its trajectory on the screen. We efficiently find, store, and sort the list of prisms covering each pixel, including the amount of time the pixel is covered by each prism. This information, together with the color, texture, normal, and transparency of the pixel, is used to resolve its final color. We demonstrate the performance, scalability, and generality of our approach in a number of test scenarios, showing that it achieves a visual quality practically indistinguishable from the ground truth in a matter of just a few milliseconds, including rendering of textured and transparent objects. A supplementary video has been made available online.
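
    The per-pixel resolve step described above can be illustrated with a small sketch: each prism covering a pixel contributes its (possibly transparent) color weighted by the time interval during which it covers the pixel, and any uncovered time falls back to the background. This is a deliberately simplified version that ignores depth sorting of prisms overlapping in the same time interval; the structure and function names are assumptions, not the paper's implementation.

    ```cpp
    #include <algorithm>
    #include <vector>

    struct Color { float r, g, b; };

    // One prism that covers the pixel during [tEnter, tExit] within the frame
    // (time normalized to [0, 1]); alpha encodes the surface opacity.
    struct PrismSample {
        float tEnter, tExit;
        Color color;
        float alpha;
    };

    // Simplified resolve: weight each prism's color by the time it covers the
    // pixel and by its opacity, and fill the remaining time with the background.
    // Overlaps in time would require splitting into intervals and sorting the
    // active prisms by depth, which is omitted here.
    Color resolvePixel(const std::vector<PrismSample>& samples, Color background)
    {
        Color result{0, 0, 0};
        float covered = 0.0f;
        for (const PrismSample& s : samples) {
            const float w = (s.tExit - s.tEnter) * s.alpha;
            result.r += s.color.r * w;
            result.g += s.color.g * w;
            result.b += s.color.b * w;
            covered  += w;
        }
        const float rest = 1.0f - std::min(covered, 1.0f);
        result.r += background.r * rest;
        result.g += background.g * rest;
        result.b += background.b * rest;
        return result;
    }
    ```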

    Deep Shading: Convolutional Neural Networks for Screen-Space Shading

    In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals, or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals, or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper, we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images.
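
    To make the input/output structure concrete, the toy sketch below applies a single 3x3 convolution that maps a stack of per-pixel attribute channels (for example normals, depth, and albedo) to RGB. It only illustrates the attributes-to-appearance mapping under assumed buffer layouts and weight ordering; the actual Deep Shading network is a much deeper, multi-scale CNN whose weights are learned from example images.

    ```cpp
    #include <vector>

    // A screen-sized buffer with an arbitrary number of per-pixel attribute
    // channels (e.g. normal.xyz, depth, albedo.rgb stacked together).
    struct AttributeImage {
        int width = 0, height = 0, channels = 0;
        std::vector<float> data;                     // size = width * height * channels
        float at(int x, int y, int c) const {
            return data[(y * width + x) * channels + c];
        }
    };

    // Toy attributes-to-RGB mapping: one 3x3 convolution from `in.channels`
    // attribute channels to 3 output (RGB) channels. `weights` would come from
    // training and is laid out as [outChannel][inChannel][ky][kx].
    std::vector<float> convolveToRGB(const AttributeImage& in,
                                     const std::vector<float>& weights)
    {
        std::vector<float> rgb(in.width * in.height * 3, 0.0f);
        for (int y = 1; y < in.height - 1; ++y)
            for (int x = 1; x < in.width - 1; ++x)
                for (int o = 0; o < 3; ++o) {
                    float acc = 0.0f;
                    for (int c = 0; c < in.channels; ++c)
                        for (int ky = -1; ky <= 1; ++ky)
                            for (int kx = -1; kx <= 1; ++kx) {
                                const int w = ((o * in.channels + c) * 3 + (ky + 1)) * 3 + (kx + 1);
                                acc += weights[w] * in.at(x + kx, y + ky, c);
                            }
                    rgb[(y * in.width + x) * 3 + o] = acc;
                }
        return rgb;
    }
    ```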

    Using inverse kinematics for procedurally generated animation

    In this diploma thesis we use the FABRIK inverse kinematics algorithm in combination with the Bullet physics library and the C++ programming language to create a skeletal animation system able to generate dynamic, procedural animation poses in real time. We begin by describing the different types of animation found in computer graphics along with their pros and cons. We then focus on the implementation of the skeletal animation system, introduce inverse kinematics, and describe the FABRIK algorithm. With this in place, we proceed to the procedural generation of body poses with the help of the Bullet physics library. Finally, we discuss the visualization of the skeletal animation and, in addition to the problems encountered, describe possible improvements to the entire system.
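
    For reference, the core of FABRIK (Forward And Backward Reaching Inverse Kinematics) for a single joint chain can be sketched in a few lines: a backward pass drags the chain from the end effector onto the target, and a forward pass re-anchors it at the root, with each pass restoring the fixed bone lengths. This minimal version, with names chosen here for illustration, omits joint-angle constraints and the coupling to Bullet described in the thesis.

    ```cpp
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    // Minimal FABRIK solve for a single chain of joints with fixed bone lengths.
    // `joints` is modified in place so that the last joint reaches `target`
    // if it is reachable; bone i connects joints[i] and joints[i + 1].
    void fabrik(std::vector<Vec3>& joints, const std::vector<float>& boneLength,
                Vec3 target, int iterations = 10)
    {
        const Vec3 root = joints.front();
        const int  n    = static_cast<int>(joints.size());
        for (int it = 0; it < iterations; ++it) {
            // Backward pass: place the end effector at the target and walk
            // toward the root, keeping each bone at its rest length.
            joints[n - 1] = target;
            for (int i = n - 2; i >= 0; --i) {
                Vec3 dir  = joints[i] - joints[i + 1];
                joints[i] = joints[i + 1] + dir * (boneLength[i] / length(dir));
            }
            // Forward pass: re-anchor the root and walk toward the end effector.
            joints[0] = root;
            for (int i = 0; i < n - 1; ++i) {
                Vec3 dir      = joints[i + 1] - joints[i];
                joints[i + 1] = joints[i] + dir * (boneLength[i] / length(dir));
            }
        }
    }
    ```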

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive amounts of volume elements and reconstruct the scene hierarchy in real time.

    Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution.

    This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
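
    The kind of hierarchical volume representation referred to above can be sketched as a sparse tree in which each level stores a progressively coarser version of the data and only occupied regions allocate children; rendering then samples only as deep as the region of interest requires. The sketch below is a minimal octree-style illustration with assumed names, not the data structure used in the thesis.

    ```cpp
    #include <array>
    #include <memory>

    // A sparse hierarchy node: each level stores a coarser approximation of the
    // volume (here a single density value), and only occupied regions allocate
    // children, so empty space costs no memory below this node.
    struct VolumeNode {
        float density = 0.0f;                               // filtered value at this resolution
        std::array<std::unique_ptr<VolumeNode>, 8> child;   // empty octants stay null
    };

    // Sample the hierarchy along a path of octant indices, stopping early when
    // the requested level of detail is reached or the region is not refined.
    float sample(const VolumeNode& root, const int* octantPath, int levelOfDetail)
    {
        const VolumeNode* cur = &root;
        for (int level = 0; level < levelOfDetail; ++level) {
            const VolumeNode* next = cur->child[octantPath[level]].get();
            if (!next) break;            // sparse: fall back to the coarser value
            cur = next;
        }
        return cur->density;
    }
    ```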