
    Isosurface modelling of soft objects in computer graphics.

    There are many different modelling techniques used in computer graphics to describe a wide range of objects and phenomena. In this thesis, details of research into the isosurface modelling technique are presented. The isosurface technique is used in conjunction with more traditional modelling techniques to describe the objects needed in the different scenes of an animation. The isosurface modelling technique allows the description and animation of objects that would be extremely difficult or impossible to describe using other methods. The objects suitable for description using isosurface modelling are soft objects. Soft objects merge elegantly with each other, pull apart, bubble, ripple, and exhibit a variety of other effects. The representation was studied in three phases of a computer animation project: modelling of the objects, animation of the objects, and the production of the images. The research clarifies and presents many algorithms needed to implement the isosurface representation in an animation system. The creation of a hierarchical computer graphics animation system implementing the isosurface representation is described. The scalar fields defining the isosurfaces are represented using a scalar field description language, created as part of this research, which is automatically generated from the hierarchical description of the scene. This language has many techniques for combining and building the scalar field from a variety of components. Surface attributes of the objects are specified within the graphics system. Techniques are described which allow the handling of these attributes along with the scalar field calculation. Many animation techniques specific to the isosurface representation are presented. By the conclusion of the research, a graphics system had been created which elegantly handles the isosurface representation in a wide variety of animation situations. This thesis establishes that isosurface modelling of soft objects is a powerful and useful technique with wide application in the computer graphics community.
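
    The thesis defines its own scalar field description language, which is not reproduced here. As a minimal illustration of the underlying idea only (all names below are hypothetical), the following Python sketch sums smoothly decaying contributions from two point components and tests a point against an iso-value, showing how overlapping soft objects merge:

    # Minimal sketch of a soft-object scalar field, not the thesis's system:
    # each component contributes a smoothly decaying field, and the surface
    # is the set of points where the summed field equals an iso-value.
    def field_contribution(point, centre, radius):
        """Wyvill-style falloff: 1 at the centre, 0 at the radius."""
        d2 = sum((p - c) ** 2 for p, c in zip(point, centre))
        r2 = radius * radius
        if d2 >= r2:
            return 0.0
        t = d2 / r2
        return 1.0 - (4.0 / 9.0) * t ** 3 + (17.0 / 9.0) * t ** 2 - (22.0 / 9.0) * t

    def scalar_field(point, components):
        """Blended field: components merge where their contributions overlap."""
        return sum(field_contribution(point, centre, radius)
                   for centre, radius in components)

    # Two overlapping blobs; points where the field reaches ISO are inside.
    components = [((0.0, 0.0, 0.0), 1.5), ((1.0, 0.0, 0.0), 1.5)]
    ISO = 0.5
    print(scalar_field((0.5, 0.0, 0.0), components) >= ISO)  # True: the blobs merge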

    Real-time transition texture synthesis for terrains.

    Depicting the transitions where differing material textures meet on a terrain surface presents a unique set of challenges in the field of real-time rendering. Natural landscapes are inherently irregular and composed of complex interactions between many different material types of effectively endless detail and variation. Although consumer-grade graphics hardware is becoming increasingly powerful with each successive generation, terrain texturing remains a trade-off between realism and the computational resources available. Technological constraints aside, there is still the challenge of generating the texture resources to represent terrain surfaces, which can often span many hundreds or even thousands of square kilometres. Producing such textures by hand is often impractical when operating on a restricted budget of time and funding. This thesis presents two novel algorithms for generating texture transitions in real-time using automated processes. The first algorithm, Feature-Based Probability Blending (FBPB), automates the task of generating transitions between material textures containing salient features. As such features protrude through the terrain surface, FBPB ensures that the topography of these features is maintained at transitions in a realistic manner. The transitions themselves are generated using a probabilistic process that also dynamically adds wear and tear to introduce high-frequency detail and irregularity at the transition contour. The second algorithm, Dynamic Patch Transitions (DPT), extends FBPB by applying the probabilistic transition approach to material textures that contain no salient features. By breaking up texture space into a series of layered patches that are either rendered or discarded on a probabilistic basis, the contour of the transition is greatly increased in resolution and irregularity. When used in conjunction with high-frequency detail techniques, such as alpha masking, DPT is capable of producing endless, detailed, irregular transitions without the need for artistic input.
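
    As an illustrative sketch of the probabilistic idea behind FBPB and DPT (not the thesis's implementation; names and constants are invented), each texel near a transition commits to one material with a probability driven by the blend weight plus deterministic noise, which breaks the contour up irregularly instead of cross-fading:

    import random

    def pick_material(blend_weight, texel_id, noise_amplitude=0.2):
        """Return 'grass' or 'rock' probabilistically near the transition.

        blend_weight: 0.0 = fully material A, 1.0 = fully material B.
        """
        rng = random.Random(texel_id)  # deterministic per texel, stable between frames
        jitter = (rng.random() - 0.5) * 2.0 * noise_amplitude
        return "rock" if blend_weight + jitter > 0.5 else "grass"

    # Across the transition zone, the contour breaks up irregularly.
    for i, w in enumerate([0.1, 0.3, 0.45, 0.5, 0.55, 0.7, 0.9]):
        print(w, pick_material(w, texel_id=i))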

    EDEN: Multimodal Synthetic Dataset of Enclosed GarDEN Scenes

    Full text link
    Multimodal large-scale datasets for outdoor scenes are mostly designed for urban driving problems. The scenes are highly structured and semantically different from scenes found in nature-centered settings such as gardens or parks. To promote machine learning methods for nature-oriented applications, such as agriculture and gardening, we propose the multimodal synthetic dataset for Enclosed garDEN scenes (EDEN). The dataset features more than 300K images captured from more than 100 garden models. Each image is annotated with various low/high-level vision modalities, including semantic segmentation, depth, surface normals, intrinsic colors, and optical flow. Experimental results on state-of-the-art methods for semantic segmentation and monocular depth prediction, two important tasks in computer vision, show a positive impact of pre-training deep networks on our dataset for unstructured natural scenes. The dataset and related materials will be available at https://lhoangan.github.io/eden.
    Comment: Accepted for publication at WACV 2021.

    Visually accurate multi-field weather visualization

    Weather visualization is a difficult problem because it comprises volumetric multi-field data, and traditional surface-based approaches obscure details of the complex three-dimensional structure of cloud dynamics. Therefore, visually accurate volumetric multi-field visualization of storm-scale and cloud-scale data is needed to effectively and efficiently communicate vital information to weather forecasters, improving storm forecasting, atmospheric dynamics models, and weather spotter training. We have developed a new approach to multi-field visualization that uses field-specific, physically-based opacity, transmission, and lighting calculations per field for the accurate visualization of storm- and cloud-scale weather data. Our approach extends traditional transfer function approaches to multi-field data and to volumetric illumination and scattering.
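
    A minimal sketch of what extending transfer functions to multi-field data can look like (an assumed formulation, not the authors' code): each field maps its own sample value to a colour and opacity through its own transfer function, and the per-field contributions are composited at every ray sample:

    def composite_sample(samples, transfer_functions):
        """samples: {field_name: value}; transfer_functions: {field_name: fn}.

        Each fn maps a scalar value to (r, g, b, alpha). Returns the combined
        (r, g, b, alpha) for one ray sample.
        """
        transmittance = 1.0
        r = g = b = 0.0
        for name, value in samples.items():
            cr, cg, cb, a = transfer_functions[name](value)
            r += cr * a * transmittance  # accumulate each field's contribution
            g += cg * a * transmittance
            b += cb * a * transmittance
            transmittance *= (1.0 - a)   # remaining light after this field
        return r, g, b, 1.0 - transmittance

    # Hypothetical fields: cloud water content and rain mixing ratio.
    tfs = {
        "cloud": lambda v: (0.9, 0.9, 0.9, min(1.0, v * 2.0)),
        "rain":  lambda v: (0.3, 0.4, 0.8, min(1.0, v * 5.0)),
    }
    print(composite_sample({"cloud": 0.2, "rain": 0.05}, tfs))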

    Interactive translucent volume rendering and procedural modeling

    Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. Many common objects and natural phenomena exhibit visual quality that cannot be captured using simple lighting models or cannot be solved at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model which captures volumetric light attenuation effects to produce volumetric shadows and the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high frequency detail for real and synthetic volumetric data.
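
    The key attenuation idea can be sketched as follows (a CPU toy under assumptions, not the paper's interactive GPU implementation; the opacity function is hypothetical): light reaching a sample is attenuated by the opacity of the volume between that sample and the light, producing volumetric shadows and a translucent look:

    def light_attenuation(volume_opacity, start, light_step, steps=16):
        """March from `start` toward the light, accumulating transparency.

        volume_opacity(p) -> opacity in [0, 1] at point p (3-tuple).
        light_step: step vector toward the light source.
        """
        transparency = 1.0
        p = list(start)
        for _ in range(steps):
            p = [c + d for c, d in zip(p, light_step)]
            transparency *= 1.0 - volume_opacity(tuple(p))
        return transparency  # 1.0 = fully lit, 0.0 = fully shadowed

    # Hypothetical spherical puff of smoke centred at the origin.
    def puff(p):
        d2 = sum(c * c for c in p)
        return max(0.0, 0.3 * (1.0 - d2))

    print(light_attenuation(puff, start=(0.0, 0.0, 0.0), light_step=(0.0, 0.1, 0.0)))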

    Artist-Configurable Node-Based Approach to Generate Procedural Brush Stroke Textures for Digital Painting

    Digital painting is the field of software designed to provide artists with a virtual medium that emulates the experience and results of physical drawing. Several hardware and software components come together to form a whole workflow, ranging from the physical input devices, to the stroking process, to the texture content authorship. This thesis explores an artist-friendly approach to synthesizing the textures that give life to digital brush strokes. Most painting software provides a limited library of predefined brush textures. These aim to offer styles approximating physical media like paintbrushes, pencils, markers, and airbrushes. Often they are static bitmap textures that are stamped onto the canvas at repeating intervals, causing discernible repetition artifacts. When more variety is desired, artists often download commercially available brush packs that expand the library of styles. However, included and supplemental brush packs are not easily artist-customizable. In recent years, a separate field of digital art tooling has seen the popular growth of node-based procedural content generation: 3D models, shaders, and materials are commonly authored by artists using functions that can be linked together in a visual programming environment called a node graph. This work tests the feasibility of using a node graph to procedurally generate highly customizable brush textures. The system synthesizes textures that adapt to parameters like pen pressure and stretch along the full length of each brush stroke instead of stamping repetitively. The result is a more flexible and artist-friendly way to define, share, and tweak brush textures used in digital painting.
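
    A toy example of the node-graph idea (all node names are invented for illustration, not taken from the thesis): each node is a function of stroke parameters such as arc length and pen pressure, and the graph is evaluated along the stroke rather than stamping a fixed bitmap:

    import math

    def noise_node(s, pressure):
        """Cheap deterministic noise along the stroke (stand-in for a real noise node)."""
        return 0.5 + 0.5 * math.sin(12.9898 * s) * math.cos(78.233 * s)

    def pressure_node(s, pressure):
        """Passes pen pressure through as a value in [0, 1]."""
        return pressure

    def multiply_node(a, b):
        """Combines two upstream nodes by multiplying their outputs."""
        return lambda s, pressure: a(s, pressure) * b(s, pressure)

    # A two-node graph: stroke opacity = noise modulated by pen pressure.
    brush = multiply_node(noise_node, pressure_node)

    for s, p in [(0.0, 0.8), (0.25, 0.9), (0.5, 0.4)]:
        print(f"arc length {s:.2f}, pressure {p:.1f} -> opacity {brush(s, p):.3f}")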

    Web-Based Dynamic Paintings: Real-Time Interactive Artworks in Web Using a 2.5D Pipeline

    In this work, we present a 2.5D pipeline approach to creating dynamic paintings that can be re-rendered interactively in real-time on the Web. Using this 2.5D approach, any existing simple painting, such as a portrait, can be turned into an interactive dynamic web-based artwork. Our interactive system provides most global illumination effects, such as reflection, refraction, shadow, and subsurface scattering, by processing images. In our system, the scene is defined only by a set of images: (1) a shape image, (2) two diffuse images, (3) a background image, (4) a foreground image, and (5) a transparency image. The shape image is either a normal map or a height map. The two diffuse images are usually hand-painted and are interpolated using illumination information. The transparency image defines the transparent and reflective regions that can reflect the foreground image and refract the background image, both of which are also hand-drawn. This framework, which mainly uses hand-drawn images, provides qualitatively convincing painterly global illumination effects such as reflection and refraction. We also include parameters to provide additional artistic controls. For instance, using our piecewise linear Fresnel function, it is possible to control the ratio of reflection and refraction. This system is the result of a long line of research contributions; within it, the art-directed Fresnel function that provides physically plausible compositing of reflection and refraction with artistic control is completely new, as are the art-directed warping equations that provide qualitatively convincing refraction and reflection effects with linearized artistic control. You can try our web-based system for interactive dynamic real-time paintings at http://mock3d.tamu.edu/.
    Comment: 22 pages.
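
    As a hedged sketch of what a piecewise linear Fresnel control might look like (illustrative parameters and names, not necessarily the authors' exact formulation), the artist picks the reflection ratio at facing and grazing angles, and the blend between reflected and refracted colours varies linearly in between:

    def piecewise_linear_fresnel(cos_theta, facing_reflectance, grazing_reflectance):
        """cos_theta: 1.0 = viewing the surface head-on, 0.0 = grazing."""
        t = max(0.0, min(1.0, cos_theta))
        return grazing_reflectance + (facing_reflectance - grazing_reflectance) * t

    def shade(cos_theta, reflected_rgb, refracted_rgb, f0=0.1, f90=0.9):
        """Blend the reflected foreground and refracted background colours."""
        f = piecewise_linear_fresnel(cos_theta, f0, f90)
        return tuple(f * r + (1.0 - f) * t
                     for r, t in zip(reflected_rgb, refracted_rgb))

    # Head-on: mostly refraction; grazing: mostly reflection.
    print(shade(1.0, (1.0, 1.0, 1.0), (0.1, 0.3, 0.6)))
    print(shade(0.0, (1.0, 1.0, 1.0), (0.1, 0.3, 0.6)))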

    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline, a Blinn-Phong lighting model with real-time leaf transparency, and post-processing lighting effects. The result is a system that achieves a balance between high natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation.
    Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
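
    For reference, a generic deterministic L-system expansion is sketched below (a minimal textbook example; the tool's specialised L-systems and their turtle interpretation are more involved):

    def expand(axiom, rules, iterations):
        """Rewrite every symbol in parallel according to `rules`."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # A classic branching rule set: F = step forward, [ ] = push/pop turtle
    # state, + / - = turn left/right.
    rules = {"F": "F[+F]F[-F]F"}
    print(expand("F", rules, 2))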

    Model for volume lighting and modeling

    Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. For many volumes, homogeneous regions pose problems for typical gradient-based surface shading. Many common objects and natural phenomena exhibit visual quality that cannot be captured using simple lighting models or cannot be solved at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model which captures volumetric light attenuation effects, incorporating volumetric shadows, an approximation to phase functions, an approximation to forward scattering, and chromatic attenuation that provides the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high frequency detail for both real and synthetic volumetric data.
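
    The chromatic attenuation mentioned here is commonly written as a per-channel Beer-Lambert transmittance along the path from a sample x toward the light l (a textbook formulation; the paper's exact model may differ):

    T_\lambda(\mathbf{x}) = \exp\!\left( -\int_{\mathbf{x}}^{\mathbf{l}} \tau_\lambda(s)\, \mathrm{d}s \right), \qquad \lambda \in \{r, g, b\}

    where \tau_\lambda is the per-channel extinction coefficient. When the channels attenuate at different rates, optically thick regions shift in hue, which is what gives materials such as wax or skin their translucent appearance.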

    BxDF material acquisition, representation, and rendering for VR and design

    Photorealistic and physically-based rendering of real-world environments with high-fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first study demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse-engineered cloth models in immersive VR experiences. The second study shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.
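
    For orientation, a generic analytic BRDF evaluation is sketched below (a Lambertian term plus an approximately normalized Blinn-Phong lobe; the acquired and data-driven materials covered in the course are far richer, so treat this as illustrative only):

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def brdf(light_dir, view_dir, normal, albedo=(0.8, 0.2, 0.2),
             specular=0.04, shininess=64.0):
        """Return the BRDF value per colour channel for the given directions."""
        h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
        n_dot_h = max(0.0, sum(n * c for n, c in zip(normal, h)))
        diffuse = tuple(a / math.pi for a in albedo)  # Lambertian term
        # Approximate energy normalization for the Blinn-Phong lobe.
        spec = specular * (shininess + 8.0) / (8.0 * math.pi) * n_dot_h ** shininess
        return tuple(d + spec for d in diffuse)

    print(brdf(normalize((0.0, 1.0, 1.0)), normalize((0.0, 1.0, -1.0)), (0.0, 1.0, 0.0)))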