    Wind projection basis for real-time animation of trees

    This scientific report presents a real-time method to animate complex scenes of thousands of trees under a user-controllable wind load. First, modal analysis is applied to extract the main modes of deformation from the mechanical model of a 3D tree. The novelty of our contribution is to precompute a new basis of the modal stress of the tree under wind load. At runtime, this basis allows us to replace the modal projection of the external forces with a direct mapping for any directional wind. We show that this approach can be implemented efficiently on graphics hardware. The resulting modal animation can be simulated at low computational cost even for large scenes containing thousands of trees.
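
    To make the idea concrete, here is a minimal numerical sketch. The names and shapes (Phi, B, the modal frequencies) are illustrative assumptions, not taken from the paper: the point is only that, with a precomputed wind-projection basis B, the per-frame modal load reduces to the cheap product B @ wind.

```python
import numpy as np

# Hypothetical precomputed quantities (names and shapes are illustrative):
#   Phi : (n_dof, n_modes) modal deformation basis from modal analysis
#   B   : (n_modes, 3)     wind-projection basis, so the modal load for a
#         wind vector w is simply B @ w instead of Phi.T @ f_ext(w)
rng = np.random.default_rng(0)
n_dof, n_modes = 300, 8
Phi = rng.standard_normal((n_dof, n_modes))
B = rng.standard_normal((n_modes, 3))
omega = np.linspace(2.0, 10.0, n_modes)  # modal angular frequencies
zeta = 0.05                              # modal damping ratio

def step_modes(q, q_dot, wind, dt):
    """Advance modal amplitudes one step with semi-implicit Euler;
    the wind enters only through the direct mapping B @ wind."""
    q_ddot = B @ wind - (omega ** 2) * q - 2.0 * zeta * omega * q_dot
    q_dot = q_dot + dt * q_ddot
    q = q + dt * q_dot
    return q, q_dot

q, q_dot = np.zeros(n_modes), np.zeros(n_modes)
wind = 5.0 * np.array([1.0, 0.0, 0.0])   # directional wind vector
for _ in range(100):
    q, q_dot = step_modes(q, q_dot, wind, dt=1.0 / 60.0)
u = Phi @ q                              # per-vertex displacement of the tree
```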

    Applications of computer-graphics animation for motion-perception research

    The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion-perception research could not be pursued without the use of computer-graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, it is assumed that the differences are irrelevant to the questions under study and that findings with computer-generated stimuli will generalize to natural events.

    Real-time rendering and simulation of trees and snow

    Tree models created by an industry-used package are exported and their structure extracted in order to procedurally regenerate the geometric mesh, addressing the limitations of the application's standard output. The extracted structure is used to generate a high-quality skeleton for the tree, individually representing each section of every branch to give the greatest achievable freedom of deformation and animation. Around the generated skeleton, a new geometric mesh is wrapped using a single, continuous surface, removing intersection-based render artefacts. Surface smoothing and enhanced detail are added to the model dynamically using the GPU's tessellation engine. A real-time snow-accumulation system is developed to generate snow cover on a dynamic, animated scene. Occlusion techniques are used to project snow-accumulating faces and map exposed areas to accumulation maps in the form of dynamic textures. Accumulation maps are fixed to the surfaces they are applied to, allowing moving objects to retain accumulated snow cover. Mesh generation is performed dynamically during the rendering pass, using surface offsetting and tessellation to add detail where required.
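
    The accumulation logic can be sketched in a few lines. This is a minimal CPU version under assumed inputs (per-face normals and a placeholder occlusion factor); the thesis performs the equivalent on the GPU via occlusion-projected accumulation maps.

```python
import numpy as np

# Minimal sketch of exposure-based snow accumulation. 'occlusion' is a
# placeholder visibility factor in [0, 1] standing in for the thesis's
# occlusion-projection step; all names here are illustrative.
UP = np.array([0.0, 0.0, 1.0])

def accumulate(snow, normals, occlusion, rate, dt):
    """Add snow on exposed, upward-facing faces; clamp cover to [0, 1]."""
    exposure = np.clip(normals @ UP, 0.0, 1.0) * (1.0 - occlusion)
    return np.clip(snow + rate * exposure * dt, 0.0, 1.0)

normals = np.array([[0.0, 0.0, 1.0],    # horizontal face, fully exposed
                    [0.0, 1.0, 0.0],    # vertical face, gathers nothing
                    [0.0, 0.7, 0.7]])   # slanted face, partial exposure
snow = np.zeros(3)
for _ in range(120):
    snow = accumulate(snow, normals, np.array([0.0, 0.0, 0.3]), 0.5, 1.0 / 60.0)
print(snow)   # the up-facing face saturates first; the vertical face stays bare
```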

    Modeling and generating moving trees from video

    We present a probabilistic approach for the automatic production of tree models with convincing 3D appearance and motion. The only input is a video of a moving tree, which provides an initial dynamic tree model used to generate new individual trees of the same type. Our approach combines global and local constraints to construct a dynamic 3D tree model from a 2D skeleton. The modeling takes into account factors such as the shape of branches, the overall shape of the tree, and physically plausible motion. Furthermore, we provide a generative model that creates multiple trees in 3D, given a single example model, so users no longer have to model each tree individually or specify rules to make new trees. Results for different species are presented and compared to both the reference input data and state-of-the-art alternatives.
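
    In that spirit, a toy generative step might fit simple distributions to branch statistics measured on the example tree and sample new instances from them. The angle/length parameterization and the numbers below are assumptions for illustration only, not the paper's actual probabilistic model.

```python
import numpy as np

# Toy stand-in for the abstract's generative model: fit Gaussians to branch
# statistics of one example tree, then sample new trees "of the same type".
rng = np.random.default_rng(7)
example_angles = np.deg2rad([25.0, 30.0, 35.0, 28.0, 32.0])  # branching angles
example_lengths = np.array([1.0, 0.8, 0.9, 0.7, 0.85])       # relative lengths

def sample_tree(n_branches):
    """Sample per-branch parameters around the example tree's statistics."""
    angles = rng.normal(example_angles.mean(), example_angles.std(), n_branches)
    lengths = rng.normal(example_lengths.mean(), example_lengths.std(), n_branches)
    return np.clip(angles, 0.0, np.pi / 2.0), np.clip(lengths, 0.1, None)

angles, lengths = sample_tree(4)   # parameters for a new tree instance
```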

    Generative Image Dynamics

    We present an approach to modeling an image-space prior on scene dynamics. Our prior is learned from a collection of motion trajectories extracted from real video sequences containing natural, oscillating motion such as trees, flowers, candles, and clothes blowing in the wind. Given a single image, our trained model uses a frequency-coordinated diffusion sampling process to predict a per-pixel long-term motion representation in the Fourier domain, which we call a neural stochastic motion texture. This representation can be converted into dense motion trajectories that span an entire video. Together with an image-based rendering module, these trajectories can be used for a number of downstream applications, such as turning still images into seamlessly looping dynamic videos or allowing users to interact realistically with objects in real pictures. Project website: https://generative-dynamics.github.io
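
    The Fourier-to-trajectory conversion the abstract mentions can be sketched directly. Shapes and names below are assumptions (in the paper, the complex spectra are predicted by the diffusion model; here they are random stand-in data): an inverse FFT over the frequency axis turns the per-pixel spectrum into a displacement for every pixel and frame.

```python
import numpy as np

# Sketch: turn a per-pixel Fourier-domain motion texture into dense
# time-domain trajectories. coeffs[k, y, x, c] holds complex spectra for the
# x/y displacement of each pixel at K low temporal frequencies; the 1/f
# falloff is purely synthetic stand-in data.
K, H, W, T = 16, 4, 4, 60
rng = np.random.default_rng(1)
falloff = 1.0 / (1.0 + np.arange(K))[:, None, None, None]
coeffs = falloff * (rng.standard_normal((K, H, W, 2))
                    + 1j * rng.standard_normal((K, H, W, 2)))

def spectrum_to_trajectories(coeffs, n_frames):
    """Zero-pad the spectrum and inverse-FFT over the frequency axis,
    yielding an (n_frames, H, W, 2) displacement per pixel and frame."""
    spectrum = np.zeros((n_frames, *coeffs.shape[1:]), dtype=complex)
    spectrum[: coeffs.shape[0]] = coeffs
    return np.fft.ifft(spectrum, axis=0).real

traj = spectrum_to_trajectories(coeffs, T)
print(traj.shape)   # (60, 4, 4, 2): motion trajectories spanning a whole video
```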

    Interactive Vegetation Rendering with Slicing and Blending

    Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification exploits the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha- or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the two nearest slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with perceptually similar quality to the original model.
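
    The view-dependent blend is straightforward to sketch. Assuming the abstract's setup of 6 slicings spaced 60 degrees apart (the function and names are illustrative, not from the paper):

```python
# Sketch: pick the two slicings nearest to the camera's azimuth and the
# alpha-blend weights between them, for N_SLICINGS slicings spanning 360
# degrees as in the abstract's 6-slicing configuration.
N_SLICINGS = 6

def slicing_blend(view_angle_deg):
    """Return ((index_a, weight_a), (index_b, weight_b)) for blending."""
    step = 360.0 / N_SLICINGS
    t = (view_angle_deg % 360.0) / step
    a = int(t) % N_SLICINGS
    b = (a + 1) % N_SLICINGS
    frac = t - int(t)
    return (a, 1.0 - frac), (b, frac)

# 75 degrees lies a quarter of the way from slicing 1 (60°) to slicing 2 (120°)
print(slicing_blend(75.0))   # ((1, 0.75), (2, 0.25))
```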

    Real-time Realistic Rendering of Nature Scenes with Dynamic Lighting

    Rendering of natural scenes has long interested the scientific community due to its numerous applications. The goal is to create images similar to what a viewer sees in real life. The main obstacle is complexity: natural scenes contain a huge number of small details that are hard to model, take a long time to render, and require more memory than current computers provide. This complexity comes mainly from geometry and lighting. The goal of our research is to overcome this complexity and achieve real-time rendering of nature scenes with visually convincing dynamic global illumination. Our work focuses on grass and trees, as they are commonly visible in everyday life. We handle the geometric and lighting complexity of grass to render millions of grass blades interactively with dynamic lighting. For trees, we address the lighting complexity by proposing a real-time lighting model that handles indirect illumination. Our work makes extensive use of the current generation of Graphics Processing Units (GPUs) to meet the real-time requirement and to leave the CPU free for other tasks.