61 research outputs found

    Volumetric cloud generation using a Chinese brush calligraphy style

    Clouds are an important feature of any real or simulated environment in which the sky is visible. Their amorphous, ever-changing and illuminated features make the sky vivid and beautiful. However, these features increase the complexity of both real-time rendering and modelling. It is difficult to design and build volumetric clouds in an easy and intuitive way, particularly if the interface is intended for artists rather than programmers. We propose a novel modelling system motivated by an ancient painting style, Chinese Landscape Painting, to address this problem. With the use of only one brush and one colour, an artist can paint a vivid and detailed landscape efficiently. In this research, we develop three emulations of a Chinese brush: a skeleton-based brush, a 2D texture footprint and a dynamic 3D footprint, all driven by the motion and pressure of a stylus pen. We propose a hybrid mapping to generate both the body and surface of volumetric clouds from the brush footprints. Our interface integrates these components, along with 3D canvas control and GPU-based volumetric rendering, into an interactive cloud modelling system. The system is able to create the various types of clouds occurring in nature. User tests indicate that our brush calligraphy approach is preferred to conventional volumetric cloud modelling and that it produces convincing 3D cloud formations in an intuitive and interactive fashion. While traditional modelling systems focus on generating the surfaces of 3D objects, our brush calligraphy technique constructs the interior structure. This forms the basis of a new modelling style for objects with amorphous shape.
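
    As a rough illustration of the brush-footprint idea, the following Python sketch deposits a pressure-driven spherical footprint into a voxel density grid along a stroke. The grid resolution, falloff curve and pressure mapping are illustrative assumptions, not the authors' exact method.

        import numpy as np

        def deposit_footprint(grid, center, pressure, base_radius=4.0, voxel_size=1.0):
            """Deposit one spherical density footprint of a 3D brush into a voxel grid.

            `center` is the stylus position in voxel coordinates; `pressure` (0..1)
            scales both the footprint radius and the deposited density (assumed mapping).
            """
            radius = base_radius * (0.5 + 0.5 * pressure)
            zs, ys, xs = np.indices(grid.shape)
            dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2
                           + (zs - center[2]) ** 2) * voxel_size
            falloff = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2  # smooth quadratic falloff
            grid += pressure * falloff
            return grid

        # Usage: paint a short straight stroke through a 64^3 density volume.
        volume = np.zeros((64, 64, 64), dtype=np.float32)
        for t in np.linspace(0.0, 1.0, 20):
            volume = deposit_footprint(volume, (16 + 32 * t, 32, 32), pressure=0.8)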

    Realistic simulation and animation of clouds using SkewT-LogP diagrams

    Clouds and weather are important topics in computer graphics, in particular in the simulation and animation of natural phenomena, because the simulation of natural phenomena, clouds included, finds applications in movies, games and flight simulators. However, existing techniques in computer graphics only offer simplified cloud representations, made possible by fake dynamics that mimic reality. The problem this work addresses is the realistic simulation of cloud formation and evolution suitable for virtual environments, i.e., clouds with physically based dynamics that evolve over time.
    Physically based cloud simulation techniques are common in meteorology, but numerical weather prediction systems are computationally expensive and provide more numerical accuracy than computer graphics requires. In computer graphics, we often need to direct and adjust physical features, or even bend reality, to meet artistic goals, which is a key factor distinguishing computer graphics from the physical sciences. However, purely physically based simulations evolve their solutions according to pre-set physics rules and are notoriously difficult to control. To face these challenges, we have developed a new lightweight physically based cloud simulation scheme that simulates the dynamic properties of cloud formation. The model avoids numerically solving the physically based equations typically used to simulate cloud formation by reading an explicit solution to those equations off SkewT/LogP thermodynamic diagrams. The system incorporates a weather model that uses real data to simulate the parameters related to cloud formation. It is especially suited to the simulation of cumulus clouds, which result from a convective process. This approach not only reduces the computational cost of previous physically based methods, but also provides a way to control the shape and dynamics of clouds by handling the cloud levels in the SkewT/LogP diagram. In this thesis we have also tackled a new challenge, the simulation of orographic clouds. To our knowledge, this is the first attempt to simulate this type of cloud formation. The novelty of this method lies in the fact that these clouds are non-convective, so different atmospheric levels have to be determined. Moreover, since orographic clouds form over mountains, we also determine the influence of the mountain on the cloud's motion. In summary, this thesis presents a set of algorithms for the modelling and simulation of cumulus and orographic clouds, taking advantage of SkewT/LogP diagrams for the first time in the field of computer graphics.
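
    For context, one of the cloud levels such a SkewT/LogP analysis yields for a convective parcel, the cloud base, can be approximated from surface data alone. The sketch below uses Espy's well-known rule of thumb for the lifting condensation level; it is a standard meteorological approximation, not necessarily the exact formulation used in the thesis.

        def lcl_height(temperature_c, dew_point_c):
            """Approximate the lifting condensation level (cloud base) in metres.

            Espy's rule of thumb: the LCL rises roughly 125 m for every degree of
            spread between the surface temperature and the dew point.
            """
            spread = max(temperature_c - dew_point_c, 0.0)
            return 125.0 * spread

        # Example: a 30 degC surface parcel with a 22 degC dew point condenses
        # at roughly 1000 m, giving the base height of a convective cumulus.
        print(lcl_height(30.0, 22.0))  # -> 1000.0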

    Real-time rendering and simulation of trees and snow

    Tree models created by a package used in industry are exported and their structure extracted in order to procedurally regenerate the geometric mesh, addressing the limitations of the application's standard output. The extracted structure is used to generate a high-quality skeleton for the tree, individually representing each section of every branch to give the greatest achievable freedom of deformation and animation. Around the generated skeleton, a new geometric mesh is wrapped using a single, continuous surface, removing intersection-based rendering artefacts. Surface smoothing and enhanced detail are added to the model dynamically using the GPU tessellation engine. A real-time snow accumulation system is developed to generate snow cover on a dynamic, animated scene. Occlusion techniques are used to project snow-accumulating faces and map exposed areas to accumulation maps in the form of dynamic textures. Accumulation maps are fixed to the surfaces they are applied to, allowing moving objects to retain accumulated snow cover. Mesh generation is performed dynamically during the rendering pass using surface offsetting and tessellation to add the required detail.
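
    A minimal sketch of how a per-face snow accumulation map might be advanced each frame, assuming face normals and a 0..1 occlusion (exposure) term are already available; the rate constant and clamping are illustrative choices rather than the implementation described above.

        import numpy as np

        def accumulate_snow(normals, occlusion, snow_map, rate=0.01, up=(0.0, 1.0, 0.0)):
            """Advance a per-face snow accumulation map by one time step.

            normals   : (N, 3) unit face normals
            occlusion : (N,) exposure of each face to falling snow, 0..1
            snow_map  : (N,) accumulated snow depth per face
            Faces gain snow in proportion to how much they face upward and how
            exposed they are; downward-facing faces gain nothing.
            """
            facing = np.clip(normals @ np.asarray(up), 0.0, 1.0)
            snow_map = snow_map + rate * facing * occlusion
            return np.clip(snow_map, 0.0, 1.0)

        # Usage: one face pointing up and fully exposed, one vertical and mostly occluded.
        normals = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
        snow = accumulate_snow(normals, np.array([1.0, 0.2]), np.zeros(2))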

    Implicit surfaces for interactive animated characters

    Thesis (S.M.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999. Implicit surface modeling in computer graphics is a powerful technique for representing smooth and organic shapes. Skeletal elements of an implicit surface blend to create a smooth, seamless skin which exhibits properties desirable for animation, such as squash and stretch. Because of their high rendering cost, implicit surfaces have not been used extensively in the real-time graphics domain. This thesis discusses the problems, and some solutions, in the application of implicit surfaces to interactive character animation. A design process for an implicit surface-based character is proposed, from the modeling and texturing stages to animation and rendering.
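
    The blending behaviour described above can be illustrated with a tiny field evaluator: each skeletal element contributes a smooth falloff, the contributions are summed, and the "skin" is an isosurface of the summed field. The kernel and iso value below are common choices assumed for illustration, not those of the thesis.

        import numpy as np

        def field_value(point, skeleton_points, radius=1.0):
            """Sum the contributions of skeletal point elements at `point`.

            Each element uses a smooth polynomial falloff that reaches zero at
            `radius`, so distant elements stop contributing and nearby ones blend.
            """
            p = np.asarray(point, float)
            total = 0.0
            for c in skeleton_points:
                r2 = np.sum((p - np.asarray(c, float)) ** 2) / radius ** 2
                if r2 < 1.0:
                    total += (1.0 - r2) ** 3
            return total

        def inside_surface(point, skeleton_points, iso=0.5):
            """The implicit skin is the level set where the field equals `iso`."""
            return field_value(point, skeleton_points) >= iso

        # Two nearby elements blend into one smooth shape between them.
        bones = [(0.0, 0.0, 0.0), (0.8, 0.0, 0.0)]
        print(inside_surface((0.4, 0.0, 0.0), bones))  # True: the midpoint lies inside the blend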

    Intuitive visualization of surface properties of biomolecules

    In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as the Electrostatic Potential (EP) and hydropathy, expressed as the Molecular Lipophilic Potential (MLP). The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contributions. Traditionally, these properties are calculated by physicochemical programs and visualized as a range of colours that varies with the tool used, requiring a legend to decode it. Using colour to encode both characteristics makes simultaneous visualization almost impossible, which is why EP and MLP are usually presented in two different images. In this thesis, we describe a novel and intuitive code for the simultaneous visualization of these properties. For this purpose we use Blender, a free, open-source, cross-platform 3D application used for modelling, animation, gaming and rendering. On top of Blender we developed BioBlender, a package dedicated to biological work: the elaboration of protein motion with simultaneous visualization of chemical and physical features. Blender's Game Engine, equipped with specific physico-chemical rules, is used to elaborate protein motion by interpolating between different conformations (NMR collections or different X-ray structures of the same protein). We obtain a physically plausible sequence of intermediate conformations which is the basis for the subsequent visual elaboration. A new visual code is introduced for MLP: a range of optical features that goes from dull, rough surfaces for the most hydrophilic areas to shiny, smooth surfaces for the most lipophilic ones. This representation permits a photorealistic rendering of the smooth spatial distribution of MLP values over the surface of the protein. EP is represented as animated line particles that flow along field lines, from positive to negative, in proportion to the total charge of the protein. Our system permits the simultaneous visualization of EP and MLP on molecules and, in the case of moving proteins, the continuous perception of these features, calculated for each intermediate conformation. Moreover, this representation helps convey the molecule's function by drawing the viewer's attention to the most active regions of the protein.
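
    One possible reading of the MLP visual code in plain Python: map each surface MLP value to material roughness and specular intensity so that hydrophilic areas render dull and rough while lipophilic areas render shiny and smooth. The MLP range used for normalization is an assumption for illustration.

        def mlp_to_material(mlp, mlp_min=-3.0, mlp_max=3.0):
            """Map a Molecular Lipophilic Potential value to surface material settings.

            Hydrophilic areas (low MLP) become dull and rough, lipophilic areas
            (high MLP) become shiny and smooth, mirroring the visual code above.
            """
            t = (mlp - mlp_min) / (mlp_max - mlp_min)
            t = min(max(t, 0.0), 1.0)
            return {
                "roughness": 1.0 - t,        # rough where hydrophilic
                "specular_intensity": t,     # shiny where lipophilic
            }

        # A strongly lipophilic patch renders almost mirror-smooth.
        print(mlp_to_material(2.5))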

    Modeling and real-time rendering of participating media using the GPU

    This thesis deals with the modeling, illumination and real-time rendering of participating media using graphics hardware. In the first part, we develop a method to render heterogeneous layers of fog for outdoor scenes. The medium is modeled horizontally in a 2D basis of Haar or linear/quadratic B-Spline functions, whose coefficients can be loaded from a fogmap, i.e. a grayscale density image. To give the fog its vertical thickness, it is provided with an extinction coefficient that parameterizes how quickly the density decreases with altitude along the Y axis. To prepare the rendering step, we apply a wavelet transform to the fog's density map and extract a coarse approximation and a series of layers of details (B-Spline wavelet bases). These details are ordered by frequency and, when summed back together, reconstitute the original density map. Each of these 2D function bases can be viewed as a grid of coefficients. During rendering on the GPU, each grid is traversed step by step, cell by cell, from the viewer's position to the nearest solid surface. Thanks to the separation of detail frequencies during precomputation, we can optimize the rendering by visualizing only the details that contribute most to the final image, aborting the traversal of each grid at a distance that depends on its frequency. We then present further work on the same type of fog: the use of the wavelet transform to represent the fog's density in a non-uniform grid, the automatic generation of density maps and their animation based on Julia fractals, and finally a first step towards single-scattering illumination of the fog, in which we can simulate shadows cast by the medium and by the geometry. In the second part, we deal with the modeling, single-scattering illumination and real-time rendering of fully 3D sampled media such as smoke (without physical simulation) on the GPU. Our method is inspired by Light Propagation Volumes, a technique originally designed only to propagate fully diffuse indirect lighting after a first bounce on the geometry. We adapt it to direct lighting and to the illumination of both surfaces and participating media. The medium is provided as a set of radial bases (blobs) and is then transformed into a set of voxels, together with the solid surfaces, so that both can be handled in a common representation. By analogy with LPV, we introduce an Occlusion Propagation Volume, which we use to compute the integral of the optical density between each light source and every other cell containing a voxel generated either from the medium or from a surface. This step is integrated into the rendering loop, which allows participating media and light sources to be animated without any particular constraint. We simulate all types of shadows: cast by the medium or by surfaces, onto the medium or onto surfaces.
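
    A simplified sketch of the fog traversal idea, assuming a single uncompressed 2D density map rather than the wavelet-separated B-Spline grids described above: the ray is marched in fixed steps, the horizontal density is modulated by an exponential vertical falloff, and the Beer-Lambert law turns the accumulated optical depth into a transmittance. Step size, cell size and the falloff constant are illustrative values.

        import numpy as np

        def fog_transmittance(density_map, origin, direction, max_dist,
                              vertical_falloff=0.2, step=0.5, cell_size=1.0):
            """March a ray through a 2D fog density map and return its transmittance.

            The map gives the horizontal density; an exponential falloff with the
            altitude (y coordinate) gives the layer its vertical thickness.
            """
            origin = np.asarray(origin, float)
            direction = np.asarray(direction, float)
            direction = direction / np.linalg.norm(direction)
            optical_depth, t = 0.0, 0.0
            while t < max_dist:
                p = origin + t * direction
                ix = int(np.clip(p[0] / cell_size, 0, density_map.shape[1] - 1))
                iz = int(np.clip(p[2] / cell_size, 0, density_map.shape[0] - 1))
                density = density_map[iz, ix] * np.exp(-vertical_falloff * max(p[1], 0.0))
                optical_depth += density * step     # accumulate Beer-Lambert optical depth
                t += step
            return np.exp(-optical_depth)

        # Usage: constant-density fog seen along a horizontal ray just above the ground.
        fogmap = np.full((32, 32), 0.05)
        print(fog_transmittance(fogmap, (0.0, 1.0, 0.0), (1.0, 0.0, 0.0), max_dist=20.0))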

    Reliability analysis of discrete-state performance functions via adaptive sequential sampling with detection of failure surfaces

    The paper presents a new efficient and robust method for rare event probability estimation for computational models of an engineering product or process that return categorical information only, for example, either success or failure. For such models, most of the methods designed for the estimation of failure probability, which use the numerical value of the outcome to compute gradients or to estimate the proximity to the failure surface, cannot be applied. Even if the performance function provides more than just binary output, the state of the system may be a non-smooth or even discontinuous function defined over the domain of continuous input variables. In these cases, classical gradient-based methods usually fail. We propose a simple yet efficient algorithm which performs a sequential adaptive selection of points from the input domain of random variables to extend and refine a simple distance-based surrogate model. Two different tasks can be accomplished at any stage of sequential sampling: (i) estimation of the failure probability, and (ii) selection of the best possible candidate for the subsequent model evaluation if further improvement is necessary. The proposed criterion for selecting the next point for model evaluation maximizes the expected probability classified by using the candidate; the balance between global exploration and local exploitation is therefore maintained automatically. The method can estimate the probabilities of multiple failure types. Moreover, when the numerical value of the model evaluation can be used to build a smooth surrogate, the algorithm can accommodate this information to increase the accuracy of the estimated probabilities. Lastly, we define a new simple yet general geometrical measure of the global sensitivity of the rare-event probability to individual variables, obtained as a by-product of the proposed algorithm. Comment: Manuscript CMAME-D-22-00532R1 (Computer Methods in Applied Mechanics and Engineering).
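
    A toy sketch of the distance-based surrogate loop under strong simplifying assumptions: Monte Carlo candidates drawn from the input distribution are classified by their nearest evaluated neighbour, the failure probability is the classified failure fraction, and the next model evaluation is placed where the nearest safe and nearest failed points are almost equidistant. This ambiguity-based selection is a stand-in for the paper's expected-classified-probability criterion, not a reproduction of it.

        import numpy as np

        def estimate_and_select(evaluated_x, evaluated_y, candidates):
            """One step of a distance-based surrogate for a binary (safe/failed) model.

            evaluated_x : (n, d) points already run through the expensive model
            evaluated_y : (n,) categorical outcomes, 1 = failure, 0 = success
            candidates  : (m, d) Monte Carlo sample of the input distribution
            Returns the current failure probability estimate and the next point to evaluate.
            """
            d = np.linalg.norm(candidates[:, None, :] - evaluated_x[None, :, :], axis=2)
            labels = evaluated_y[np.argmin(d, axis=1)]       # nearest-neighbour classification
            pf = labels.mean()

            if evaluated_y.min() == evaluated_y.max():       # only one class seen so far:
                return pf, candidates[np.argmax(d.min(axis=1))]  # explore the least covered region

            d_fail = d[:, evaluated_y == 1].min(axis=1)
            d_safe = d[:, evaluated_y == 0].min(axis=1)
            next_idx = np.argmin(np.abs(d_fail - d_safe))    # most ambiguous candidate
            return pf, candidates[next_idx]

        # Usage on a toy 2D problem where failure means x0 + x1 > 1.
        X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
        y = np.array([0, 1, 1, 0])
        pf, next_point = estimate_and_select(X, y, np.random.default_rng(0).normal(size=(500, 2)))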

    Virtual human modelling and animation for real-time sign language visualisation

    Magister Scientiae (MSc). This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality through manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye gaze, and head and upper-body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real time. Our goal was to develop a methodology and establish an open framework, using various standards and open technologies, to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a machine translation system that translates from a verbal language such as English to any sign language. The standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and adding facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation. By using these standards and technologies, we found that we could circumvent several difficult problems, such as modelling high-quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
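
    One of the H-Anim extensions mentioned above, joint rotational limits, can be sketched as a simple per-axis clamp on Euler angles. The example elbow limits are illustrative values, not the actual tables used in the thesis.

        def clamp_joint_rotation(rotation, limits):
            """Clamp a joint's Euler rotation (degrees) to per-axis limits.

            rotation : (rx, ry, rz)
            limits   : {"x": (lo, hi), "y": (lo, hi), "z": (lo, hi)}
            Keeping every skeleton joint inside plausible ranges prevents the avatar
            from reaching anatomically impossible poses during sign animation.
            """
            return tuple(
                min(max(angle, limits[axis][0]), limits[axis][1])
                for angle, axis in zip(rotation, ("x", "y", "z"))
            )

        # Hypothetical elbow limits: flexion only, limited twist, no sideways bend.
        elbow_limits = {"x": (0.0, 145.0), "y": (-80.0, 80.0), "z": (0.0, 0.0)}
        print(clamp_joint_rotation((160.0, 10.0, 5.0), elbow_limits))  # -> (145.0, 10.0, 0.0)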

    Interactive Rendering of Scattering and Refraction Effects in Heterogeneous Media

    In this dissertation we investigate the problem of interactive and real-time visualization of single scattering, multiple scattering and refraction effects in heterogeneous volumes. Our proposed solutions span a variety of usage scenarios: from a very fast yet physically based approximation to a physically accurate simulation of microscopic light transmission. We add to the state of the art by introducing a novel precomputation and sampling strategy, a system for efficiently parallelizing the computation of different volumetric effects, and a new and fast version of the Discrete Ordinates Method. Finally, we also present collateral work on real-time 3D acquisition devices.
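
    To make the single-scattering setting concrete, the sketch below estimates in-scattered radiance along a view ray through a heterogeneous medium, with an isotropic phase function and Beer-Lambert attenuation toward both the viewer and the light. Step sizes and coefficients are arbitrary illustrative values, and the brute-force double march is far simpler (and slower) than the interactive methods developed in the dissertation.

        import numpy as np

        def single_scattering(density_fn, origin, direction, light_dir, light_radiance=1.0,
                              sigma_t=1.0, sigma_s=0.5, step=0.1, max_dist=10.0, light_dist=5.0):
            """Estimate single-scattered radiance along a view ray through a heterogeneous volume.

            At each sample the in-scattered light is attenuated twice: once from the
            light into the medium and once back toward the viewer. An isotropic phase
            function (1 / 4 pi) keeps the sketch simple.
            """
            origin = np.asarray(origin, float)
            direction = np.asarray(direction, float)
            direction = direction / np.linalg.norm(direction)
            light_dir = np.asarray(light_dir, float)
            phase = 1.0 / (4.0 * np.pi)
            radiance, tau_view = 0.0, 0.0
            for t in np.arange(0.0, max_dist, step):
                p = origin + t * direction
                rho = density_fn(p)
                tau_view += sigma_t * rho * step                    # optical depth toward the viewer
                tau_light = sum(sigma_t * density_fn(p + s * light_dir) * step
                                for s in np.arange(0.0, light_dist, step))
                radiance += (np.exp(-tau_view) * sigma_s * rho * phase
                             * np.exp(-tau_light) * light_radiance * step)
            return radiance

        # Usage: a Gaussian density blob centred at the origin serves as a toy medium.
        blob = lambda p: float(np.exp(-np.dot(p, p)))
        print(single_scattering(blob, origin=(-3.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0),
                                light_dir=(0.0, 1.0, 0.0)))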