
    Single-picture reconstruction and rendering of trees for plausible vegetation synthesis

    State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation suitable only for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, involves no tree modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming.
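
    As an illustration of the leaflet-distribution step, here is a minimal sketch of a space colonization algorithm in the spirit of the one the abstract names. All parameter names and values (influence_radius, kill_radius, step) are illustrative assumptions, not taken from the paper; attraction points would be sampled inside the crown volume recovered from the radial distance map.

```python
import numpy as np

def space_colonization(attractions, root, influence_radius=0.4,
                       kill_radius=0.1, step=0.05, max_iters=200):
    """Grow a branching skeleton toward attraction points.
    attractions: (N,3) points sampled inside the crown volume (assumed input).
    root: (3,) starting node of the skeleton."""
    nodes = [np.asarray(root, float)]
    edges = []                                   # pairs of node indices
    attractions = np.asarray(attractions, float)
    for _ in range(max_iters):
        if len(attractions) == 0:
            break
        # For each attraction point, find its closest skeleton node.
        d = np.linalg.norm(attractions[:, None, :] - np.array(nodes)[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        grew = False
        for i, node in enumerate(list(nodes)):
            mask = (nearest == i) & (d[np.arange(len(attractions)), nearest] < influence_radius)
            if not mask.any():
                continue
            # Step toward the average direction of the influencing points.
            dirs = attractions[mask] - node
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            growth = dirs.mean(axis=0)
            n = np.linalg.norm(growth)
            if n == 0:
                continue
            nodes.append(node + step * growth / n)
            edges.append((i, len(nodes) - 1))
            grew = True
        # Remove attraction points that the skeleton has reached.
        d_new = np.linalg.norm(attractions[:, None, :] - np.array(nodes)[None, :, :], axis=2)
        attractions = attractions[d_new.min(axis=1) > kill_radius]
        if not grew:
            break
    return np.array(nodes), edges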

    Image-based tree variations

    The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
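
    The contour-synthesis step lends itself to a compact sketch. Below is a minimal illustration of a convex combination of exemplar contours, assuming the contours have already been resampled to a common point count and put into correspondence; the alignment strategy and the name blend_contours are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def blend_contours(contours, weights):
    """Synthesize a new closed contour as a convex combination of exemplars.
    contours: (K, N, 2) exemplar contours, resampled to N points each and
              brought into correspondence beforehand (assumed done, e.g. by
              arc length from a common anchor point).
    weights:  (K,) non-negative weights, at least one positive."""
    contours = np.asarray(contours, float)
    w = np.clip(np.asarray(weights, float), 0.0, None)
    w /= w.sum()                       # enforce a convex combination
    return np.einsum('k,knd->nd', w, contours)

# Example: random weights give one of arbitrarily many crown variations.
rng = np.random.default_rng(0)
K, N = 4, 256
exemplars = rng.random((K, N, 2))      # placeholder for real aligned contours
new_contour = blend_contours(exemplars, rng.random(K))
```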

    Combining Procedural and Hand Modeling Techniques for Creating Animated Digital 3D Natural Environments

    This thesis focuses on a systematic solution for rendering 3D photorealistic natural environments using Maya's procedural methods and ZBrush. The work began by comparing two industry-specific procedural applications, Vue and Maya's Paint Effects, to determine which is better suited for applying animated procedural effects with the highest level of fidelity and expandability. Objects generated with Paint Effects showed the greatest potential through their attributes, texturing and lighting. To optimize results further, compatibility with sculpting programs such as ZBrush is required to sculpt higher levels of detail. The final combined workflow produced the results used in the short film Fall. The demand for such effects stems from the visual effects industry's growing ability to deliver realistic simulations of nature's complexity and, in turn, the public's insatiable appetite to see them on screen. The requirements for delivering a photorealistic digital environment, however, usually come with tight deadlines, because the phases of a visual effects project are interconnected across multiple production houses; effective methods are therefore needed to deliver a high-end visual presentation. The use of a procedural system, such as an L-system, is often the initial step in a workflow for creating photorealistic vegetation for visual effects environments. Procedure-based systems such as Maya's Paint Effects feature robust controls that can generate many natural objects. A balance is thus struck between modeling objects quickly, with limited detail, and retaining control. Other methods outside this system must be used to achieve higher levels of fidelity through attributes, expressions, lighting and texturing. The procedural engine within Maya's Paint Effects covers the beginning stages of modeling a 3D natural environment; ZBrush's manual approach can then refine the aesthetics to a much finer degree of fidelity. Leveraging both types of systems yields photorealistic objects that preserve all of the procedural and dynamic forces specified within the Paint Effects procedural engine.
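
    Since the abstract points to L-systems as the typical initial step, a minimal string-rewriting L-system may help fix the idea. The rules below are illustrative assumptions only and are unrelated to the internal grammar of Paint Effects.

```python
# Minimal bracketed L-system expansion. 'F' = draw a segment,
# '+'/'-' = turn, '[' / ']' = push/pop turtle state.
RULES = {'X': 'F[+X][-X]FX', 'F': 'FF'}   # illustrative rules

def expand(axiom, rules, iterations):
    """Apply the rewriting rules to every symbol of the string in parallel."""
    s = axiom
    for _ in range(iterations):
        s = ''.join(rules.get(c, c) for c in s)
    return s

print(expand('X', RULES, 3))   # the string a turtle renderer would interpret
```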

    TreeSketchNet: From Sketch To 3D Tree Parameters Generation

    3D modelling of non-linear objects from stylized sketches is a challenge even for experts in Computer Graphics (CG). Extrapolating an object's parameters from a stylized sketch is a very complex and cumbersome task. In the present study, we propose a broker system that mediates between the modeler and the 3D modelling software and can transform a stylized sketch of a tree into a complete 3D model. The input sketches do not need to be accurate or detailed; they only need to represent a rudimentary outline of the tree that the modeler wishes to model. Our approach is based on a well-defined Deep Neural Network (DNN) architecture, which we call TreeSketchNet (TSN). It is based on convolutions and generates Weber-Penn parameters that can be interpreted by the modelling software to produce a 3D model of a tree from a simple sketch. The training dataset consists of Synthetically-Generated (SG) sketches that are associated with Weber-Penn parameters produced by a dedicated Blender modelling add-on. The accuracy of the proposed method is demonstrated by testing the TSN with both synthetic and hand-made sketches. Finally, we provide a qualitative analysis of our results by evaluating the coherence of the predicted parameters with several distinguishing features.
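
    For readers unfamiliar with this kind of broker, a toy convolutional regressor from a sketch image to a parameter vector may clarify the setup. This is only a schematic stand-in: the real TSN architecture, input resolution and number of Weber-Penn parameters are not reproduced here, and n_params=30 is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

class SketchToParams(nn.Module):
    """Toy convolutional regressor: grayscale sketch -> parameter vector.
    Not the TSN architecture; a minimal sketch of the input/output contract."""
    def __init__(self, n_params=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling to (B,128,1,1)
        )
        self.head = nn.Linear(128, n_params)   # regress the parameter vector

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SketchToParams()
sketch = torch.randn(1, 1, 256, 256)   # stand-in for a normalized sketch image
params = model(sketch)                 # vector the modelling add-on would consume
```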

    Realistic reconstruction and rendering of detailed 3D scenarios from multiple data sources

    During the last years, we have witnessed significant improvements in digital terrain modeling, mainly through photogrammetric techniques based on satellite and aerial photography, as well as laser scanning. These techniques allow the creation of Digital Elevation Models (DEM) and Digital Surface Models (DSM) that can be streamed over the network and explored through virtual globe applications like Google Earth or NASA WorldWind. The resolution of these 3D scenes has improved noticeably in the last years, reaching in some urban areas resolutions of 1 m or less for DEM and buildings, and less than 10 cm per pixel in the associated aerial imagery. However, in rural, forest or mountainous areas, the typical resolution for elevation datasets ranges between 5 and 30 meters, and the typical resolution of the corresponding aerial photographs ranges between 25 cm and 1 m. This level of detail is only sufficient for aerial points of view; as the viewpoint approaches the surface, the terrain loses its realistic appearance. One approach to augmenting the detail of currently available datasets is adding synthetic details in a plausible manner, i.e. including elements that match the features perceived in the aerial view. By combining the real dataset with the instancing of models on the terrain and other procedural detail techniques, the effective resolution can potentially become arbitrary. Several applications do not need an exact reproduction of the real elements but would greatly benefit from plausibly enhanced terrain models: videogames and entertainment applications, visual impact assessment (e.g. how a new ski resort would look), virtual tourism, simulations, etc. In this thesis we propose new methods and tools to help the reconstruction and synthesis of high-resolution terrain scenes from currently available data sources, in order to achieve realistic-looking ground-level views. In particular, we decided to focus on rural scenarios, mountains and forest areas. Our main goal is the combination of plausible synthetic elements and procedural detail with publicly available real data to create detailed 3D scenes of existing locations. Our research has focused on the following contributions:
    - An efficient pipeline for aerial imagery segmentation
    - Plausible terrain enhancement from high-resolution examples
    - Super-resolution of DEM by transferring details from the aerial photograph (see the sketch after this list)
    - Synthesis of arbitrary tree picture variations from a reduced set of photographs
    - Reconstruction of 3D tree models from a single image
    - A compact and efficient tree representation for real-time rendering of forest landscapes
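    Of the listed contributions, the DEM super-resolution step can be sketched compactly. The following heuristic is an assumption-laden illustration, not the thesis's method: it upsamples the DEM bilinearly and injects the high-pass of a co-registered aerial luminance image as fine relief, with detail_gain an arbitrary scale factor.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def dem_superres(dem, aerial_gray, scale, detail_gain=0.5, sigma=2.0):
    """Upsample a DEM and add plausible high-frequency relief.
    dem:         2D elevation grid (metres).
    aerial_gray: 2D luminance image, assumed co-registered with the DEM and
                 already at (or above) the target resolution.
    scale:       upsampling factor for the DEM."""
    dem_up = zoom(dem, scale, order=1)                # bilinear upsampling
    h, w = dem_up.shape
    gray = aerial_gray[:h, :w]                        # crop to the new grid
    residual = gray - gaussian_filter(gray, sigma)    # high-pass photo detail
    return dem_up + detail_gain * residual            # inject as fine relief
```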

    Real-time rendering and simulation of trees and snow

    Tree models created by a package used in industry are exported and their structure extracted in order to procedurally regenerate the geometric mesh, addressing the limitations of the application's standard output. The structure, once extracted, is used to generate a complete high-quality skeleton for the tree, individually representing each section in every branch to give the greatest achievable freedom of deformation and animation. Around the generated skeleton, a new geometric mesh is wrapped using a single, continuous surface, removing intersection-based render artefacts. Surface smoothing and enhanced detail are added to the model dynamically using the GPU tessellation engine. A real-time snow accumulation system is developed to generate snow cover on a dynamic, animated scene. Occlusion techniques are used to project snow-accumulating faces and map exposed areas to applied accumulation maps in the form of dynamic textures. Accumulation maps are fixed to applied surfaces, allowing moving objects to maintain accumulated snow cover. Mesh generation is performed dynamically during the rendering pass using surface offsetting and tessellation to enhance required detail.
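
    A crude CPU-side sketch of the exposure test behind such accumulation maps: a face accumulates snow when it points sufficiently upward and no scene geometry rises above it in a height map. The height-map occlusion test and all names here are simplifying assumptions; the thesis projects snow-accumulating faces on the GPU instead.

```python
import numpy as np

def snow_exposure(centers, normals, heightmap, cell, up=(0.0, 1.0, 0.0),
                  facing_thresh=0.3):
    """Per-face snow exposure mask.
    centers:   (F,3) face centres; normals: (F,3) unit face normals.
    heightmap: 2D array of maximum scene heights on an XZ grid with
               spacing `cell` (a stand-in for the occlusion projection)."""
    up = np.asarray(up, float)
    facing = normals @ up                       # cosine with the up axis
    ix = (centers[:, 0] / cell).astype(int).clip(0, heightmap.shape[0] - 1)
    iz = (centers[:, 2] / cell).astype(int).clip(0, heightmap.shape[1] - 1)
    occluded = heightmap[ix, iz] > centers[:, 1] + 1e-3
    return (facing > facing_thresh) & ~occluded   # faces that collect snow
```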