
    Creating landscapes with simulated colliding plates

    The creation of realistic virtual terrain is a longstanding computer graphics problem, as terrain forms the backdrop of any virtual world. Approaches to date have taken one of two paths: fractally generating landscapes, or simulating the processes of water and thermal erosion. I have developed a new method to synthesize virtual landscapes by simulating some of the geological forces that create real-world landscapes: I model the collision and deformation of simulated tectonic plates, and create features that mimic those found along real-world plate boundaries. This is achieved through a meshless object representation subjected to physically-based forces, using existing techniques for accurately modeling stress and strain in solid objects.
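The converging-plate idea can be caricatured in a few lines. The sketch below is a toy 1D uplift model under assumed parameters (decay width, smoothing factor); it is not the paper's meshless stress/strain formulation, only an illustration of how compression at a boundary produces mountain-like relief:

```python
import numpy as np

def collide_plates(n=100, steps=50, rate=0.5):
    """Toy 1D model of two converging plates: each step deposits material
    near the boundary (decaying with distance), then a crude diffusion
    pass smooths steep slopes.  Illustrative only -- the paper models
    stress and strain in a meshless solid-object representation."""
    height = np.zeros(n)
    boundary = n // 2
    x = np.arange(n)
    for _ in range(steps):
        # compression accumulates material around the plate boundary
        height += rate * np.exp(-np.abs(x - boundary) / 5.0)
        # simple smoothing, loosely analogous to thermal erosion
        height[1:-1] += 0.1 * (height[:-2] - 2 * height[1:-1] + height[2:])
    return height

h = collide_plates()
```

Under this toy model the highest terrain forms exactly at the plate boundary, which is the qualitative behaviour the abstract describes along real-world convergent boundaries.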

    Generation and Rendering of Interactive Ground Vegetation for Real-Time Testing and Validation of Computer Vision Algorithms

    During the development process of new algorithms for computer vision applications, testing and evaluation in real outdoor environments is time-consuming and often difficult to realize. The use of artificial testing environments is therefore a flexible and cost-efficient alternative, and the development of new techniques for simulating natural, dynamic environments is essential for real-time virtual reality applications, commonly known as Virtual Testbeds. Since the first basic usage of Virtual Testbeds several years ago, the image quality of virtual environments has reached a level close to photorealism, even in real time, due to new rendering approaches and the increasing processing power of current graphics hardware. Because of this, Virtual Testbeds can now be applied in application areas like computer vision that strongly rely on realistic scene representations. The realistic rendering of natural outdoor scenes has become increasingly important in many application areas, but computer-simulated scenes often differ considerably from real-world environments, especially regarding interactive ground vegetation. In this article, we introduce a novel ground vegetation rendering approach that is capable of generating large scenes with realistic appearance and excellent performance. Our approach features wind animation as well as object-to-grass interaction, and delivers realistically appearing grass and shrubs at all distances and from all viewing angles. This greatly improves immersion as well as acceptance, especially in virtual training applications. The rendered results also fulfill important requirements for the computer vision aspect, such as a plausible geometry representation of the vegetation and its consistency during the entire simulation. Feature detection and matching algorithms are applied to our approach in localization scenarios of mobile robots in natural outdoor environments. We show how the quality of computer vision algorithms is influenced by highly detailed, dynamic environments, as observed in unstructured, real-world outdoor scenes with wind and object-to-vegetation interaction.
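The abstract does not specify the wind-animation model, but a common real-time approximation displaces each grass vertex horizontally by an amount that grows with its height along the blade, so roots stay anchored while tips sway. A minimal sketch, with assumed parameter names (`wind_strength`, `freq`):

```python
import math

def sway_offset(vertex_height, blade_height, t, wind_strength=0.15, freq=1.2):
    """Per-vertex horizontal sway for a grass blade: displacement grows
    quadratically with normalised height, so the root never moves and the
    tip sways the most.  A generic real-time approximation, not the
    article's exact animation model."""
    h = max(0.0, min(1.0, vertex_height / blade_height))
    phase = math.sin(2.0 * math.pi * freq * t)
    return wind_strength * h * h * phase
```

The quadratic height weighting is what makes the motion read as bending rather than rigid translation; object-to-grass interaction can be layered on top by adding a second, contact-driven offset term.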

    Interactive Vegetation Rendering with Slicing and Blending

    Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification exploits the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360 degree circle around the center of a tree is achieved by blending between the nearest two slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with quality perceptually similar to the original model.
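The blending scheme described above can be sketched directly from the abstract: with 6 slicings spaced evenly around the tree, the view azimuth selects the two nearest slicings, and a linear weight crossfades between them as the camera rotates. A minimal sketch (the index/weight convention is an assumption; the paper defines the details):

```python
def slicing_blend(view_azimuth_deg, num_slicings=6):
    """Pick the two slicings nearest the view direction and return
    (index_a, index_b, weight_b).  With 6 slicings the spacing is 60
    degrees; weight_b rises linearly from 0 to 1 as the camera rotates
    from slicing a toward slicing b."""
    spacing = 360.0 / num_slicings
    a = int(view_azimuth_deg // spacing) % num_slicings
    b = (a + 1) % num_slicings
    w = (view_azimuth_deg % spacing) / spacing
    return a, b, w

# facing slicing 0 exactly: no contribution from the neighbour
assert slicing_blend(0.0) == (0, 1, 0.0)
# halfway between slicings 1 and 2: equal blend
assert slicing_blend(90.0) == (1, 2, 0.5)
```

At render time the two selected slicings would be drawn with alpha weights `1 - w` and `w`, which is what hides the popping that a hard nearest-slicing switch would produce.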

    Using Different Data Sources for New Findings in Visualization of Highly Detailed Urban Data

    Measurement of infrastructure has evolved considerably in recent years. Scanning systems have become more precise, and many methods have been found to add and improve content created for the analysis of buildings and landscapes. The sheer amount of data has therefore increased significantly, and new algorithms had to be found to visualize these data for further exploration. Additionally, many data types and formats originate from different sources, such as Dibit's hybrid scanning systems delivering laser-scanned point clouds and photogrammetric texture images. These are usually analyzed separately. Combinations of different types of data are not widely used, but might lead to new findings and improved data exploration. In our work we use different data formats, such as meshes, unprocessed point clouds and polylines, in tunnel visualization to give experts a tool to explore existing datasets in depth with a wide variety of possibilities. The diverse creation of datasets leads to new challenges for preprocessing, out-of-core rendering and efficient fusion of this varying information. Interactive analysis of different data formats also requires several approaches and is usually difficult to merge into one application. In this paper we describe the challenges and advantages of combining different data sources in tunnel visualization. Large meshes with high-resolution textures are merged with dense point clouds and additional measurements. Interactive analysis can also create additional information, which has to be integrated precisely to prevent errors and misinterpretation. We present the basic algorithms used for heterogeneous data formats, how we combined them, and what advantages our methods create. Several datasets evolve over time; this dynamic is also considered in our visualization and analysis methods to enable change detection. For tunnel monitoring, this allows experts to investigate the entire history of the construction project and helps to make better-informed decisions in subsequent construction phases or for repairs. Several methods are merged, like the data they are based on, enabling new ways of data exploration. In analyzing this new approach to heterogeneous datasets, we conclude that the combination of different sources yields a solution greater than the sum of its parts.
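Change detection between two scan epochs can be illustrated with a nearest-neighbour distance test: for each point in the new cloud, find the closest point in the old cloud and flag it if the distance exceeds a tolerance. The sketch below uses brute-force distances for clarity (a k-d tree would be used at scale); the paper's actual pipeline is not detailed in the abstract:

```python
import numpy as np

def change_map(scan_old, scan_new, threshold=0.05):
    """Per-point change detection between two epochs of a tunnel scan.
    Returns a boolean mask over scan_new marking points that moved more
    than `threshold` (in the scan's units) relative to the old epoch.
    Brute-force nearest neighbour; illustrative only."""
    # pairwise distance matrix of shape (n_new, n_old) via broadcasting
    d = np.linalg.norm(scan_new[:, None, :] - scan_old[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest > threshold

old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
new = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0]])  # second point shifted
mask = change_map(old, new)
```

The same mask can then drive color-coding in the viewer, which is one simple way such a visualization could surface deformation over the construction history.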

    Combining Procedural and Hand Modeling Techniques for Creating Animated Digital 3D Natural Environments

    This thesis focuses on a systematic solution for rendering 3D photorealistic natural environments using Maya's procedural methods and ZBrush. The methods used in this thesis started with comparing two industry-specific procedural applications, Vue and Maya's Paint Effects, to determine which is better suited for applying animated procedural effects with the highest level of fidelity and expandability. Generated objects from Paint Effects showed the highest potential through object attributes, texturing and lighting. To optimize results further, compatibility with sculpting programs such as ZBrush is required to sculpt higher levels of detail. The final combined workflow produces the results used in the short film Fall. The need for producing these effects is attributed to the growth of the visual effects industry's ability to deliver realistic simulated complexities of nature and, as such, the public's insatiable need to see them on screen. Usually, however, the requirements for delivering a photorealistic digital environment fall under tight deadlines, with the various phases of a visual effects project interconnected across multiple production houses, thereby requiring effective methods to deliver a high-end visual presentation. The use of a procedural system, such as an L-system, is often an initial step within a workflow leading toward creating photorealistic vegetation for visual effects environments. Procedure-based systems, such as Maya's Paint Effects, feature robust controls that can generate many natural objects. A balance is thus struck between being able to model objects quickly, but with limited detail and control. Other methods outside this system must be used to achieve higher levels of fidelity through the use of attributes, expressions, lighting and texturing. Utilizing the procedural engine within Maya's Paint Effects enables the beginning stages of modeling a 3D natural environment. ZBrush's manual sculpting approach can then bring the aesthetics to a much finer degree of fidelity. The benefit of leveraging both types of systems is photorealistic objects that preserve all of the procedural and dynamic forces specified within the Paint Effects procedural engine.

    Implementation of computer visualisation in UK planning

    PhD Thesis. Within the processes of public consultation and development management, planners are required to consider spatial information and appreciate spatial transformations and future scenarios. In the past, conventional media such as maps, plans, illustrations, sections, and physical models have been used. These traditional visualisations involve a high degree of abstraction, are sometimes difficult for lay people to understand, and are inflexible in terms of the range of scenarios that can be considered. Due to technical advances and falling costs, the potential of computer-based visualisation has much improved, and it has been increasingly adopted within the planning process. Despite the growth in this field, insufficient consideration has been given to the possible weaknesses of computerised visualisations. Reflecting this lack of research, this study critically evaluates the use and potential of computerised visualisation within this process. The research is divided into two components: case study analysis and reflections of the author following his involvement in the design and use of visualisations in a series of planning applications; and in-depth interviews with experienced practitioners in the field. Based on a critical review of existing literature, this research explores in particular the issues of credibility, realism and costs of production. The research findings illustrate the importance of the credibility of visualisations, a topic given insufficient consideration within the academic literature. Whereas the realism of visualisations has been the focus of much previous research, the results of the case studies and practitioner interviews undertaken in this research suggest a 'photo-realistic' level of detail may not be required as long as the observer considers the visualisations a credible reflection of the underlying reality. Although visualisations will always be a simplification of reality and their level of realism is subjective, there is still potential for developing guidelines or protocols for image production based on commonly agreed standards. In the absence of such guidelines, there is a danger that scepticism about the credibility of computer visualisations will prevent the approach from being used to its full potential. These findings suggest there needs to be a balance between scientific protocols and artistic licence in the production of computer visualisation. In order to be sufficiently credible for use in decision making within the planning process, the production of computer visualisation needs to follow a clear methodology and scientific protocols set out in good practice guidance published by professional bodies and governmental organisations.

    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline, a Blinn-Phong lighting model with real-time leaf transparency, and post-processing lighting effects. The result is a system that achieves a balance between high natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation.
    Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
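The L-system step the abstract mentions boils down to parallel string rewriting: every symbol of an axiom is replaced by its production rule on each iteration, and the resulting string is later interpreted as turtle-graphics drawing commands. A minimal deterministic sketch (the tool itself uses specialised, likely parametric or stochastic, L-systems):

```python
def expand_lsystem(axiom, rules, iterations):
    """Rewrite the axiom string with parallel production rules -- the
    core of L-system tree generation.  Symbols without a rule are
    copied unchanged (e.g. the bracket symbols that push/pop the
    turtle's position and heading)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# classic branching grammar: F = step forward, [ ] = push/pop state,
# + and - = turn left/right
rules = {"F": "F[+F]F[-F]F"}
```

One iteration turns a single segment into a five-segment branching pattern; because every `F` rewrites again on the next pass, the string (and the tree's geometric detail) grows geometrically with iteration count.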