
    Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities

    Interactive, realistic rendering of landscapes and cities differs substantially from classical terrain rendering. Due to the sheer size and detail of the data that need to be processed, real-time rendering (i.e., more than 25 images per second) is only feasible with level-of-detail (LOD) models. Even the design and implementation of efficient, automatic LOD generation is ambitious for such out-of-core datasets, considering the large number of scales covered in a single view and the necessity to maintain screen-space accuracy for realistic representation. Moreover, users want to interact with the model based on semantic information, which needs to be linked to the LOD model. In this thesis I present LOD schemes for the efficient rendering of 2.5D digital surface models (DSMs) and 3D point clouds, a method for the automatic derivation of city models from raw DSMs, and an approach allowing semantic interaction with complex LOD models. The hierarchical LOD model for digital surface models is based on a quadtree of precomputed, simplified triangle-mesh approximations. The proposed model is shown to support real-time rendering of very large and complex models with pixel-accurate detail, and the necessary preprocessing is scalable and fast. For 3D point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon representations. For each LOD, the algorithm detects planar regions in an adequately subsampled point cloud and models them as textured rectangles. Rendering the resulting hybrid model is an order of magnitude faster than comparable point-based LOD schemes. To automatically derive a city model from a DSM, I propose a constrained mesh simplification. Apart from the geometric distance between the simplified and the original model, it evaluates constraints based on detected planar structures and their mutual topological relations. The resulting models are much less complex than the original DSM but still represent the characteristic building structures faithfully. Finally, I present a method to combine semantic information with complex geometric models. My approach links semantic entities to geometric entities on the fly via coarser proxy geometries that carry the semantic information. Thus, semantic information can be layered on top of complex LOD models without an explicit attribution step. All findings are supported by experimental results which demonstrate the practical applicability and efficiency of the methods.
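    The quadtree-based LOD selection with a screen-space error bound can be illustrated with a small sketch. This is not the thesis's implementation; the node layout, the 60° field of view, the viewport height, and the 1-pixel tolerance are assumptions chosen for the example:

```python
import math
from dataclasses import dataclass, field

@dataclass
class QuadtreeNode:
    """One tile of a precomputed LOD hierarchy (hypothetical layout)."""
    center: tuple            # world-space center (x, y, z)
    size: float              # edge length of the tile's bounding square
    geometric_error: float   # max deviation of this tile's mesh from the DSM
    children: list = field(default_factory=list)

def screen_space_error(node, eye, fov_y=math.radians(60), viewport_h=1080):
    """Project the node's geometric error to pixels at the viewer's distance."""
    dist = max(1e-6, math.dist(node.center, eye))
    pixels_per_unit = viewport_h / (2.0 * dist * math.tan(fov_y / 2.0))
    return node.geometric_error * pixels_per_unit

def select_lod(node, eye, tolerance_px=1.0, out=None):
    """Descend the quadtree while the projected error exceeds the tolerance;
    leaves of the traversal are the tiles handed to the renderer."""
    if out is None:
        out = []
    if not node.children or screen_space_error(node, eye) <= tolerance_px:
        out.append(node)          # this precomputed approximation suffices
    else:
        for child in node.children:
            select_lod(child, eye, tolerance_px, out)
    return out
```

    A distant viewer is served by the coarse root tile alone, while a nearby viewer triggers refinement into the four children, which is the behavior that keeps the triangle budget bounded per frame.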

    Planet-Sized Batched Dynamic Adaptive Meshes (P-BDAM)

    This paper describes an efficient technique for out-of-core management and interactive rendering of planet-sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as its basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single-precision floating-point arithmetic; we exploit a compressed out-of-core representation and speculative prefetching to hide disk latency during rendering of out-of-core data; and we efficiently construct high-quality simplified representations with a novel distributed out-of-core simplification algorithm working on a standard PC network.
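    The single-precision accuracy issue at planetary scale is commonly handled by storing each coordinate as a high/low pair of 32-bit floats ("double-single" arithmetic). The sketch below illustrates the problem and that trick; it is a generic CPU-side illustration, not the paper's actual shader code, and the sample coordinates are invented:

```python
import struct

def to_single(x):
    """Round a Python float to the nearest 32-bit float (GPU register precision)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def split_double(x):
    """Encode a double as a (high, low) pair of single-precision floats:
    hi carries the magnitude, lo carries the residual detail."""
    hi = to_single(x)
    lo = to_single(x - hi)
    return hi, lo

def ds_sub(a, b):
    """Subtract two double-single values. The large high parts cancel exactly,
    so sub-meter detail survives even at Earth-radius magnitudes."""
    return (a[0] - b[0]) + (a[1] - b[1])
```

    At a radius of ~6,371 km, a 32-bit float can only resolve steps of 0.5 m, so a naive single-precision subtraction of vertex and eye position loses all fine detail; the split representation preserves it.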

    Scalable Data Hiding for Online Textured 3D Terrain Visualization

    A method for scalable 3D visualization in a client/server environment is presented. The main idea of this paper is to increase the quality of 3D visualization for low-bit-rate transmission. All information, such as the texture, the digital elevation model, and the projection systems, is merged into a single file. The integration is achieved via data hiding, whereas the scalability is realized through the multiresolution nature of JPEG2000 encoding. The embedding step is done in the lossless DWT domain. The strategy is flexible, and it is up to the user to decide the transform level for texture and DEM. In this context, a comparison between various possibilities is presented by applying the method to a practical example. It is shown that very good visualization can be realized with even a tiny fraction of the encoded coefficients.
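    Embedding in a lossless DWT domain can be sketched as follows. The paper relies on JPEG2000's reversible 5/3 wavelet; for brevity this example uses the simpler integer Haar (S-) transform and plain LSB substitution in the detail coefficients, so it illustrates the principle rather than the authors' exact scheme:

```python
def haar_dwt_1d(pixels):
    """One level of the integer Haar transform: pairwise averages + differences.
    Integer arithmetic keeps the transform exactly invertible (lossless)."""
    avg = [(a + b) >> 1 for a, b in zip(pixels[::2], pixels[1::2])]
    diff = [a - b for a, b in zip(pixels[::2], pixels[1::2])]
    return avg, diff

def haar_idwt_1d(avg, diff):
    """Exact integer inverse of haar_dwt_1d."""
    out = []
    for s, d in zip(avg, diff):
        a = s + ((d + 1) >> 1)
        out += [a, a - d]
    return out

def embed(detail, bits):
    """Overwrite the LSB of each detail coefficient with one payload bit
    (e.g. DEM or projection metadata hidden inside the texture)."""
    return [(d & ~1) | b for d, b in zip(detail, bits)] + detail[len(bits):]

def extract(detail, n):
    """Read the first n payload bits back out of the coefficient LSBs."""
    return [d & 1 for d in detail[:n]]
```

    Because the hidden bits live in the transform domain, a client that decodes only a fraction of the coefficients still recovers the corresponding fraction of the payload, which is what makes the single merged file scalable.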

    LIME : Software for 3-D visualization, interpretation, and communication of virtual geoscience models

    Parts of LIME have been developed to address research requirements in projects funded by the Research Council of Norway (RCN) through the Petromaks and Petromaks 2 programs. The following grants are acknowledged: 153264 (VOG [Virtual Outcrop Geology]; with Statoil ASA), 163316 (Carbonate Reservoir Geomodels [IRIS (International Research Institute of Stavanger)]), 176132 (Paleokarst Reservoirs [Uni Research CIPR]), 193059 (EUSA; with FORCE Sedimentology and Stratigraphy Group), 234152 (Trias North [University of Oslo]; with Deutsche Erdoel AG, Edison, Lundin, Statoil, and Tullow), 234111 (VOM2MPS [Uni Research CIPR]; with FORCE Sedimentology and Stratigraphy Group), as well as SkatteFUNN (RCN) project 266740. In addition, the SAFARI project consortium (http://safaridb.com) is thanked for its continued support. The OSG and wxWidgets communities are acknowledged for ongoing commitment to providing mature and powerful software libraries. All authors thank colleagues past and present for studies culminating in the presented figures: Kristine Smaadal and Aleksandra Sima (Figs. 1 and 4); Colm Pierce (Fig. 2A); Eivind Bastesen, Roy Gabrielsen and Haakon Fossen (Fig. 3); Christian Haug Eide (Fig. 7); Ivar Grunnaleite and Gunnar Sælen (Fig. 8); and Magda Chmielewska (Fig. 9). Isabelle Lecomte contributed to discussions on geospatial-geophysical data fusion. Bowei Tong and Joris Vanbiervliet are acknowledged for internal discussions during article revision. The lead author thanks Uni Research for providing a base funding grant to refine some of the presented features. Finally, authors Buckley and Dewez are grateful to Institut Carnot BRGM for the RADIOGEOM mobility grant supporting the writing of this paper. Corbin Kling and one anonymous reviewer helped improve the final manuscript.

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment, where the naturally three-dimensional environment has been abstracted to a two-dimensional representation populated with simple geometrical shapes and symbols. However, an abstract representation requires map-reading ability. Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed, developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can exist on current mobile phones, and rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on the fly sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
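    The out-of-core rendering ingredient mentioned above (keeping only the tiles near the viewer in the phone's limited memory) can be sketched with a fixed-budget LRU tile cache. The `TileCache` class and its loader callback are hypothetical names invented for this illustration, not the dissertation's actual system:

```python
from collections import OrderedDict

class TileCache:
    """Fixed-budget in-memory cache for city-model tiles. The least-recently-
    used tile is evicted when the budget is exceeded, so a memory-limited
    device can render a model far larger than its RAM."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader              # fetches a tile from flash or network
        self.cache = OrderedDict()        # insertion/recency-ordered
        self.loads = 0                    # count of expensive fetches

    def get(self, tile_id):
        if tile_id in self.cache:
            self.cache.move_to_end(tile_id)       # mark as recently used
        else:
            self.loads += 1
            self.cache[tile_id] = self.loader(tile_id)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict the LRU tile
        return self.cache[tile_id]
```

    In a frame loop, the visibility-culling and LOD stages decide *which* tile IDs are needed, and the cache makes repeated requests for the same visible tiles cheap while bounding memory use.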

    A 3D Data Intensive Tele-immersive Grid

    Networked virtual environments like Second Life enable distant people to meet for leisure as well as work. But users are represented through avatars controlled by keyboards and mice, leading to a low sense of presence, especially regarding body language. Multi-camera real-time 3D modeling offers a way to ensure a significantly higher sense of presence. But producing quality, well-textured geometries and enabling distant-user tele-presence in non-trivial virtual environments is still a challenge today. In this paper we present a tele-immersive system based on multi-camera 3D modeling. Users from distant sites are immersed in a rich virtual environment served by a parallel terrain rendering engine. Distant users, present through their 3D models, can perform some local interactions while having a strong visual presence. We experimented with our system between three large cities a few hundred kilometers apart from each other. This work demonstrates the feasibility of a rich 3D multimedia environment ensuring users a strong sense of presence.

    Streaming and 3D mapping of agri-data on mobile devices

    Farm monitoring and operations generate heterogeneous AGRI-data from a variety of different sources that have the potential to be delivered to users 'on the go' and in the field to inform farm decision making. A software framework capable of interfacing with existing web mapping services to deliver in-field farm data on commodity mobile hardware was developed and tested. This raised key research challenges related to the robustness of data streaming methods under typical farm connectivity scenarios, and the mapping and 3D rendering of AGRI-data in an engaging and intuitive way. The presentation of AGRI-data in a 3D and interactive context was explored using different visualisation techniques; currently the 2D presentation of AGRI-data is the dominant practice, despite the fact that mobile devices can now support sophisticated 3D graphics via programmable pipelines. The testing found that WebSockets were the most reliable streaming method for high-resolution image/texture data. From our focus groups there was no single preferred visualisation technique, demonstrating that offering a range of methods is a good way to satisfy a large user base. An improved 3D experience on mobile phones is set to revolutionize the multimedia market, and a key challenge is identifying useful 3D visualisation methods and navigation tools that support the exploration of data-driven 3D interactive visualisation frameworks for AGRI-data.
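    Streaming a high-resolution texture over a WebSocket typically means splitting it into sequence-numbered frames on the server and rebuilding it on the client. The frame format below is invented for this sketch (the paper does not specify its wire format); it only illustrates the chunk/reassemble pattern:

```python
def chunk_payload(data, chunk_size=4096):
    """Split a texture byte string into ordered frames for WebSocket delivery.
    Each frame carries its sequence number and an end-of-stream flag."""
    return [
        {"seq": i,
         "last": start + chunk_size >= len(data),
         "body": data[start:start + chunk_size]}
        for i, start in enumerate(range(0, len(data), chunk_size))
    ]

def reassemble(frames):
    """Rebuild the original bytes on the client. Sorting by sequence number
    is defensive: it lets frames of several interleaved resources be
    collected independently before reassembly."""
    ordered = sorted(frames, key=lambda f: f["seq"])
    if not ordered or not ordered[-1]["last"]:
        raise ValueError("stream incomplete")
    return b"".join(f["body"] for f in ordered)
```

    Under the flaky connectivity typical of rural fields, small frames like these also give the client natural progress checkpoints from which a transfer can resume.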