    LOD Generation for Urban Scenes

    We introduce a novel approach that reconstructs 3D urban scenes in the form of levels of detail (LODs). Starting from raw data sets such as surface meshes generated by multi-view stereo systems, our algorithm proceeds in three main steps: classification, abstraction and reconstruction. From geometric attributes and a set of semantic rules combined with a Markov random field, we classify the scene into four meaningful classes. The abstraction step detects and regularizes planar structures on buildings, fits icons on trees, roofs and facades, and performs filtering and simplification for LOD generation. The abstracted data are then provided as input to the reconstruction step, which generates watertight buildings through a min-cut formulation on a set of 3D arrangements. Our experiments on complex buildings and large-scale urban scenes show that our approach generates meaningful LODs while being robust and scalable. By combining semantic segmentation and abstraction it also outperforms general mesh approximation approaches at preserving urban structures.
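
    The classification step lends itself to a compact illustration: each mesh facet receives per-class data costs from its geometric attributes, and a pairwise term encourages neighboring facets to agree. The sketch below (Python) shows this kind of MRF labeling with a simple iterated-conditional-modes solver rather than the paper's actual optimizer; the class names, cost inputs and smoothness weight are illustrative assumptions.

        # Minimal sketch of MRF-style facet labeling solved with iterated
        # conditional modes (ICM); a stand-in for the paper's optimizer.
        # Class names and all numbers below are illustrative assumptions.
        import numpy as np

        CLASSES = ["ground", "vegetation", "facade", "roof"]  # assumed label set

        def icm_labeling(unary, adjacency, smoothness=1.0, iters=10):
            """unary: (n_facets, n_classes) data costs; adjacency: neighbor index lists."""
            labels = unary.argmin(axis=1)            # start from the per-facet best class
            for _ in range(iters):
                changed = False
                for f, neighbors in enumerate(adjacency):
                    costs = unary[f].copy()
                    for c in range(unary.shape[1]):  # Potts term: count disagreeing neighbors
                        costs[c] += smoothness * sum(labels[n] != c for n in neighbors)
                    best = costs.argmin()
                    if best != labels[f]:
                        labels[f], changed = best, True
                if not changed:
                    break
            return labels

        # Toy run: the middle facet weakly prefers "roof" but its neighbors pull it to "facade".
        unary = np.array([[5, 4, 0.5, 4], [5, 4, 2.0, 1.9], [5, 4, 0.5, 4]], dtype=float)
        adjacency = [[1], [0, 2], [1]]
        print([CLASSES[i] for i in icm_labeling(unary, adjacency)])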

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
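
    As a concrete illustration of the runtime LoD selection criteria mentioned above, the sketch below picks a representation per character from camera distance alone; the distance thresholds and the mapping to full mesh, simplified mesh, impostor and culled are illustrative assumptions rather than values from any surveyed system.

        # Minimal sketch of distance-based LoD selection for crowd characters.
        # Thresholds and representation names are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class Character:
            position: tuple  # world-space (x, y, z)

        def select_lod(character, camera_pos, thresholds=(10.0, 30.0, 80.0)):
            """Return 0 = full mesh, 1 = simplified mesh, 2 = impostor, 3 = cheapest/culled."""
            dx, dy, dz = (c - p for c, p in zip(character.position, camera_pos))
            dist = (dx * dx + dy * dy + dz * dz) ** 0.5
            for lod, limit in enumerate(thresholds):
                if dist < limit:
                    return lod
            return len(thresholds)  # beyond the last threshold

        # Toy usage: a character 25 units from the camera gets the simplified mesh (LoD 1).
        print(select_lod(Character(position=(25.0, 0.0, 0.0)), camera_pos=(0.0, 0.0, 0.0)))

    Screen-space coverage or perceptual metrics are common alternatives to raw distance among the selection criteria such surveys review.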

    Procedural digital twin generation for co-creating in VR focusing on vegetation

    An early-stage development of a Digital Twin (DT) in Virtual Reality (VR) is presented, aiming for civic engagement in a new urban development located in an area that is a forest today. The area is presently used for recreation. For the developer, it is important both to communicate how the new development will affect the forest and to allow for feedback from citizens. High-quality DT models are time-consuming to generate, especially for VR. Current model generation methods require the model developer to manually design the virtual environment. Furthermore, they are not scalable when multiple scenarios are required as a project progresses. This study aimed to create an automated, procedural workflow to generate DT models and visualize large-scale data in VR, with a focus on existing green structures as a basis for participatory approaches. Two versions of the VR prototype were developed in close cooperation with the urban developer and evaluated in two user tests. A procedural workflow was developed for generating DT models and integrated into the VR application. For the green structures, efforts focused on the vegetation, such as realistic representation and placement of different types of trees and bushes. Only navigation functions were enabled in the first user test with practitioners (9 participants); interactive functions were enabled in the second user test with pupils (age 15, 9 participants). In both tests, the researchers observed the participants and carried out short reflective interviews. The evaluation focused on the perception of the vegetation, the general perception of the VR environment, interaction, and navigation. The results show that the workflow is effective and that, in both user tests, users appreciated the green-structure representations in the VR environment. Based on the workflow, similar scenes can be created for any location in Sweden. Future development needs to concentrate on the refinement of buildings and information content. A challenge will be balancing the level of detail for communication with residents.
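
    A minimal sketch of what one procedural vegetation-placement step might look like is given below: tree and bush instances are scattered over a rectangular forest patch with jittered spacing and a random species, scale and rotation. The patch bounds, spacing, species list and value ranges are illustrative assumptions, not details of the study's workflow.

        # Minimal sketch of procedural vegetation placement for a DT scene:
        # scatter instances on a jittered grid inside an axis-aligned patch.
        # Spacing, species and ranges are illustrative assumptions.
        import random

        SPECIES = ["pine", "birch", "bush"]  # assumed asset categories

        def scatter_vegetation(x_min, y_min, x_max, y_max, spacing=4.0, jitter=1.5, seed=42):
            rng = random.Random(seed)  # seeded so regeneration is reproducible
            instances = []
            y = y_min
            while y < y_max:
                x = x_min
                while x < x_max:
                    instances.append({
                        "species": rng.choice(SPECIES),
                        "x": x + rng.uniform(-jitter, jitter),
                        "y": y + rng.uniform(-jitter, jitter),
                        "scale": rng.uniform(0.8, 1.2),
                        "rotation_deg": rng.uniform(0.0, 360.0),
                    })
                    x += spacing
                y += spacing
            return instances

        # Toy usage: a 100 m x 100 m patch at 4 m spacing yields 625 instances.
        print(len(scatter_vegetation(0.0, 0.0, 100.0, 100.0)))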

    A Novel Building Temperature Simulation Approach Driven by Expanding Semantic Segmentation Training Datasets with Synthetic Aerial Thermal Images

    Multi-sensor imagery has been used for the semantic segmentation of buildings and outdoor scenes. Because such multi-sensor data are scarce, researchers have implemented many simulation approaches to create synthetic datasets, including synthetic thermal images, since thermal information can potentially improve segmentation accuracy. However, current approaches are mostly physics-based and are limited by the level of detail (LOD) of the geometric models, which describes the overall planning or modeling state. Another issue with current physics-based approaches is that the rendered thermal images cannot be aligned with the RGB images, because the configuration of the virtual camera used for rendering is difficult to synchronize with that of the real camera used for capturing the RGB images, and such alignment is important for segmentation. In this study, we propose an image translation approach that directly converts RGB images into simulated thermal images to expand segmentation datasets. We investigate the benefits of generating synthetic aerial thermal images by image translation and compare this approach with physics-based approaches. Our datasets for generating thermal images come from a city center and a university campus in Karlsruhe, Germany. We found that a generating model trained on the city-center data produced better thermal images for the campus dataset than a model trained on the campus data produced for the city-center dataset. We also found that a generating model trained on one building style generated good thermal images for datasets with the same building style. We therefore suggest that, for an image translation approach, training datasets should contain richer and more diverse architectural information and more complex envelope structures, and share building styles with the test datasets.
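
    The inference side of such an RGB-to-thermal translation can be sketched as follows, assuming a pix2pix-style generator has already been trained; the 256x256 input size, the [-1, 1] normalization and the single-channel output are illustrative assumptions rather than details taken from the paper.

        # Minimal sketch of applying a trained RGB-to-thermal image translation
        # generator (PyTorch); sizes and normalization are illustrative assumptions.
        import numpy as np
        import torch
        from PIL import Image

        def rgb_to_thermal(generator, rgb_path, out_path, device="cpu"):
            rgb = Image.open(rgb_path).convert("RGB").resize((256, 256))
            x = torch.from_numpy(np.asarray(rgb, dtype=np.float32) / 127.5 - 1.0)  # scale to [-1, 1]
            x = x.permute(2, 0, 1).unsqueeze(0).to(device)  # HWC -> NCHW batch of one
            generator.eval()
            with torch.no_grad():
                y = generator(x)  # assumed output: (1, 1, 256, 256) thermal map in [-1, 1]
            thermal = ((y[0, 0].cpu().numpy() + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
            Image.fromarray(thermal, mode="L").save(out_path)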