
    A Sparse Voxel Octree-Based Framework for Computing Solar Radiation Using 3D City Models

    An effective three-dimensional (3D) data representation is required to assess the spatial distribution of the photovoltaic potential over urban building roofs and facades using 3D city models. Voxels have long been used as a spatial data representation, but practical applications of voxels have been limited compared with rasters in traditional two-dimensional (2D) geographic information systems (GIS). We propose using a sparse voxel octree (SVO) as the data representation to extend the GRASS GIS r.sun solar radiation model from 2D to 3D. The GRASS GIS r.sun model is nested in an SVO-based computing framework. The presented 3D solar radiation computing framework was applied to 3D building groups of different geometric complexities to demonstrate its efficiency and scalability. We also present a method to explicitly compute diffuse shading losses in r.sun, and find that diffuse shading losses can reduce the annual global radiation by up to 10% under clear sky conditions. Diffuse shading losses are therefore of significant importance, especially in complex urban environments.
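
    The abstract does not spell out the r.sun integration, so the following is only a rough sketch of the underlying idea: building occupancy stored in a minimal sparse voxel octree, with a ray marched toward the sun to decide whether a point receives direct beam radiation. The class layout, the fixed tree depth, the uniform marching step, and the toy scene are illustrative assumptions, not the paper's framework.

```python
import numpy as np

class SVO:
    """Sparse voxel octree over the unit cube [0, 1)^3 with a fixed max depth."""
    def __init__(self, depth=6):
        self.depth = depth       # leaf edge length is 2**-depth
        self.nodes = {}          # key (level, ix, iy, iz) present -> occupied

    def insert(self, p):
        """Mark the leaf containing point p, and all its ancestors, occupied."""
        for level in range(self.depth + 1):
            n = 1 << level
            self.nodes[(level,) + tuple(min(int(c * n), n - 1) for c in p)] = True

    def occupied(self, p, level=None):
        """Occupancy query; coarser levels allow early-outs over empty space."""
        level = self.depth if level is None else level
        n = 1 << level
        if not all(0.0 <= c < 1.0 for c in p):
            return False
        return (level,) + tuple(int(c * n) for c in p) in self.nodes

def beam_shaded(svo, p, sun_dir, step=0.01, max_t=2.0):
    """March from p toward the sun; any occupied leaf blocks the direct beam."""
    t = step
    while t < max_t:
        q = p + t * sun_dir
        if not all(0.0 <= c < 1.0 for c in q):
            return False             # ray left the scene without being blocked
        if svo.occupied(q):
            return True
        t += step
    return False

# toy scene: a box of voxels standing in for a building
svo = SVO(depth=6)
for x in np.linspace(0.4, 0.6, 20):
    for y in np.linspace(0.4, 0.6, 20):
        for z in np.linspace(0.0, 0.3, 20):
            svo.insert((x, y, z))

sun = np.array([0.0, -0.5, 1.0])
sun /= np.linalg.norm(sun)
print(beam_shaded(svo, np.array([0.5, 0.7, 0.05]), sun))   # True: in the shadow
```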

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problem with this technique is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
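
    As a toy illustration of sampling hierarchical volume data by region of interest, the sketch below stands in for coarser octree branches with average-pooled mip levels of a density grid and picks a level from the distance to the viewer. The pooling scheme and the distance-to-level rule are assumptions for illustration, not the thesis' method.

```python
import numpy as np

def build_pyramid(vol, levels=4):
    """Average-pool the volume repeatedly, mimicking coarser octree branches."""
    pyr = [vol]
    for _ in range(levels - 1):
        v = pyr[-1]
        n = v.shape[0] // 2
        pyr.append(v.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return pyr

def sample(pyr, p, cam, lod_scale=0.5):
    """Sample density at p in [0, 1)^3, using coarser data farther from cam."""
    level = min(int(np.linalg.norm(p - cam) / lod_scale), len(pyr) - 1)
    v = pyr[level]
    idx = tuple(min(int(c * v.shape[0]), v.shape[0] - 1) for c in p)
    return v[idx], level

vol = np.random.rand(32, 32, 32).astype(np.float32)   # stand-in density field
pyr = build_pyramid(vol)
# a nearby viewer reads a fine level, a distant one a coarse branch
print(sample(pyr, np.array([0.5, 0.5, 0.5]), cam=np.array([0.5, 0.5, 0.4])))
print(sample(pyr, np.array([0.5, 0.5, 0.5]), cam=np.array([0.5, 0.5, -1.5])))
```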

    OctNetFusion: Learning Depth Fusion from Data

    In this paper, we present a learning-based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning-based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real-world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning-based approach outperforms both vanilla TSDF fusion and TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results. (3DV 2017; code: https://github.com/griegler/octnetfusion)
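
    The averaging baseline mentioned above is compact enough to sketch. The following is a minimal, single-view version of Curless-and-Levoy-style TSDF fusion as a weighted running average per voxel; the fixed orthographic viewing direction along +z and the binary weighting are simplifying assumptions.

```python
import numpy as np

def fuse_depth(tsdf, weight, depth, z_coords, trunc=0.05):
    """Fuse one depth map (viewed along +z) into the TSDF volume in place."""
    # signed distance along the viewing ray: positive in free space in front
    # of the observed surface, negative behind it
    sdf = depth[:, :, None] - z_coords[None, None, :]
    d = np.clip(sdf / trunc, -1.0, 1.0)        # truncate to [-1, 1]
    w = (sdf > -trunc).astype(np.float32)      # skip voxels far behind the surface
    tsdf[:] = (weight * tsdf + w * d) / np.maximum(weight + w, 1e-6)
    weight += w

n = 64
z = np.linspace(0.0, 1.0, n, dtype=np.float32)
tsdf = np.zeros((n, n, n), np.float32)
weight = np.zeros((n, n, n), np.float32)

# fuse two noisy observations of a flat surface at depth 0.5
for _ in range(2):
    depth = 0.5 + np.random.normal(0.0, 0.01, (n, n)).astype(np.float32)
    fuse_depth(tsdf, weight, depth, z)

# the zero crossing of the fused TSDF is the reconstructed surface (~index 32)
ray, seen = tsdf[32, 32], weight[32, 32] > 0
print("zero crossing near z index:", np.argmin(np.abs(ray[seen])))
```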

    VolumeEVM: A new surface/volume integrated model

    Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface rendering approach. However, integrating the advantages of surface rendering with the superior visual exploration offered by volume rendering would produce a very complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. VolumeEVM maintains the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-based VoIs. It was therefore necessary to define a function relating the interior voxels of the EVM to the set of densities. This report presents the definition of this new surface/volume integrated model based on the well-known EVM encoding and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
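
    The EVM encoding itself is involved; purely to illustrate the idea of pairing a boundary model with a flat per-voxel density list, the sketch below uses a boolean mask as a stand-in for the EVM and lexicographic scan order as an assumed stand-in for the function relating interior voxels to their stored densities.

```python
import numpy as np

class VolumeModel:
    def __init__(self, mask, densities):
        self.mask = mask                                   # stand-in for the EVM
        self.index = -np.ones(mask.shape, dtype=np.int64)  # voxel -> rank in list
        self.index[mask] = np.arange(mask.sum())           # scan-order mapping
        self.densities = densities                         # one value per interior voxel

    def density(self, x, y, z):
        r = self.index[x, y, z]
        return self.densities[r] if r >= 0 else 0.0        # outside the VoI

# toy VoI: a solid ball, with density falling off from the centre
n = 32
g = np.indices((n, n, n)) - n / 2
r = np.sqrt((g ** 2).sum(axis=0))
mask = r < 12
densities = (1.0 - r[mask] / 12.0).astype(np.float32)

vol = VolumeModel(mask, densities)
print(vol.density(16, 16, 16), vol.density(0, 0, 0))       # inside vs. outside
```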

    A Comprehensive Survey of Isocontouring Methods: Applications, Limitations and Perspectives

    This paper provides a comprehensive overview of approaches to the determination of isocontours and isosurfaces from given data sets. Different algorithms are reported in the literature for this purpose, originating from various application areas such as computer graphics or medical imaging. In all these applications, the challenge is to extract surfaces with a specific isovalue, so-called isosurfaces, from a given characteristic. These different application areas have given rise to solution approaches that all solve the problem of isocontouring in their own way. Based on the literature, four dominant methods can be identified: marching cubes algorithms, tessellation-based algorithms, surface nets algorithms and ray tracing algorithms. With regard to their application, the methods are mainly used in the fields of medical imaging, computer graphics and the visualization of simulation results. In our work, we provide a broad and compact overview of the common methods currently used for isocontouring with respect to certain criteria and their individual limitations. In this context, we discuss the individual methods and identify possible future research directions in the field of isocontouring.
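
    Of the four families named above, marching cubes is the most widely implemented. Below is a short usage sketch with scikit-image's implementation of the method; the sphere signed-distance volume is a made-up test input, not data from the survey.

```python
import numpy as np
from skimage import measure

# signed distance to a sphere of radius 10 voxels, sampled on a 32^3 grid
g = np.indices((32, 32, 32)) - 16
sdf = np.sqrt((g ** 2).sum(axis=0)) - 10.0

# extract the isosurface at isovalue 0 (the sphere boundary)
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)   # a triangle mesh approximating the sphere
```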

    An Octree-based proxy for collision detection in large-scale particle systems

    Particle systems are an important building block for simulating vivid and detail-rich effects in virtual worlds. One of the most difficult aspects of particle systems has been detecting collisions between particles and mesh surfaces. Due to the huge computational cost, a variety of proxy-based approaches have been proposed recently to perform visually correct simulation. However, all either limit the complexity of the scene, fail to guarantee non-penetration, or are too slow for real-time use with many particles. In this paper, we propose a new octree-based proxy for colliding particles with meshes on the GPU. Our approach works by subdividing the scene mesh with an octree in which each leaf node is associated with a representative normal corresponding to the normals of the triangles that intersect the node. We present a view-visible method, suitable for both closed and non-closed models, to label the empty leaf nodes adjacent to nonempty ones with the appropriate back/front property, allowing particles to collide with both sides of the scene mesh. We show how collisions can be performed robustly on this proxy structure in place of the original mesh, and describe an extension that allows for fast traversal of the octree structure on the GPU. The experiments show that the proposed method is fast enough for real-time performance with millions of particles interacting with complex scenes.
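
    As a rough CPU illustration of the proxy idea (the paper targets the GPU), the sketch below builds a flat hash of leaf cells, each storing one representative normal averaged from the triangles touching it, and reflects particles off that normal instead of the mesh. The cell hash standing in for the octree, the sampling-based triangle/cell test, and all constants are assumptions for illustration.

```python
import numpy as np

CELL = 0.25   # leaf size of the proxy; accuracy is bounded by this resolution

def build_proxy(tris):
    """Map each occupied leaf cell to the averaged normal of triangles in it."""
    acc = {}
    for a, b, c in tris:
        n = np.cross(b - a, c - a)
        n /= np.linalg.norm(n)
        for u in np.linspace(0.0, 1.0, 20):        # crude: sample the surface
            for v in np.linspace(0.0, 1.0 - u, 20):
                p = a + u * (b - a) + v * (c - a)
                key = tuple((p // CELL).astype(int))
                acc[key] = acc.get(key, np.zeros(3)) + n
    return {k: s / np.linalg.norm(s) for k, s in acc.items()}

def step(pos, vel, proxy, dt=0.01, restitution=0.5):
    """Advance particles; reflect velocity off the cell's representative normal."""
    pos += vel * dt
    for i, p in enumerate(pos):
        n = proxy.get(tuple((p // CELL).astype(int)))
        if n is not None and vel[i] @ n < 0.0:     # moving into the surface
            vel[i] -= (1.0 + restitution) * (vel[i] @ n) * n

# toy scene: one horizontal triangle acting as the ground
tris = [(np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0]),
         np.array([0.0, 4.0, 0.0]))]
proxy = build_proxy(tris)

pos = np.array([[1.0, 1.0, 0.6]])
vel = np.array([[0.0, 0.0, -2.0]])
for _ in range(100):
    step(pos, vel, proxy)
print(pos, vel)   # the particle has bounced off the ground cell
```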

    Ray Tracing Methods for Point Cloud Rendering

    State-of-the-art scanning and capturing devices are able to produce surface point cloud models of a wide range of real-world objects. The visualization and rendering of enormous point clouds with millions or billions of points is demanding. VR and AR applications can utilize embedded real-world objects to generate visually pleasing and immersive virtual worlds. In order to achieve convincing real-life equivalents in VR, rendering techniques are needed that can replicate realistic material and lighting effects. This can be achieved by utilizing ray tracing methods to render the virtual world onto a monitor or a head-mounted display. Virtual reality applications need real-time stereoscopic rendering with high frame rates and resolution to produce a realistic and comfortable experience. This sets high demands on a point cloud ray tracing pipeline, which needs efficient intersection testing between rays and point cloud models. An easily intersectable global surface can be reconstructed from the point cloud model with, e.g., triangle mesh reconstruction. However, this can be computationally demanding and even wasteful if parts of the model are out of view or occluded. Direct point cloud ray tracing methods consider local features of the point cloud to generate intersectable surfaces only when needed. In this thesis, we survey and compare different methods for directly ray tracing point cloud models without global surface reconstruction. Methods are compared using asymptotic complexity analysis, and it is concluded that direct ray tracing of point clouds can be computationally more efficient than global surface reconstruction.
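
    As a minimal illustration of direct point cloud ray tracing, the sketch below treats each point as a small oriented disc (a splat) and intersects rays against the splats with no global reconstruction. The brute-force loop stands in for an acceleration structure, and the splat radius is an assumed parameter; the thesis surveys more sophisticated local-surface methods.

```python
import numpy as np

def intersect_splats(origin, direction, points, normals, radius=0.1):
    """Return the nearest ray/splat hit distance, or None if the ray misses."""
    best = None
    for p, n in zip(points, normals):
        denom = direction @ n
        if abs(denom) < 1e-9:
            continue                           # ray parallel to the splat plane
        t = ((p - origin) @ n) / denom         # ray/plane intersection distance
        if t <= 0.0:
            continue                           # splat is behind the ray origin
        hit = origin + t * direction
        if np.linalg.norm(hit - p) <= radius:  # inside the disc -> a hit
            best = t if best is None else min(best, t)
    return best

# toy cloud: 2000 samples of the plane z = 1 with upward-facing normals
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 2000),
                       rng.uniform(-1, 1, 2000),
                       np.ones(2000)])
nrm = np.tile([0.0, 0.0, 1.0], (2000, 1))

t = intersect_splats(np.zeros(3), np.array([0.0, 0.0, 1.0]), pts, nrm)
print(t)   # ~1.0 when some splat covers the ray
```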