
    Time and Space Coherent Occlusion Culling for Tileable Extended 3D Worlds

    In order to interactively render large virtual worlds, the amount of 3D geometry passed to the graphics hardware must be kept to a minimum. Typical solutions to this problem include the use of potentially visible sets and occlusion culling; however, these solutions do not scale well, in time or in memory, with the size of a virtual world. We propose a fast and inexpensive variant of occlusion culling tailored to a simple tiling scheme that improves scalability while maintaining very high performance. Tile visibilities are evaluated with hardware-accelerated occlusion queries, and in-tile rendering is rapidly computed using BVH instantiation and any visibility method; we use the CHC++ occlusion culling method for its good general performance. Tiles are instantiated only when tested locally for visibility, thus avoiding the need for a preconstructed global structure for the complete world. Our approach can render large-scale, diversified virtual worlds with complex geometry, such as cities or forests, at high performance and with a modest memory footprint.
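    A minimal, runnable Python sketch of the two-level scheme the abstract describes: a coarse per-tile visibility test followed by lazy, per-tile BVH instantiation. The hardware-accelerated occlusion query is replaced here by a trivial view-distance stand-in, and the BVH construction and CHC++ in-tile rendering are placeholder stubs; all names are illustrative, not the authors' API.

```python
from dataclasses import dataclass
import math

@dataclass
class Tile:
    center: tuple          # world-space (x, z) center of the tile
    size: float            # tile edge length
    bvh: object = None     # instantiated lazily, only once the tile is visible

def tile_potentially_visible(tile, cam_pos, view_dist):
    # Stand-in for the paper's hardware-accelerated occlusion query:
    # here we merely reject tiles beyond the view distance.
    dx = tile.center[0] - cam_pos[0]
    dz = tile.center[1] - cam_pos[1]
    return math.hypot(dx, dz) - tile.size < view_dist

def build_bvh_instance(tile):
    # Placeholder for BVH instantiation; real tiles would share geometry
    # through instancing rather than rebuilding it per tile.
    return f"bvh@{tile.center}"

def render_tile(tile):
    # Placeholder for in-tile rendering with any visibility method
    # (the authors use CHC++ for its good general performance).
    print("render", tile.bvh)

def render_world(tiles, cam_pos, view_dist=100.0):
    for tile in tiles:
        if tile_potentially_visible(tile, cam_pos, view_dist):
            if tile.bvh is None:              # instantiate on first visibility
                tile.bvh = build_bvh_instance(tile)
            render_tile(tile)

render_world([Tile((0.0, 0.0), 50.0), Tile((500.0, 0.0), 50.0)], (10.0, 10.0))
```

    The point of the structure is that no global acceleration structure over the whole world is ever built: only tiles that pass the cheap visibility test pay the cost of instantiation.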

    Terrain guided multi-level instancing of highly complex plant populations

    PlaNet - Photo Geolocation with Convolutional Neural Networks

    Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
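    A minimal, runnable Python sketch of the adaptive multi-scale cell idea the abstract describes: a region is recursively split while it holds more than a few geotagged photos, so cells end up fine where photos are dense and coarse where they are sparse, and each resulting cell becomes one class label for the network. The quadtree construction and thresholds here are illustrative stand-ins, not the paper's exact partitioning.

```python
def subdivide(cell, photos, max_photos=2, min_size=1.0):
    """cell = (lat_min, lat_max, lon_min, lon_max); photos = [(lat, lon), ...]"""
    inside = [(la, lo) for la, lo in photos
              if cell[0] <= la < cell[1] and cell[2] <= lo < cell[3]]
    lat_span = cell[1] - cell[0]
    # Stop splitting when the cell is sparse enough or already very small.
    if len(inside) <= max_photos or lat_span <= min_size:
        return [cell] if inside else []   # drop cells with no training photos
    la_mid = (cell[0] + cell[1]) / 2
    lo_mid = (cell[2] + cell[3]) / 2
    quads = [(cell[0], la_mid, cell[2], lo_mid),
             (cell[0], la_mid, lo_mid, cell[3]),
             (la_mid, cell[1], cell[2], lo_mid),
             (la_mid, cell[1], lo_mid, cell[3])]
    return [c for q in quads for c in subdivide(q, inside, max_photos, min_size)]

# Three photos near Paris force fine cells there; one near New York stays coarse.
photos = [(48.85, 2.35), (48.86, 2.34), (48.80, 2.30), (40.71, -74.0)]
cells = subdivide((-90.0, 90.0, -180.0, 180.0), photos)
labels = {cell: i for i, cell in enumerate(cells)}   # cell -> class index
```

    Posing geolocation as classification over these cells lets the network output a probability distribution over the globe, rather than a single retrieved match.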