
    INTERACTIVE IMAGE GEOLOCALIZATION IN AN IMMERSIVE WEB APPLICATION

    People have long been fascinated by the past and by the idea of travelling back in time with a time machine. While we cannot invent a time machine, at least not yet, we can create a virtual one. Our “virtual” time machine is an interactive web application that lets users browse and navigate historical images (aerial or terrestrial photographs and postcards) projected onto a 3D photogrammetric model (point cloud or 3D mesh), thereby going back in time and interacting with historical 3D models and images. This is achieved with a semiautomatic approach: the user first identifies 6 to 8 hints (corresponding points) on the historical image and on the photogrammetric model; this information is then fed to photogrammetric software that computes the pose and orientation of the image. The purpose of this work, which is part of the ALEGORIA project, is to preserve cultural heritage and to give users the opportunity to go back in time, study the history of a place, or simply discover how their hometown looked years ago.
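
    As a rough illustration of this pose-computation step (the paper's own software and interface are not published here), the sketch below recovers a camera pose from eight 2D-3D correspondences with OpenCV's PnP solver; the synthetic points, intrinsics, and ground-truth pose are assumptions standing in for the user's clicks and the historical photograph's unknown calibration.

```python
import numpy as np
import cv2

# Synthetic 3D points stand in for the 6-8 points the user clicks on the
# photogrammetric model; they are placed in front of the camera.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-5.0, 5.0, (8, 3)) + np.array([0.0, 0.0, 20.0])

# Assumed intrinsics: old photographs have unknown calibration, so the
# focal length is a guess (here, the image width).
w, h = 4000, 3000
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1.0]])

# Ground-truth pose, used only to synthesize the 2D "clicks".
rvec_gt = np.array([0.05, -0.10, 0.02])
tvec_gt = np.array([0.30, -0.20, 1.00])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)

# Pose estimation from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```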

    3D URBAN GEOVISUALIZATION: IN SITU AUGMENTED AND MIXED REALITY EXPERIMENTS

    In this paper, we argue that augmented reality (AR) and mixed reality (MR) are relevant contexts for 3D urban geovisualization, especially for supporting the design of urban spaces. We propose an in situ MR application for urban designers, providing tools to interactively remove or replace buildings on site. This use case requires advances over existing geovisualization methods: existing 3D geovisualization pipelines must be adapted and extended to meet the specific requirements of AR/MR applications, in particular for data rendering and interaction. To reach this goal, we design and implement four elementary in situ and ex situ AR/MR experiments. Each experiment isolates a specific subproblem, i.e. scale modification, pose estimation, matching the realism of the scene and of the urban project, and mixing real and virtual elements through portals, and proposes occlusion-handling, rendering, and interaction techniques to solve it.
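
    The paper ships no code; as a minimal sketch of just the occlusion-handling subproblem, the following assumes per-pixel depth maps are available for both the real scene (e.g. rendered from the city model) and the virtual layer, plus a hypothetical boolean mask of buildings the designer removed.

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth, removed):
    """Per-pixel mixing of real and virtual layers for an MR view.

    The virtual layer (the urban project) wins wherever it is closer
    than the real scene, and everywhere 'removed' is True, a
    hypothetical boolean mask marking buildings the designer deleted.
    """
    virtual_wins = (virt_depth < real_depth) | removed
    return np.where(virtual_wins[..., None], virt_rgb, real_rgb)

# Tiny 2x2 example: the virtual layer shows through where it is nearer.
real = np.zeros((2, 2, 3)); virt = np.ones((2, 2, 3))
rd = np.array([[1.0, 1.0], [1.0, 1.0]]); vd = np.array([[0.5, 2.0], [2.0, 0.5]])
mask = np.zeros((2, 2), dtype=bool)
print(composite(real, rd, virt, vd, mask)[..., 0])
```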

    Image-Based Rendering of LOD1 3D City Models for traffic-augmented Immersive Street-view Navigation

    Urban areas can arguably now be modeled in sufficient detail for realistic fly-throughs over cities at a reasonable cost. Modeling cities at street level for immersive street-view navigation, however, remains very expensive (or even impossible) if one tries to match the level of detail captured by street-view mobile mapping imagery. This paper proposes to combine the richness of these street-view images with the wide availability of nation-wide LOD1 3D city models, using an image-based rendering technique: projective multi-texturing. The coarse 3D city model serves as a lightweight scene proxy of approximate geometry. The images neighboring the interpolated viewpoint are projected onto this proxy using their estimated poses and calibrations, and blended together according to their relative distance. This enables immersive navigation within the image dataset that exactly matches, and is thus as rich as, the original images when viewed from their capture locations, and that degrades gracefully between them. Beyond demonstrating the applicability of this preprocessing-free computer graphics technique to mobile mapping images and LOD1 3D city models, our contributions are threefold. Firstly, image distortion is corrected online on the GPU, avoiding an extra image resampling step. Secondly, externally computed binary masks may be used to discard pixels corresponding to moving objects. Thirdly, we propose a shadow-map-inspired technique that prevents, at marginal cost, the projective texturing of surfaces beyond the first, as seen from the projecting image's viewpoint. Finally, an augmented visualization application showcases the proposed immersive navigation: vehicles are removed from the images using externally computed binary masks, and the scene is repopulated with a 3D visualization of a 2D traffic simulation.
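
    As a small illustration of the distance-based blending described above (not the authors' GPU implementation), this sketch computes normalized per-image weights from the distances between the capture viewpoints and the current interpolated viewpoint; near a capture location one weight tends to 1, which is what makes the rendering exactly match that image there.

```python
import numpy as np

def blend_weights(view_pos, cam_positions, eps=1e-6):
    """Normalized inverse-distance weights for projective multi-texturing:
    an image dominates completely as the viewpoint approaches its own
    capture location, and its influence falls off smoothly in between."""
    d = np.linalg.norm(cam_positions - view_pos, axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

# Example: three street-view capture positions, current viewpoint nearby.
cams = np.array([[0.0, 0, 0], [5.0, 0, 0], [10.0, 0, 0]])
print(blend_weights(np.array([1.0, 0, 0]), cams))
```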

    Projective Texturing Uncertain Geometry: silhouette-aware box-filtered blending using integral radial images

    Projective texturing is a commonly used image-based rendering technique that synthesizes novel views by blending the reprojections of nearby views onto a coarse geometry proxy approximating the scene. When the scene geometry is inexact, aliasing artefacts occur. This is disturbing in applications such as street-level immersive navigation in mobile mapping imagery, since pixel-accurate modelling of the scene geometry and all its details is most of the time out of the question. The filtered blending approach applies the necessary 1D low-pass filtering to the projective texture, trading the aliasing artefacts for some radial blurring. This paper proposes three extensions of the filtered blending approach. Firstly, we introduce Integral Radial Images, which enable constant-time radial box filtering, and show how they can be used to apply box-filtered blending in constant time, independently of the amount of depth uncertainty. Secondly, we show a very efficient application of filtered blending where the scene geometry is given only by a loose depth-interval prior rather than an actual geometry proxy. Thirdly, we propose a silhouette-aware extension of box-filtered blending that accounts not only for uncertain depth along the viewing ray but also for uncertain silhouettes, which have to be blurred as well.
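
    As a 1D illustration of the Integral Radial Image idea (a sketch, not the paper's implementation), the snippet below box-filters a single radial line using a prefix sum, so the per-sample cost is independent of the filter radius; the method itself applies this along the radial lines of the projective texture.

```python
import numpy as np

def radial_box_filter(line, r):
    """Box-filter one radial line in constant time per sample: a prefix
    sum (the 1D analogue of an integral image) turns the box sum over
    [i - r, i + r] into two lookups, whatever the radius r."""
    c = np.concatenate(([0.0], np.cumsum(line)))
    i = np.arange(len(line))
    lo = np.clip(i - r, 0, len(line))
    hi = np.clip(i + r + 1, 0, len(line))
    return (c[hi] - c[lo]) / (hi - lo)

# Example: smooth a noisy radial line with a radius-5 box filter.
line = np.sin(np.linspace(0, 3, 50)) + np.random.default_rng(0).normal(0, 0.1, 50)
print(radial_box_filter(line, 5))
```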

    Multi-view 3D circular target reconstruction with uncertainty analysis

    This paper presents an algorithm for reconstructing a 3D circle from its appearance in n images. It assumes that the camera poses are known up to some uncertainty; they are treated as observations and refined during the reconstruction process. First, the circle's appearance is estimated in each image from a set of 2D points using a constrained optimization. The uncertainty of the 2D points is propagated through the 2D ellipse estimation, yielding a covariance matrix of the ellipse parameters. In the 3D reconstruction step, the ellipse and camera pose parameters are treated as observations with known covariances. A minimal parametrization of the 3D circle makes it possible to model the projection of the circle into each image without any constraint. The reconstruction is performed by minimizing the norm of the observation residual vector in a nonlinear Gauss-Helmert model. The output consists of the parameters of the 3D circle and their covariances. Results are presented on simulated data.
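
    As a simplified stand-in for the per-image ellipse estimation step, the following fits a conic to noisy 2D points by plain algebraic least squares; the paper's constrained optimization and covariance propagation are omitted, and the sample points are synthetic.

```python
import numpy as np

def fit_conic(xy):
    """Algebraic least-squares fit of a conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0: the coefficient vector is
    the right singular vector with the smallest singular value."""
    x, y = xy[:, 0], xy[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]  # conic coefficients, up to scale

# Noisy samples of a hypothetical ellipse stand in for image measurements.
t = np.linspace(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([3.0 * np.cos(t) + 1.0, 2.0 * np.sin(t) - 0.5])
pts += np.random.default_rng(1).normal(0.0, 0.01, pts.shape)
print(fit_conic(pts))
```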

    ROAD MARKING EXTRACTION USING A MODEL&DATA-DRIVEN RJ-MCMC

    We propose an integrated bottom-up/top-down approach to road-marking extraction in image space, based on energy minimization using marked point processes. A generic road-marking object model enables us to define universal energy functions that handle various types of road-marking objects (dashed lines, arrows, characters, etc.). An RJ-MCMC sampler coupled with simulated annealing is applied to find the configuration corresponding to the minimum of the proposed energy. Input data measurements guide the sampling process (data-driven RJ-MCMC). The approach is further enhanced with a model-driven kernel that uses precomputed autocorrelations and inter-correlations of road-marking templates to resolve type and transformation ambiguities. The method is generic and can detect road markings in any orthogonal view produced by optical sensors or laser scanners on aerial or terrestrial platforms. We show results on an ortho-image computed from ground-based laser scanning.
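
    As a structural sketch of a data-driven RJ-MCMC sampler with simulated annealing (not the paper's implementation: the reversible-jump proposal ratios, the actual energy terms, and the model-driven kernel are omitted), the following alternates birth and death moves over a configuration of candidate road-marking objects with hypothetical data scores.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(config):
    # Hypothetical energy: the real method combines a data term with a
    # prior on object interactions (overlap, alignment, ...).
    return -sum(score for _, score in config)

def rjmcmc(scores, iters=20000, t0=1.0, cooling=0.9995):
    """Skeleton of a data-driven RJ-MCMC with simulated annealing:
    birth moves insert candidate objects (data-driven choice), death
    moves delete existing ones; Metropolis acceptance with a geometric
    cooling schedule. Proposal ratios are omitted for brevity."""
    config, T = [], t0
    for _ in range(iters):
        proposal = list(config)
        if not config or rng.random() < 0.5:            # birth move
            i = int(rng.integers(len(scores)))
            proposal.append((i, scores[i]))
        else:                                           # death move
            proposal.pop(int(rng.integers(len(config))))
        dE = energy(proposal) - energy(config)
        if dE < 0 or rng.random() < np.exp(-dE / T):    # accept / reject
            config = proposal
        T *= cooling                                    # annealing schedule
    return config

# Candidate detections with hypothetical data scores (higher = better fit).
print(len(rjmcmc(np.array([0.9, 0.1, 0.7, 0.05]))))
```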