33 research outputs found

    3D reconstruction of large scale city models as a support to Sustainable Development

    Get PDF
    No part of the economic community can now escape the urgent issues of global warming, carbon footprint and energy consumption, and the construction sector is under particular pressure. Indeed, it is one of the biggest consumers of energy, it contributes heavily to the use of critical resources (such as energy, water, materials and space), and it is responsible for a large share of greenhouse gas emissions. In that context, the paper explores new approaches to urban planning that combine virtual environments and simulations to address sustainability issues. These approaches build on the possibility of reconstructing 3D models of the built environment from standard photographs taken with off-the-shelf hand-held digital cameras. The 3D models can then be combined with simulations in order to address sustainable urban development issues.

    Predicting the Next Best View for 3D Mesh Refinement

    Full text link
    3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best pose from which to capture part of the scene is one of the most challenging problems in this area and goes under the name of Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning over a 3D voxelized space and finding the pose that minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well, since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach which focuses on the worst reconstructed regions of the environment mesh. We define a photo-consistency index to evaluate the accuracy of the 3D mesh, and an energy function over the worst regions of the mesh which takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray to the surface, and the visibility of the region. We test our approach on a well-known dataset and achieve state-of-the-art results. Comment: 13 pages, 5 figures, to be published in IAS-1
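    The abstract's energy over candidate views combines parallax with respect to previous cameras, incidence angle and visibility. The sketch below illustrates only the first two terms, with hypothetical weights; it is not the paper's actual formulation.

```python
import numpy as np

def view_score(cam_pos, region_center, region_normal, prev_cam_pos,
               w_parallax=1.0, w_incidence=1.0):
    """Toy scoring of a candidate camera pose for one mesh region.

    Combines (a) the parallax angle between the candidate viewing ray and
    the ray from a previous camera, and (b) the incidence of the viewing
    ray on the surface normal. Weights are assumptions; the paper's energy
    additionally accounts for visibility and photo-consistency.
    """
    ray_new = region_center - cam_pos
    ray_new /= np.linalg.norm(ray_new)
    ray_old = region_center - prev_cam_pos
    ray_old /= np.linalg.norm(ray_old)
    # Larger angle between the two viewing rays -> better triangulation.
    parallax = np.arccos(np.clip(ray_new @ ray_old, -1.0, 1.0))
    # Viewing ray close to the surface normal -> well-observed region.
    incidence = np.abs(ray_new @ (-region_normal))  # 1.0 = head-on
    return w_parallax * parallax + w_incidence * incidence
```

    A pose that adds parallax while still seeing the region head-on scores higher than one that simply repeats a previous viewpoint.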

    AN EXPERIMENT FOR ZSCAN EFFICIENCY IN SURFACE MONITORING

    Get PDF
    Several geophysical processes involving crustal deformation can be studied and monitored by comparing multitemporal Digital Terrain Models (DTMs) and/or Digital Surface Models (DSMs): deformation patterns, displacements, surface variations, the volumes involved in mass movements and other physical features can be observed and quantified, providing useful information on geomorphological change (Butler et al., 1998; Kaab and Funk, 1999; Mora et al., 2003; van Westen and Lulie Getahun, 2003; Pesci et al., 2004; Fabris and Pesci, 2005; Baldi et al., 2005; Pesci et al., 2007; Baldi et al., 2008). Many techniques, including kinematic GPS (Beutler et al., 1995), digital aerial and terrestrial photogrammetry (Kraus, 1998), airborne and terrestrial laser scanning (Csatho et al., 2005), optical and radar stereo sensors on space-borne platforms, and satellite SAR interferometry (Fraser et al., 2002), are suitable surveying methods for acquiring precise and reliable 3D or 2.5D geoinformation. In practice, the technique best able to capture the evolution of a natural process that rapidly changes the terrain morphology of an area, such as a volcanic eruption or a rock mass collapse lasting from a few seconds to several hours (or more), is digital photogrammetry. Scientific software exists to manage and process stereoscopic photogrammetric images, but it requires professional operators; recently, more user-friendly applications have been developed to make the analysis easier and faster while remaining effective.
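    The core of the multitemporal DTM/DSM comparison described above is differencing two co-registered elevation grids and integrating the differences into volumes. A minimal sketch, assuming the two surveys share the same grid and use NaN for no-data:

```python
import numpy as np

def volume_change(dsm_before, dsm_after, cell_size):
    """Volumes of accumulation and erosion between two co-registered DSMs.

    dsm_before, dsm_after: 2-D arrays of elevations (metres) on the same
    grid; cell_size: ground resolution in metres. NaN cells (no-data in
    either epoch) are ignored. Returns (gained, lost) in cubic metres.
    """
    diff = dsm_after - dsm_before          # NaN wherever either epoch is NaN
    cell_area = cell_size ** 2
    gained = np.nansum(np.where(diff > 0, diff, 0.0)) * cell_area
    lost = np.nansum(np.where(diff < 0, -diff, 0.0)) * cell_area
    return gained, lost
```

    Real workflows would first co-register the two models and propagate the surveys' vertical accuracy into an uncertainty on the volumes; both steps are omitted here.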

    Globally Optimal Spatio-temporal Reconstruction from Cluttered Videos

    Get PDF
    We propose a method for multi-view reconstruction from videos that is adapted to dynamic, cluttered scenes under uncontrolled imaging conditions. Taking visibility into account, and being based on a global optimization of a true spatio-temporal energy, it offers several desirable properties: no need for silhouettes, robustness to noise, independence from any initialization, no heuristic forces, reduced flickering in the results, etc. Results on real-world data prove the potential of what is, to our knowledge, the only globally optimal spatio-temporal multi-view reconstruction method.

    Transductive Segmentation of Textured Meshes

    Get PDF
    This paper addresses the problem of segmenting a textured mesh into objects or object classes, consistently with user-supplied seeds. We view this task as transductive learning and use the flexibility of kernel-based weights to incorporate a diverse set of features. Our method combines a Laplacian graph regularizer, which enforces spatial coherence in label propagation, with an SVM classifier, which ensures dissemination of the seeds' characteristics. Our interactive framework allows users to easily specify class seeds with sketches drawn on the mesh and, if needed, refine the segmentation. We obtain qualitatively good segmentations on several architectural scenes and show the applicability of our method to outlier removal.
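    The Laplacian-regularized propagation of seed labels over a mesh graph can be sketched with standard label spreading on a symmetrically normalized affinity matrix. This is a simplified stand-in for the paper's formulation (the SVM term and the kernel feature weights are omitted):

```python
import numpy as np

def propagate_labels(W, seeds, alpha=0.9, iters=200):
    """Spread seed labels over a graph of mesh faces.

    W: symmetric affinity matrix between faces (kernel weights);
    seeds: dict {face_index: class_id}. Returns one class per face.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y, where S is the
    normalized affinity, so labels diffuse along strong edges while
    the seeds keep pulling their neighborhoods toward their class.
    """
    n = W.shape[0]
    classes = sorted(set(seeds.values()))
    Y = np.zeros((n, len(classes)))           # one-hot seed indicator
    for i, c in seeds.items():
        Y[i, classes.index(c)] = 1.0
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))           # D^{-1/2} W D^{-1/2}
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return [classes[k] for k in F.argmax(axis=1)]
```

    On a chain of four faces seeded 'a' at one end and 'b' at the other, the two middle faces are assigned to their nearer seed, which is the spatial-coherence behavior the regularizer is meant to enforce.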

    Hierarchical Surface Prediction for 3D Object Reconstruction

    Full text link
    Recently, Convolutional Neural Networks have shown promising results for 3D geometry prediction. They can make predictions from very little input data, such as a single color image. A major limitation of such approaches is that they only predict a coarse-resolution voxel grid, which does not capture the surface of the objects well. We propose a general framework, called hierarchical surface prediction (HSP), which facilitates the prediction of high-resolution voxel grids. The main insight is that it is sufficient to predict high-resolution voxels only around the predicted surfaces; the exterior and interior of the objects can be represented with coarse-resolution voxels. Our approach is not tied to a specific input type. We show results for geometry prediction from color images, depth images and shape completion from partial voxel grids. Our analysis shows that our high-resolution predictions are more accurate than low-resolution predictions. Comment: 3DV 201
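    The main insight above, refining only boundary cells while copying interior/exterior cells wholesale, can be sketched as one octree-style upsampling step. The learned decoder is replaced here by an arbitrary callable, so this is only an illustration of the memory-saving scheme, not of HSP's network:

```python
import numpy as np

def refine_boundary(coarse, refine_fn):
    """Upsample a coarse occupancy grid, re-evaluating only boundary cells.

    coarse: 3-D array with values 0.0 (free), 1.0 (occupied) or
    0.5 (boundary). refine_fn(i, j, k) returns the 2x2x2 block of child
    occupancies for a boundary cell (in HSP this is a learned decoder).
    Free and occupied cells are simply copied into all eight children,
    which is what keeps the representation memory-efficient.
    """
    d, h, w = coarse.shape
    fine = np.zeros((2 * d, 2 * h, 2 * w))
    for i in range(d):
        for j in range(h):
            for k in range(w):
                v = coarse[i, j, k]
                block = refine_fn(i, j, k) if v == 0.5 else v
                fine[2*i:2*i+2, 2*j:2*j+2, 2*k:2*k+2] = block
    return fine
```

    Applying this step repeatedly yields a high-resolution grid whose cost grows with the surface area rather than the volume of the object.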

    GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints

    Full text link
    Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, but have not demonstrated strong generalization on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, demonstrate its superior performance on various large-scale benchmarks, and in particular show its success on challenging reconstruction tasks. Moreover, we provide guidelines for the practical integration of learned descriptors in Structure-from-Motion (SfM) pipelines, showing the good trade-off between accuracy and efficiency that GeoDesc delivers to 3D reconstruction tasks. Comment: Accepted to ECCV'1
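    Descriptor learning of this kind is typically trained with a margin-based loss on patch triplets, where the geometry constraints from multi-view reconstruction supply verified positive pairs (same 3D point) and hard negatives. The following is a generic sketch of that loss family; the exact formulation, sampling and margin used by GeoDesc are not reproduced here:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Margin-based triplet loss on descriptor vectors.

    anchor/positive: descriptors of two patches that a multi-view
    reconstruction verified as observations of the same 3-D point;
    negative: a descriptor of a geometrically distinct patch.
    The margin value is an assumption for illustration.
    """
    def dist(a, b):
        return np.linalg.norm(a - b)
    # Penalize whenever the positive is not closer than the negative
    # by at least the margin.
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

    Training then minimizes this loss over many triplets, pulling verified matches together in descriptor space and pushing non-matches apart.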

    On the Use of UAVs in Mining and Archaeology - Geo-Accurate 3D Reconstructions Using Various Platforms and Terrestrial Views

    Get PDF
    During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has provided an easy way of acquiring spatial data and creating realistic and accurate 3D models. With multicopter UAVs, it is possible to record highly overlapping images from almost terrestrial camera positions up to oblique and nadir aerial views, thanks to their ability to navigate slowly, hover and capture images at nearly any position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of the camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly into the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three case studies of applications in mining and archaeology, and present several accuracy-related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited to seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into a single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by jointly processing terrestrial images and inside views of an object, generating highly detailed, accurate and complete reconstructions.
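    Integrating GCPs "directly in the bundle adjustment", as described above, amounts to adding a weighted residual block that penalizes the deviation of reconstructed tie points from their surveyed coordinates. A minimal sketch (the assumed survey accuracy and the block's exact weighting are illustrative, not the paper's values):

```python
import numpy as np

def gcp_residuals(points_3d, gcp_xyz, gcp_sigma=0.02):
    """Ground-control-point residual block for a bundle adjustment.

    points_3d: current estimates of the tie points that correspond to
    surveyed GCPs; gcp_xyz: their surveyed coordinates (same shape).
    The sigma-weighted differences are appended to the reprojection
    residuals so the optimizer pulls the reconstruction into the
    geodetic frame. gcp_sigma (survey accuracy in metres) is assumed.
    """
    return ((points_3d - gcp_xyz) / gcp_sigma).ravel()
```

    Because the residuals are divided by the survey accuracy, a 2 cm offset contributes as much to the cost as a one-pixel reprojection error would with unit image noise, letting accurate GCPs dominate the georegistration without freezing the geometry.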