
    Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous across a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, obtaining high-resolution and accurate reconstructions of a large-scale object with SfM imposes many critical constraints on the quality of the image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not help users assess the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We also propose a novel multi-scale camera network design to prevent the scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline, combined with the multi-scale camera network approach, provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods.
    Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
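
    The Ground Sampling Distance feedback mentioned above is, in essence, the physical footprint of one image pixel on the facade. As a rough illustration (not the paper's implementation), the sketch below applies the standard GSD approximation from camera intrinsics and camera-to-surface distance; all numbers are made up.

```python
# Minimal sketch (not the paper's implementation): the standard approximation
# of Ground Sampling Distance from camera intrinsics and camera-to-surface
# distance, for a roughly fronto-parallel facade. All numbers are made up.
def ground_sampling_distance(sensor_width_mm: float,
                             focal_length_mm: float,
                             image_width_px: int,
                             distance_m: float) -> float:
    """Approximate GSD in metres per pixel."""
    pixel_pitch_m = (sensor_width_mm / image_width_px) / 1000.0
    # Similar triangles: pixel footprint = pixel pitch * (distance / focal length).
    return pixel_pitch_m * distance_m / (focal_length_mm / 1000.0)

# Example: 13.2 mm sensor, 8.8 mm lens, 5472 px wide images, 20 m from the facade
gsd = ground_sampling_distance(13.2, 8.8, 5472, 20.0)
print(f"GSD = {gsd * 100:.2f} cm/pixel")   # about 0.55 cm/pixel
```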

    Review of the mathematical foundations of data fusion techniques in surface metrology

    The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve enhanced benefits, mainly in terms of spatial frequency bandwidth, which a single sensor cannot provide. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised, and potential fusion algorithms are discussed.
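
    As a concrete, if simplified, illustration of the kind of mathematics such a review covers, the sketch below shows inverse-variance (maximum-likelihood) weighting of two co-registered measurements of the same surface, one common building block of multi-sensor fusion; the height values and noise levels are invented for the example.

```python
import numpy as np

# Minimal sketch of one common fusion building block: inverse-variance
# (maximum-likelihood) combination of two co-registered measurements of the
# same surface. The height values and noise levels below are invented.
def fuse_inverse_variance(z1, var1, z2, var2):
    """Fuse two estimates of the same quantity, weighting each by 1/variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)      # fused variance is lower than either input
    return z_fused, var_fused

z_coarse = np.array([1.00, 1.02, 0.98])     # e.g. wide-bandwidth, noisier sensor
z_fine = np.array([1.05, 0.99, 1.01])       # e.g. high-resolution sensor
z, var = fuse_inverse_variance(z_coarse, 0.05**2, z_fine, 0.01**2)
print(z, var)
```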

    Automated 3D scene reconstruction from open geospatial data sources: airborne laser scanning and a 2D topographic database

    Open geospatial data sources provide opportunities for low-cost 3D scene reconstruction. In this study, a building reconstruction pipeline for CAD building models was developed based on a sparse airborne laser scanning (ALS) point cloud (0.8 points/m²) obtained from open source databases. The pipeline includes voxel-based roof patch segmentation, extraction of the key points representing the roof patch outline, step edge identification and adjustment, and CAD building model generation. The advantage of our method lies in generating CAD building models without enforcing the edges to be parallel or applying building regularization. Furthermore, although it has been challenging to use sparse datasets for 3D building reconstruction, our result demonstrates great potential for such applications. In this paper, we also investigated the applicability of open geospatial datasets for 3D road detection and reconstruction. Road central lines were acquired from an open source 2D topographic database. ALS data were utilized to obtain the height and width of the road. A constrained search method (CSM) was developed for road width detection. The CSM splits a given road into patches according to height and direction criteria, and the road edges were detected patch by patch. The road width was determined by the average distance from the edge points to the central line. As a result, 3D roads were reconstructed from ALS data and a topographic database.
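
    The final width estimate described above reduces to a mean point-to-line distance. The sketch below illustrates only that step (not the CSM patch splitting or edge detection), with invented edge points and centre-line endpoints.

```python
import numpy as np

# Minimal sketch of the width estimate described above: the mean perpendicular
# distance of detected road edge points to the 2D centre line. The CSM patch
# splitting and edge detection are not reproduced; all coordinates are invented.
def mean_edge_to_centreline_distance(edge_points, a, b):
    """Mean perpendicular distance of 2D edge points to the line through a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    rel = np.asarray(edge_points, float) - a
    # 2D cross-product magnitude / segment length = perpendicular distance.
    dist = np.abs(d[0] * rel[:, 1] - d[1] * rel[:, 0]) / np.linalg.norm(d)
    return dist.mean()

edge_points = [[0.0, 3.1], [5.0, 2.9], [10.0, -3.0], [15.0, -3.2]]   # both road edges
a, b = (0.0, 0.0), (20.0, 0.0)        # centre line from the 2D topographic database
print(f"{mean_edge_to_centreline_distance(edge_points, a, b):.2f} m")
```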

    Automatic Fracture Orientation Extraction from SfM Point Clouds

    Geology seeks to understand the history of the Earth and its surface processes through characterisation of surface formations and rock units. Chief among the geologists' tools are rock unit orientation measurements, such as Strike, Dip and Dip Direction. These allow an understanding of both surface and sub-structure on both the local and macro scale. Although the way these techniques can be used to characterise geology is well understood, the need to collect these measurements by hand adds time and expense to the work of the geologist, precludes spontaneity in field work, and limits coverage to where the geologist can physically reach. In robotics and computer vision, multi-view geometry techniques such as Structure from Motion (SfM) allow reconstruction of objects and scenes from multiple camera views. SfM-based techniques provide advantages over Lidar-type techniques in areas such as cost and flexibility of use in more varied environmental conditions, while sacrificing extreme levels of fidelity. Even so, camera-based techniques such as SfM have developed to the point where decimetre-range accuracy is achievable. Here we present a system to automate the measurement of Strike, Dip and Dip Direction using multi-view geometry from video. Rather than deriving measurements with a method applied to the images, such as the Hough Transform, this method takes measurements directly from the software-generated point cloud. Point cloud noise is mitigated using a Mahalanobis distance implementation, significant structure is characterised using a k-nearest neighbour region-growing algorithm, and final surface orientations are quantified from the fitted plane and its normal direction cosines.
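
    The last step described above, turning a plane's normal direction cosines into orientation measurements, can be illustrated with a short sketch. This is not the paper's code; it simply fits a plane to a synthetic point cluster by PCA/SVD and converts the normal into Strike, Dip and Dip Direction, assuming x = east, y = north, z = up.

```python
import numpy as np

# Minimal sketch, not the paper's implementation: fit a plane to a point
# cluster by PCA/SVD and convert its normal direction cosines into Strike,
# Dip and Dip Direction. Coordinates assumed x = east, y = north, z = up;
# the point cluster is synthetic and purely illustrative.
def orientation_from_points(points):
    """Return (strike, dip, dip_direction) in degrees for a roughly planar point set."""
    centred = points - points.mean(axis=0)
    # The last right-singular vector is the direction of least variance,
    # i.e. the plane normal.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                                   # orient the normal upwards
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    strike = (dip_direction - 90.0) % 360.0        # right-hand rule convention
    return strike, dip, dip_direction

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = -0.5 * xy[:, 0] + 0.01 * rng.standard_normal(200)   # plane dipping ~26.6 deg east
print(orientation_from_points(np.column_stack([xy, z])))
```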

    Multi-class point cloud completion networks for 3D cardiac anatomy reconstruction from cine magnetic resonance images

    Cine magnetic resonance imaging (MRI) is the current gold standard for the assessment of cardiac anatomy and function. However, it typically only acquires a set of two-dimensional (2D) slices of the underlying three-dimensional (3D) anatomy of the heart, thus limiting the understanding and analysis of both healthy and pathological cardiac morphology and physiology. In this paper, we propose a novel fully automatic surface reconstruction pipeline capable of reconstructing multi-class 3D cardiac anatomy meshes from raw cine MRI acquisitions. Its key component is a multi-class point cloud completion network (PCCN) capable of correcting both the sparsity and misalignment issues of the 3D reconstruction task in a unified model. We first evaluate the PCCN on a large synthetic dataset of biventricular anatomies and observe Chamfer distances between reconstructed and gold standard anatomies below or similar to the underlying image resolution for multiple levels of slice misalignment. Furthermore, we find reductions in reconstruction error of 32% and 24% compared to a benchmark 3D U-Net in terms of Hausdorff distance and mean surface distance, respectively. We then apply the PCCN as part of our automated reconstruction pipeline to 1000 subjects from the UK Biobank study in a cross-domain transfer setting and demonstrate its ability to reconstruct accurate and topologically plausible biventricular heart meshes with clinical metrics comparable to the previous literature. Finally, we investigate the robustness of our proposed approach and observe its capacity to successfully handle multiple common outlier conditions.
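
    The Chamfer distance used above as an evaluation metric has a compact definition: the average nearest-neighbour distance between two point clouds, taken in both directions. Below is a minimal brute-force sketch of one common convention (the PCCN itself is not reproduced, and the clouds are random illustrative data).

```python
import numpy as np

# Minimal sketch of the Chamfer distance used as an evaluation metric above
# (one common convention: the sum of the two mean nearest-neighbour distances;
# squared and averaged variants also exist). Brute force, for small clouds only.
def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # all pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(1)
reconstructed = rng.standard_normal((500, 3))
gold_standard = reconstructed + 0.01 * rng.standard_normal((500, 3))
print(chamfer_distance(reconstructed, gold_standard))
```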

    To 3D or Not 3D: Choosing a Photogrammetry Workflow for Cultural Heritage Groups

    The 3D reconstruction of real-world heritage objects using either a laser scanner or 3D modelling software is typically expensive and requires a high level of expertise. Image-based 3D modelling software, on the other hand, offers a cheaper alternative, which can handle this task with relative ease. There is also free and open source software (FOSS) with the potential to deliver quality data for heritage documentation purposes. However, contemporary academic discourse seldom presents survey-based feature lists or a critical inspection of potential production pipelines, and typically provides no direction or guidance for non-experts who are interested in learning, developing and sharing 3D content on a restricted budget. To address these issues, a set of FOSS applications was studied based on their offered features, workflow, 3D processing time and accuracy. Two datasets were used to compare and evaluate the FOSS applications based on the point clouds they produced. The average deviation from ground truth data produced by a commercial software application (Metashape, formerly PhotoScan) was measured with CloudCompare software. The 3D reconstructions generated from FOSS show promising results with good accuracy, and the applications are easy to use. We believe this investigation will help non-expert users to understand photogrammetry and select the most suitable software for producing image-based 3D models at low cost for visualisation and presentation purposes.
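
    The accuracy comparison described above boils down to a cloud-to-cloud deviation. The sketch below computes a mean nearest-neighbour deviation in the spirit of CloudCompare's C2C distance tool; the file names are placeholders, not files from the study.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch in the spirit of CloudCompare's cloud-to-cloud (C2C) distance:
# the mean nearest-neighbour deviation of a FOSS-generated cloud from a
# reference cloud. The file names are placeholders, not files from the study.
def mean_cloud_deviation(test_cloud: np.ndarray, reference_cloud: np.ndarray) -> float:
    """Mean distance from each test point to its nearest reference point."""
    distances, _ = cKDTree(reference_cloud).query(test_cloud)
    return float(distances.mean())

reference = np.loadtxt("metashape_reference.xyz")    # placeholder reference cloud
candidate = np.loadtxt("foss_candidate.xyz")         # placeholder FOSS output
print(f"mean deviation: {mean_cloud_deviation(candidate, reference):.4f}")
```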

    Difference of Normals as a Multi-Scale Operator in Unorganized Point Clouds

    A novel multi-scale operator for unorganized 3D point clouds is introduced. The Difference of Normals (DoN) provides a computationally efficient, multi-scale approach to processing large unorganized 3D point clouds. The application of DoN in the multi-scale filtering of two different real-world outdoor urban LIDAR scene datasets is quantitatively and qualitatively demonstrated. In both datasets the DoN operator is shown to segment large 3D point clouds into scale-salient clusters, such as cars, people, and lamp posts, towards applications in semi-automatic annotation, and as a pre-processing step in automatic object recognition. The application of the operator to segmentation is evaluated on a large public dataset of outdoor LIDAR scenes with ground truth annotations.
    Comment: To be published in proceedings of 3DIMPVT 201
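
    The operator itself is simple to state: estimate a unit normal at each point using a small and a large support radius and take half their difference; the magnitude of that vector is large where the surface looks different across scales. The sketch below follows that definition on random illustrative data; the z-up normal orientation rule is a simplification of the viewpoint-based orientation typically used in practice.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch of the Difference of Normals idea: estimate a unit normal at
# each point with a small and a large support radius and take half their
# difference. The z-up orientation rule is a simplification, and the random
# points are illustrative only.
def estimate_normals(points: np.ndarray, radius: float) -> np.ndarray:
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        nbrs = points[tree.query_ball_point(p, radius)]
        if len(nbrs) < 3:
            continue                                  # too few neighbours at this scale
        # Plane normal = direction of least variance in the neighbourhood.
        _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n           # simple consistent orientation
    return normals

def difference_of_normals(points, r_small, r_large):
    return 0.5 * (estimate_normals(points, r_small) - estimate_normals(points, r_large))

pts = np.random.default_rng(3).uniform(0.0, 1.0, size=(300, 3))
don_magnitude = np.linalg.norm(difference_of_normals(pts, 0.05, 0.2), axis=1)
print(don_magnitude.max())    # large values mark scale-salient points
```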

    From Point Cloud to Textured Model, the Zamani Laser Scanning Pipeline in Heritage Documentation

    The paper describes the stages of the laser scanning pipeline from data acquisition to the final 3D computer model, based on experiences gained during the ongoing creation of data for the African Cultural Heritage Sites and Landscapes database. The various processes are briefly discussed, and the challenges that need to be addressed to develop the full potential of laser scanning are highlighted. Experiences with fieldwork, scan registration, hole filling, data cleaning, modelling and texturing are reported. The strengths and weaknesses of the emerging tool of “Structure from Motion” are briefly explored for potential use in combination with laser scanning.