    Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles (MAVs) as an autonomous imaging platform, have made 3D vision tools ubiquitous across a large number of Architecture, Engineering and Construction applications, reaching audiences mostly unskilled in computer vision. However, obtaining high-resolution and accurate reconstructions of a large-scale object using SfM imposes many critical constraints on the quality of the image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not help users assess the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy directly on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline, combined with a multi-scale camera network approach, provides compelling accuracy in multi-view reconstruction tasks compared against state-of-the-art methods. Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
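    The Ground Sampling Distance reported back to the operator follows from simple camera geometry. A minimal sketch (the camera parameters below are illustrative, not taken from the paper):

```python
# GSD: the ground (or facade) footprint of one image pixel.
# GSD = (sensor width * camera-to-surface distance) / (focal length * image width).

def gsd_cm_per_px(sensor_width_mm, focal_length_mm, distance_m, image_width_px):
    """Ground Sampling Distance in cm per pixel."""
    return (sensor_width_mm * distance_m * 100.0) / (focal_length_mm * image_width_px)

# Example: a 13.2 mm sensor, 8.8 mm lens, 4000 px wide image, 30 m from a facade.
gsd = gsd_cm_per_px(13.2, 8.8, 30.0, 4000)
print(f"GSD: {gsd:.3f} cm/px")  # ≈ 1.125 cm/px
```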

    Orientation of oblique airborne image sets - Experiences from the ISPRS/EuroSDR benchmark on multi-platform photogrammetry

    During the last decade the use of airborne multi-camera systems has increased significantly. Developments in digital camera technology allow several mid- or small-format cameras to be mounted efficiently onto one platform, enabling image capture under different angles. Such oblique images are interesting for a number of applications since lateral parts of elevated objects, such as buildings or trees, are visible. However, occlusion and illumination differences can challenge image processing. From an image orientation point of view, these multi-camera systems bring the advantage of better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion, and atmospheric influences that are difficult to model pose problems for the image matching and bundle adjustment tasks. In order to understand the current limitations of image orientation approaches and the influence of parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprises a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EuroSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects such as tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, and the role of image overlap and GCP distribution. As far as tie point matching is concerned, we observed that matching of overlapping images pointing in the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the quite different perspective between images of different viewing directions, standard tie point matching, for instance based on interest points, does not work well. How to address occlusion and ambiguities due to different views onto objects remains an unsolved research problem. In our experiments we also confirm that the obtainable height accuracy is better when all images are used in the bundle block adjustment, consistent with earlier research. Not surprisingly, the large overlap of 80/80% provides much better object space accuracy: random errors appear to be about 2-3 times smaller compared to the 60/60% overlap. A comparison of different software approaches shows that newly emerged commercial packages, initially intended to work with small-frame image blocks, perform very well.
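    Part of the accuracy gain from higher overlap is a redundancy effect: with forward overlap p along a strip and side overlap q across strips, a ground point is imaged roughly 1/((1-p)(1-q)) times under a simple rectangular-footprint model. A toy calculation (illustrative only; the 2-3-fold error reduction reported above also reflects improved ray intersection geometry):

```python
# Approximate number of images observing a ground point in a regular block
# with forward overlap p and side overlap q (simplified footprint model).

def images_seeing_point(forward_overlap, side_overlap):
    return 1.0 / ((1.0 - forward_overlap) * (1.0 - side_overlap))

print(round(images_seeing_point(0.6, 0.6), 2))  # 60/60% block: ~6 images per point
print(round(images_seeing_point(0.8, 0.8), 2))  # 80/80% block: ~25 images per point
```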

    On the Use of UAVs in Mining and Archaeology - Geo-Accurate 3D Reconstructions Using Various Platforms and Terrestrial Views

    During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has made it easy to acquire spatial data and create realistic and accurate 3D models. With multicopter UAVs it is possible to record highly overlapping images from almost terrestrial camera positions up to oblique and nadir aerial views, thanks to the ability to navigate slowly, hover, and capture images at nearly any position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited for easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric, and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration, and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object through joint image processing, generating highly detailed, accurate, and complete reconstructions.
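    Georegistration against surveyed GCPs can be illustrated with a closed-form seven-parameter similarity transform (scale, rotation, translation) via the Umeyama method. The paper integrates GCPs directly into the bundle adjustment, so this standalone fit is a simplified stand-in with synthetic data:

```python
import numpy as np

def similarity_transform(src, dst):
    """Return s, R, t with dst_i ≈ s * R @ src_i + t (least-squares, Umeyama)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centered point sets.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Demo: six synthetic GCPs, known transform, exact recovery (noise-free).
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))                         # GCPs in the model frame
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
dst = 2.0 * src @ Rz.T + np.array([10.0, -5.0, 3.0])  # surveyed coordinates
s, R, t = similarity_transform(src, dst)
print(round(s, 6), np.round(t, 6))
```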

    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, including provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. Comment: Accepted for publication in CVI
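    The complexity advantage of a hierarchical pipeline can be sketched with a toy cost model: sequential SfM re-optimizes a growing model after every added image, while a balanced hierarchical scheme merges partial models pairwise, touching each image only once per merge level. The counts below are illustrative, not Samantha's actual analysis:

```python
# Toy cost model: count "images touched by an optimization" for sequential
# versus balanced hierarchical structure-and-motion.

def sequential_cost(n):
    # After adding image k, the whole k-image model is re-adjusted: O(n^2) total.
    return sum(range(1, n + 1))

def hierarchical_cost(n):
    # Balanced pairwise merges (n a power of two): each of the log2(n) levels
    # performs n/size merges, each adjusting a model of `size` images.
    cost, size = 0, 2
    while size <= n:
        cost += (n // size) * size
        size *= 2
    return cost

print(sequential_cost(1024))    # 524800, growing as O(n^2)
print(hierarchical_cost(1024))  # 10240 = n * log2(n), growing as O(n log n)
```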

    A HAND-HELD STRUCTURE FROM MOTION PHOTOGRAMMETRIC APPROACH TO RIPARIAN AND STREAM ASSESSMENT AND MONITORING

    Two of the biggest weaknesses in stream restoration and monitoring are: 1) subjective estimation and subsequent comparison of changes in channel form, vegetative cover, and in-stream habitat; and 2) the high costs in terms of financing, human resources, and time necessary to make these estimates. Remote sensing can be used to remedy these weaknesses and save organizations focused on restoration both money and time. However, implementing traditional remote sensing approaches via autonomous aerial systems or light detection and ranging systems is either prohibitively expensive or impossible along small streams with dense vegetation. Hand-held Structure from Motion Multi-view Stereo (SfM-MVS) photogrammetry can solve these problems by offering a resource-efficient approach for producing 3D models of a variety of environments. SfM-MVS photogrammetry is the result of cutting-edge advances in computer vision algorithms and discipline-specific research in the geosciences. This study found that images taken by GoPro, iPhone, and Digital Single-Lens Reflex cameras were all capable of producing 3D representations of heavily vegetated stream corridors with minimal image post-processing, using workflows within Agisoft Metashape™. Analysis within Agisoft Metashape™ produced measurements from 3D textured mesh models, digital elevation models, and orthomosaics that were comparable to the physical measurements taken at the time of each survey, using an arbitrary latitude, longitude, and elevation coordinate scheme. The methods described in this study could be applied in future stream restoration and monitoring efforts to complement in-person collection and measurement while limiting the effort and money spent.
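    With an arbitrary (non-georeferenced) coordinate scheme, model measurements only become physical after scaling by a known reference length placed in the scene. A minimal sketch; all numbers are made up for illustration:

```python
# Hand-held SfM without GNSS yields a model in arbitrary units; one known
# reference distance converts model-space measurements to metric ones.

def model_scale(known_length_m, measured_model_units):
    """Metres per model unit, from a single reference distance."""
    return known_length_m / measured_model_units

scale = model_scale(1.0, 0.137)   # hypothetical: a 1 m scale bar spans 0.137 model units
channel_width_m = 2.74 * scale    # hypothetical channel width measured in the model
print(f"channel width: {channel_width_m:.2f} m")
```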

    Featuremetric Refined Structure From Motion with a Hand-held Camera and Point Cloud Registration in Urban Scenarios

    Structure from Motion (SfM), the task of recovering 3D scene structure and camera poses from 2D images or video frames, is a prominent topic in 3D computer vision. SfM has applications in areas such as 3D modeling, augmented reality, robotics, and autonomous systems. Recent research has significantly improved the accuracy of SfM and addressed many of its associated challenges. This thesis reviews and compares state-of-the-art approaches, with a special focus on the paper "Pixel-Perfect Structure-from-Motion with Featuremetric Refinement". In our experiment, several videos of the city of Padova were captured using a bike-mounted camera and processed through the SfM algorithm. The generated 3D reconstructions are refined and re-evaluated after applying the aforementioned method. Next, an algorithm is developed to register the generated local point clouds with a global, georeferenced point cloud of the whole city, acquired by an airplane equipped with a high-resolution LiDAR. Qualitative and quantitative experiments demonstrate promising results in generating accurate 3D reconstructions and consistent alignments between the reconstructed local point clouds and the global point cloud.
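    Registering a local SfM cloud to a global LiDAR cloud is commonly refined with rigid point-to-point ICP once a coarse georeferenced alignment exists. A minimal sketch with brute-force nearest neighbors and synthetic data; illustrative, not the registration algorithm developed in the thesis:

```python
import numpy as np

def icp(source, target, iters=20):
    """Rigid point-to-point ICP: returns R, t with R @ source_i + t ≈ target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest-neighbor correspondences (use a k-d tree at scale).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        # Kabsch: best rigid transform for the current correspondences.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Demo: recover a small misalignment of a synthetic "LiDAR" cloud.
rng = np.random.default_rng(1)
target = rng.uniform(-5, 5, size=(200, 3))
theta = 0.03
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])   # misaligned local cloud
R, t = icp(source, target)
residual = np.abs(source @ R.T + t - target).max()
print(f"max residual after ICP: {residual:.2e}")
```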