
    Analysis and Exploitation of Automatically Generated Scene Structure from Aerial Imagery

    Recent advances in computer vision, together with ever-increasing computational power, have opened up opportunities in automated photogrammetry. Many researchers have focused on using these powerful computer vision algorithms to extract three-dimensional point clouds of scenes from multi-view imagery, with the ultimate goal of creating a photo-realistic scene model. However, geographically accurate three-dimensional scene models have the potential to be exploited for much more than visualization. This work utilizes automatically generated scene structure from near-nadir aerial imagery to identify and classify objects within the structure through the analysis of spatial-spectral information; the restriction to near-nadir imagery is imposed because of its common availability. Popular third-party computer vision algorithms are used to generate the scene structure. A voxel-based approach for surface estimation is developed using Manhattan-world assumptions, along with a surface estimation confidence metric. This approach provides the basis for further analysis of surface materials, incorporating spectral information. Two cases of spectral analysis are examined: when additional hyperspectral imagery of the reconstructed scene is available, and when only R,G,B spectral information can be obtained. A method for registering the surface estimate to hyperspectral imagery through orthorectification is developed. Atmospherically corrected hyperspectral imagery is used to assign reflectance values to estimated surface facets for physical simulation with DIRSIG. For the R,G,B-limited case, a spatial-spectral region-growing segmentation algorithm is developed to identify candidate materials for user attribution. Finally, an analysis of the geographic accuracy of automatically generated three-dimensional structure is performed. An end-to-end, semi-automated workflow is developed, described, and made available for use.
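
    As a rough illustration of the R,G,B-limited case, the following is a minimal sketch of spatial-spectral region growing in Python. The 4-connectivity, fixed tolerance, and running-mean acceptance criterion are assumptions for illustration, not the parameters of the algorithm developed in this work.

    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=0.05):
        """Grow a region from `seed` over 4-connected pixels whose R,G,B
        distance to the running region mean stays below `tol` (hypothetical
        threshold; img is assumed float, scaled to [0, 1])."""
        h, w, _ = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        mean = img[seed].astype(float)
        count = 1
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                        and np.linalg.norm(img[nr, nc] - mean) < tol):
                    mask[nr, nc] = True
                    # update the running spectral mean with the new pixel
                    mean = (mean * count + img[nr, nc]) / (count + 1)
                    count += 1
                    queue.append((nr, nc))
        return mask

    Each accepted pixel updates the running spectral mean, so the region adapts to slow illumination gradients while still rejecting spectrally distinct materials.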

    Aerial video geo-registration using terrain models from dense and coherent stereo matching

    In the context of aerial imagery, one of the first steps toward coherent processing of the information contained in multiple images is geo-registration, which consists of assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects, and fusion of data acquired from multiple sensors. Existing approaches to this problem require, in addition to a precise characterization of the camera sensor, high-resolution reference images or terrain elevation models, which are usually not publicly available or are out of date. Building on the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model that replaces the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, over the full 2D image domain. In the experiments, aerial images from synthetic video sequences are used to validate the proposed technique.
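
    The interpolation of the scattered back-projected 3D points into a dense elevation model can be sketched as follows, assuming SciPy is available; the variational disparity estimation itself is beyond the scope of a short example.

    import numpy as np
    from scipy.interpolate import griddata

    def elevation_model(points_xyz, grid_res=1.0):
        """Interpolate scattered (x, y, z) points onto a regular grid.
        `grid_res` is an illustrative cell size in scene units."""
        x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
        xi = np.arange(x.min(), x.max(), grid_res)
        yi = np.arange(y.min(), y.max(), grid_res)
        gx, gy = np.meshgrid(xi, yi)
        # linear interpolation inside the convex hull of the points;
        # cells outside it are left as NaN
        gz = griddata((x, y), z, (gx, gy), method="linear")
        return gx, gy, gz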

    Landsat Imagery from a CubeSat: Results and Operational Lessons from the R3 Satellite's First 18 Months in Space

    R3 is a 3-U CubeSat launched on a Rocket Lab Electron into a 500 km circular orbit at 85° inclination on December 16, 2018. The spacecraft flies a multispectral sensor that takes data in the six Landsat visible and near-infrared bands. The R3 sensor mates a custom refractive telescope with a Materion Precision Optics Landsat filter and an ON Semiconductor fast-framing, high-sensitivity Si CMOS array to produce 50-km-wide, 44-m-resolution Landsat-like image strips. Data are taken in push-broom mode and are downlinked via a 100 Mbps compact lasercom system. Frames are then co-added on the ground in time-delay-integration (TDI) fashion to increase signal-to-noise ratio and create multispectral Earth images from the compact sensor. The system is an engineering concept demonstration of a compact multispectral sensor in CubeSat form. We describe our ConOps, flight operations, sensor focus and alignment, initial imaging checkout, and initial comparisons of R3 data to Landsat-8 imagery of the same Earth locations. RGB, color-infrared, and normalized difference vegetation index (NDVI) products are compared between R3 and Landsat-8. Results show good multispectral image quality from the CubeSat sensor and illustrate both the ability of R3 to detect vegetation and other features in a manner similar to Landsat and the challenge of perfectly exposing all six VIS/NIR Landsat bands using our commercial 10-bit CMOS array. We also highlight the performance of the compact laser communications system, which enabled the successful performance of this mission.
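
    The ground-side TDI co-addition and the NDVI product can be sketched as below, assuming the frames are already registered to whole-pixel along-track shifts; the one-pixel-per-frame shift is an illustrative value, not the R3 frame rate.

    import numpy as np

    def tdi_coadd(frames, shift_per_frame=1):
        """Shift-and-add N registered frames along track; SNR grows
        roughly as sqrt(N). np.roll wraps at the edges, which a real
        pipeline would crop away."""
        acc = np.zeros_like(frames[0], dtype=float)
        for i, f in enumerate(frames):
            acc += np.roll(f.astype(float), -i * shift_per_frame, axis=0)
        return acc / len(frames)

    def ndvi(nir, red):
        """Normalized difference vegetation index from co-added bands."""
        nir, red = nir.astype(float), red.astype(float)
        return (nir - red) / (nir + red + 1e-9)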

    On the use of UAVs in mining and archaeology - geo-accurate 3D reconstructions using various platforms and terrestrial views

    During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has made it easy to acquire spatial data and create realistic and accurate 3D models. Multicopter UAVs can record highly overlapping images, ranging from almost terrestrial camera positions to oblique and nadir aerial views, because they can navigate slowly, hover, and capture images at nearly any position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are ideally suited to enabling easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric, and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie- and control-point information. Ground control points (GCPs) are integrated directly into the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration, and ground sampling distance. Our approach is furthermore suited to seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial, as well as inside views) into a single reconstruction. By jointly processing aerial images from a UAV together with terrestrial images and inside views of an object, we are able to enrich 3D models and generate highly detailed, accurate, and complete reconstructions.
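
    A minimal sketch of the georegistration idea follows, assuming a 7-parameter similarity (Helmert) alignment of a free-network reconstruction to GCP coordinates; the actual pipeline folds the GCPs into the bundle adjustment itself, which this example does not attempt.

    import numpy as np

    def similarity_transform(src, dst):
        """Least-squares s, R, t with dst ~ s * R @ src + t (Umeyama
        alignment); src and dst are (N, 3) corresponding point sets,
        e.g. model-frame coordinates and surveyed GCP coordinates."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:   # guard against reflections
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / A.var(0).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t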

    Precision Navigation Using Pre-Georegistered Map Data

    Navigation performance in small unmanned aerial vehicles (UAVs) is adversely affected by limitations in current sensor technology for small, lightweight sensors. Because most UAVs are equipped with cameras for mission-related purposes, it is advantageous to use the camera to improve the navigation solution. This research improves navigation by matching camera images to a priori georegistered image data and combining this update with existing image-aided navigation technology. The georegistration matching is done by projecting the images into the same plane and extracting features with the Scale Invariant Feature Transform (SIFT) [5] and Speeded-Up Robust Features (SURF) [3]. Putative feature matches are filtered with the Random Sample Consensus (RANSAC) [4] algorithm, which generates a model that transforms feature locations from one image to the other. In addition to matching the image taken by the UAV to the stored images, the effect of matching the images after transforming one to the perspective of the other is investigated. One of the chief advantages of this method is the ability to provide both an absolute position and an absolute attitude update. Test results using 15 minutes of aerial video footage at altitudes ranging from 1000 m to 1500 m demonstrated that transforming the image data from one perspective to the other improves performance. The best system configuration uses SIFT on an image transformed into the satellite perspective and matched to satellite map data. This process achieves attitude errors on the order of milliradians and vertical position errors on the order of a few meters. The along-track, cross-track, and heading errors are higher than expected, and further work is needed on reliability. Once this is accomplished, the method should improve the navigation solution of an aircraft, or even provide navigation-grade position and attitude estimates in a GPS-denied environment.
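
    The image-to-map matching step can be sketched with OpenCV as follows: SIFT features, Lowe's ratio test, and a RANSAC-estimated homography that maps the UAV view into the satellite perspective. The parameter values here (the 0.75 ratio, the 3-pixel reprojection threshold) are common defaults, not the values used in this research.

    import cv2
    import numpy as np

    def match_to_map(uav_img, map_img):
        """Match a (grayscale) UAV image to georegistered map imagery."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(uav_img, None)
        k2, d2 = sift.detectAndCompute(map_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.75 * n.distance]   # Lowe's ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects outlier matches while fitting the homography
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, inliers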

    Toward Image-Based Three-Dimensional Reconstruction from CubeSats: Impacts of Spatial Resolution and SNR on Point Cloud Quality

    The adoption of cube-satellites (cubesats) by the space community has drastically lowered the cost of access to space and shortened the development lifecycle relative to traditional decade-long programs costing hundreds of millions of dollars. Rapid deployment and low cost are attractive features of cubesat-based imaging that are conducive to applications such as disaster response and monitoring. One proposed application is 3D surface modeling through a high-revisit-rate constellation of cubesat imagers. This work begins with the characterization of an existing design for a cubesat imager in terms of ground sample distance (GSD), signal-to-noise ratio (SNR), and smear. From this characterization, an existing 3D workflow is applied to datasets that have been degraded within the regime of spatial resolutions and signal-to-noise ratios anticipated for the cubesat imager. The fidelity of the resulting point clouds is assessed locally for both an urban and a natural scene. The height of a building and the normals to its surfaces are calculated from the urban scene, while quarry depth estimates and rough volume estimates of a pile of rocks are produced from the natural scene. Though the reconstructed scene geometry and the completeness of the scene suffer noticeably from the degraded imagery, results indicate that useful information can still be extracted with some of these techniques up to a simulated GSD of 2 meters.
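
    The degradation applied to the input imagery can be sketched as block-averaging to a coarser GSD followed by additive Gaussian noise for a target mean SNR; this is a simplification of a full sensor model, and the default factors shown are illustrative.

    import numpy as np

    def degrade(img, gsd_factor=2, target_snr=50.0):
        """Downsample a single-band image by block-averaging, then add
        zero-mean Gaussian noise so mean(signal) / sigma ~ target_snr."""
        h, w = img.shape
        h, w = h - h % gsd_factor, w - w % gsd_factor
        blocks = img[:h, :w].astype(float).reshape(
            h // gsd_factor, gsd_factor, w // gsd_factor, gsd_factor)
        coarse = blocks.mean(axis=(1, 3))
        sigma = coarse.mean() / target_snr
        return coarse + np.random.normal(0.0, sigma, coarse.shape)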