
    Adaptive View Planning for Aerial 3D Reconstruction

    With the proliferation of small aerial vehicles, acquiring close-up aerial imagery for high-quality reconstruction of complex scenes is gaining importance. We present an adaptive view planning method to collect such images in an automated fashion. We start by sampling a small set of views to build a coarse proxy of the scene. We then present (i) a method that builds a view manifold for view selection, and (ii) an algorithm to select a sparse set of views. The vehicle then visits these viewpoints to cover the scene, and the procedure is repeated until reconstruction quality converges or a desired level of quality is achieved. The view manifold provides an effective efficiency/quality compromise between using the entire 6-degree-of-freedom pose space and using a single view hemisphere to select the views. Our results show that, in contrast to existing "explore and exploit" methods, which collect only two sets of views, reconstruction quality can be drastically improved by adding a third set. They also indicate that three rounds of data collection are sufficient even for very complex scenes. We compare our algorithm to existing methods in three challenging scenes, requiring each algorithm to select the same number of views. Our algorithm generates views that produce the least reconstruction error.
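    The abstract describes an iterative explore-and-refine loop. The sketch below illustrates that control flow only; every helper (capture_views, build_proxy, build_view_manifold, select_sparse_views, reconstruction_error) is a hypothetical placeholder standing in for the paper's components, not the authors' implementation.

    import random

    def capture_views(views):
        # Placeholder: pretend each visited viewpoint yields one image.
        return [f"image@{v}" for v in views]

    def build_proxy(images):
        # Placeholder for the coarse scene proxy built from current images.
        return {"n_images": len(images)}

    def build_view_manifold(proxy):
        # Placeholder: candidate poses restricted from the full 6-DoF space.
        return [(random.random(), random.random()) for _ in range(100)]

    def select_sparse_views(manifold, k=10):
        # Placeholder for the paper's sparse view-selection algorithm.
        return random.sample(manifold, k)

    def reconstruction_error(images):
        # Placeholder: error shrinks as more images are collected.
        return 1.0 / (1 + len(images))

    def adaptive_view_planning(initial_views, max_rounds=3, tol=1e-4):
        images = capture_views(initial_views)      # small seed set of views
        prev_error = float("inf")
        for _ in range(max_rounds):                # the paper finds 3 rounds suffice
            proxy = build_proxy(images)            # coarse proxy of the scene
            manifold = build_view_manifold(proxy)  # view manifold for selection
            views = select_sparse_views(manifold)  # sparse set of next viewpoints
            images += capture_views(views)         # vehicle visits the viewpoints
            error = reconstruction_error(images)
            if prev_error - error < tol:           # quality converged
                break
            prev_error = error
        return images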

    Cloud-free resolution element statistics program

    A computer program computes the number of cloud-free elements in a field of view and the percentage of the total field of view occupied by clouds. It eliminates the human error inherent in visually estimating cloud statistics from aerial photographs.
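    The statistic itself is simple to state; a minimal sketch follows, assuming the field of view is given as a grid of resolution elements already classified as cloudy or clear (the grid format is an assumption, not the original program's input).

    def cloud_statistics(grid):
        """grid: 2D list of booleans, True = element covered by cloud."""
        total = sum(len(row) for row in grid)
        cloudy = sum(sum(row) for row in grid)
        cloud_free = total - cloudy                       # cloud-free elements
        percent_cloud = 100.0 * cloudy / total if total else 0.0
        return cloud_free, percent_cloud

    # Example: a 2x3 field of view with two cloudy elements.
    free, pct = cloud_statistics([[True, False, False],
                                  [False, True, False]])
    print(free, pct)  # 4 cloud-free elements, ~33.3% cloud cover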

    X-View: Graph-Based Semantic Multi-View Localization

    Global registration of multi-view robot data is a challenging task. Appearance-based global localization approaches often fail under drastic viewpoint changes, as their representations have limited viewpoint invariance. This work is based on the idea that human-made environments contain rich semantics that can be used to disambiguate global localization. Here, we present X-View, a Multi-View Semantic Global Localization system. X-View leverages semantic graph descriptor matching for global localization, enabling localization under drastically different viewpoints. While the approach is general in terms of the semantic input data, we present and evaluate an implementation on visual data. We demonstrate the system in experiments on the publicly available SYNTHIA dataset, on a realistic urban dataset recorded with a simulator, and on real-world StreetView data. Our findings show that X-View is able to globally localize aerial-to-ground and ground-to-ground robot data of drastically different viewpoints. Our approach achieves an accuracy of up to 85% on global localizations in the multi-view case, while the benchmarked baseline appearance-based methods reach up to 75%.
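    As a rough illustration of graph-based semantic matching in the spirit of X-View, the sketch below describes each node by the semantic labels in its 1-hop neighborhood and scores two graphs by descriptor overlap. This label-histogram descriptor and the matching score are simplifying stand-ins, not the paper's own descriptor or matcher.

    from collections import Counter

    def node_descriptors(labels, edges):
        """labels: {node: semantic_label}; edges: iterable of (u, v) pairs."""
        neigh = {n: [] for n in labels}
        for u, v in edges:
            neigh[u].append(labels[v])
            neigh[v].append(labels[u])
        # Describe each node by its own label plus its neighbors' labels.
        return {n: Counter([labels[n]] + neigh[n]) for n in labels}

    def match_score(query_desc, world_desc):
        # Jaccard-style overlap, best match per query node, averaged.
        def overlap(c1, c2):
            return sum((c1 & c2).values()) / max(sum((c1 | c2).values()), 1)
        return sum(max(overlap(dq, dw) for dw in world_desc.values())
                   for dq in query_desc.values()) / len(query_desc)

    # A small query subgraph seen from one viewpoint vs. a global graph.
    query = node_descriptors({0: "car", 1: "road"}, [(0, 1)])
    world = node_descriptors({0: "car", 1: "road", 2: "building"},
                             [(0, 1), (1, 2)])
    print(match_score(query, world))  # higher = better localization candidate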