138 research outputs found

    View Selection with Geometric Uncertainty Modeling

    Estimating positions of world points from features observed in images is a key problem in 3D reconstruction, image mosaicking, simultaneous localization and mapping, and structure from motion. We consider a special instance in which there is a dominant ground plane G viewed from a parallel viewing plane S above it. Such instances commonly arise, for example, in aerial photography. Consider a world point g ∈ G and its worst-case reconstruction uncertainty ε(g, S) obtained by merging all possible views of g chosen from S. We first show that one can pick two views s_p and s_q such that the uncertainty ε(g, {s_p, s_q}) obtained using only these two views is almost as good as (i.e., within a small constant factor of) ε(g, S). Next, we extend the result to the entire ground plane G and show that one can pick a small subset S′ ⊆ S (which grows only linearly with the area of G) and still obtain a constant-factor approximation, for every point g ∈ G, to the minimum worst-case estimate obtained by merging all views in S. Finally, we present a multi-resolution view selection method which extends our techniques to non-planar scenes. We show that the method can produce rich and accurate dense reconstructions with a small number of views. Our results provide a view selection mechanism with provable performance guarantees which can drastically increase the speed of scene reconstruction algorithms. In addition to the theoretical results, we demonstrate their effectiveness in an application where aerial imagery is used for monitoring farms and orchards.
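    The two-view setting can be illustrated with a toy 2D sketch (my own construction, not the paper's algorithm; the bearing-noise model, function names, and all parameters are assumptions): triangulate a ground point g from pairs of views on a line at height h, and take the worst-case error over bounded perturbations of the two bearing angles.

```python
import numpy as np

def intersect(p1, a1, p2, a2):
    """Intersect two rays p_i + t * (cos a_i, sin a_i); return the point."""
    d1 = np.array([np.cos(a1), np.sin(a1)])
    d2 = np.array([np.cos(a2), np.sin(a2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

def worst_case_uncertainty(g, views, h=10.0, delta=1e-3):
    """Worst-case reconstruction error of ground point (g, 0), merging the
    best pair among the view x-positions on the line y = h, when each
    bearing angle may be off by up to +/- delta radians (a toy noise model)."""
    best = np.inf
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            p1, p2 = (views[i], h), (views[j], h)
            a1 = np.arctan2(-h, g - views[i])  # nominal bearing from view i
            a2 = np.arctan2(-h, g - views[j])
            worst = 0.0
            for s1 in (-delta, delta):          # corner cases of the
                for s2 in (-delta, delta):      # angle-perturbation box
                    q = intersect(p1, a1 + s1, p2, a2 + s2)
                    worst = max(worst, np.hypot(q[0] - g, q[1]))
            best = min(best, worst)
    return best
```

    Even this toy model shows the geometric intuition behind the result: a well-separated pair of views (rays crossing near 90°) yields far lower worst-case uncertainty than a nearly coincident pair, whose rays are close to parallel.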

    Combining Professionalism, Nation Building and Public Service: The Professional Project of the Israeli Bar 1928-2002

    Measuring tree morphology for phenotyping is an essential but labor-intensive activity in horticulture. Researchers often rely on manual measurements, which may not be accurate, for example when measuring tree volume. Recent approaches to automating the measurement process rely on LIDAR measurements coupled with high-accuracy GPS. Usually each side of a row is reconstructed independently and the reconstructions are then merged using GPS information. Such approaches have two disadvantages: (1) they rely on specialized and expensive equipment, and (2) since the reconstruction process does not simultaneously use information from both sides, the side reconstructions may not be accurate. We also show that standard loop-closure methods do not necessarily align tree trunks well. In this paper, we present a novel vision system that employs only an RGB-D camera to estimate morphological parameters. A semantics-based mapping algorithm merges the 3D models of the two sides of a tree row, with the integrated semantic information refined by robust fitting algorithms. We focus on measuring tree height, canopy volume and trunk diameter from the optimized 3D model. Experiments conducted in real orchard…
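    A robust-fitting step for trunk diameter can be sketched as a generic RANSAC circle fit on a horizontal slice of trunk points (an illustrative stand-in, not the authors' pipeline; the function names, tolerance, and iteration count are all assumptions):

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) circle fit; returns center (cx, cy) and radius."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def ransac_trunk_diameter(pts, iters=200, tol=0.01, seed=0):
    """RANSAC circle fit over a 2D cross-section of trunk points (meters);
    returns the diameter of the best consensus circle."""
    rng = np.random.default_rng(seed)
    best_inliers, best_fit = 0, None
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        cx, cy, r = fit_circle(sample)
        d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
        n = int((d < tol).sum())
        if n > best_inliers:
            best_inliers, best_fit = n, (cx, cy, r)
    # refit on all inliers of the best consensus model
    cx, cy, r = best_fit
    d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
    cx, cy, r = fit_circle(pts[d < tol])
    return 2 * r
```

    The consensus step is what makes the estimate robust: leaves, branches, and registration noise in the slice simply fail the inlier test instead of biasing the fit.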

    Predicting Energy Consumption of Ground Robots On Uneven Terrains

    Optimizing energy consumption for robot navigation in fields requires energy-cost maps. However, obtaining such a map is still challenging, especially for large, uneven terrains. Physics-based energy models work for uniform, flat surfaces but do not generalize well to such terrains. Furthermore, slopes make the energy consumption at every location directional, which adds to the complexity of data collection and energy prediction. In this paper, we address these challenges in a data-driven manner. We consider a function which takes terrain geometry and robot motion direction as input and outputs the expected energy consumption. The function is represented as a ResNet-based neural network whose parameters are learned from field-collected data. The prediction accuracy of our method is within 12% of the ground truth in test environments that are unseen during training. We compare our method to a baseline from the literature that uses a basic physics-based model, and show that ours outperforms it by more than 10% in prediction error. More importantly, our method generalizes better when applied to test data from new environments with various slope angles and navigation directions.
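    At a very high level, such a learned energy model might be sketched as a small residual MLP over a flattened terrain patch plus a heading (a from-scratch NumPy stand-in for the paper's ResNet-based network; the architecture, dimensions, and names are all assumptions, and the weights here are untrained):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ResidualBlock:
    """Fully connected residual block: y = x + W2 @ relu(W1 @ x)."""
    def __init__(self, dim, rng):
        self.W1 = rng.normal(0, np.sqrt(2 / dim), (dim, dim))
        self.W2 = rng.normal(0, np.sqrt(2 / dim), (dim, dim))

    def __call__(self, x):
        return x + self.W2 @ relu(self.W1 @ x)

class EnergyNet:
    """Maps a flattened terrain-elevation patch plus a heading angle to a
    scalar energy estimate; heading enters as (cos, sin) so the prediction
    is directional, matching the slope-dependence described above."""
    def __init__(self, patch_dim, hidden=32, blocks=3, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = patch_dim + 2  # elevation patch + (cos, sin) of heading
        self.W_in = rng.normal(0, np.sqrt(2 / in_dim), (hidden, in_dim))
        self.blocks = [ResidualBlock(hidden, rng) for _ in range(blocks)]
        self.w_out = rng.normal(0, np.sqrt(1 / hidden), hidden)

    def predict(self, patch, heading):
        feats = np.concatenate([patch, [np.cos(heading), np.sin(heading)]])
        x = relu(self.W_in @ feats)
        for block in self.blocks:
            x = block(x)
        return float(self.w_out @ x)
```

    Encoding the heading as (cos, sin) rather than a raw angle keeps the input continuous across the 0/2π wrap-around, and the residual connections are the "ResNet" ingredient that eases training of the stacked blocks.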