
    AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming

    The combination of the aerial survey capabilities of Unmanned Aerial Vehicles (UAVs) with the targeted intervention abilities of agricultural Unmanned Ground Vehicles (UGVs) can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. The maps built using robots of different types show differences in size, resolution, and scale; the associated geolocation data may be inaccurate and biased; and the repetitiveness of both the visual appearance and the geometric structures found in agricultural contexts renders classical map merging techniques ineffective. In this paper, we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation which includes a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large-displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed by exploiting only meaningful parts of the registered maps. We evaluate our system using real-world data for three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach along with the acquired datasets with this paper.
    Comment: Published in IEEE Robotics and Automation Letters, 201
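    A hedged sketch of the flow-then-vote idea outlined in this abstract: compute a dense optical flow field between two co-gridded vegetation-index rasters, vote the flow vectors into coarse bins to keep only the dominant coherent displacement, and fit a preliminary transform to the surviving correspondences. Farnebäck flow, the 32-bin vote, and the similarity-transform fit below are stand-ins chosen for illustration, not the exact components used in AgriColMap.

```python
# Minimal flow-then-vote sketch; Farneback flow and the affine fit are stand-ins.
import numpy as np
import cv2

def register_grids(ugv_vi: np.ndarray, uav_vi: np.ndarray):
    """ugv_vi, uav_vi: single-channel vegetation-index grids, same shape, values in [0, 1]."""
    a = (ugv_vi * 255).astype(np.uint8)
    b = (uav_vi * 255).astype(np.uint8)

    # Dense flow from the UGV grid towards the UAV grid (stand-in for the
    # multimodal large-displacement flow used in the paper).
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 5, 31, 5, 7, 1.5, 0)

    # Vote flow vectors into coarse bins and keep only those that agree
    # with the dominant (most voted) displacement.
    ys, xs = np.mgrid[0:a.shape[0], 0:a.shape[1]]
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    hist, xe, ye = np.histogram2d(dx, dy, bins=32)
    ix, iy = np.unravel_index(np.argmax(hist), hist.shape)
    keep = ((dx >= xe[ix]) & (dx < xe[ix + 1]) &
            (dy >= ye[iy]) & (dy < ye[iy + 1]))

    src = np.stack([xs.ravel()[keep], ys.ravel()[keep]], axis=1).astype(np.float32)
    dst = src + np.stack([dx[keep], dy[keep]], axis=1).astype(np.float32)

    # A rigid-ish preliminary alignment from the coherent correspondences;
    # the paper instead infers a non-rigid alignment and then refines it.
    idx = np.random.default_rng(0).choice(len(src), size=min(len(src), 2000), replace=False)
    M, inliers = cv2.estimateAffinePartial2D(src[idx], dst[idx])
    return M, keep.reshape(a.shape)
```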

    An Effective Multi-Cue Positioning System for Agricultural Robotics

    Self-localization is a crucial capability for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are prone to fail in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper.
    Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
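    The fusion idea can be illustrated with a minimal sketch: noisy relative odometry, raw GPS fixes, and an optional altitude prior (standing in for the Digital Elevation Model constraint) are fused by non-linear least squares over 3D positions. Orientations, the motion model, and the Markov Random Field term from the paper are omitted; the weights and the synthetic data are illustrative assumptions.

```python
# Minimal pose-fusion skeleton: odometry edges + GPS priors + altitude prior.
import numpy as np
from scipy.optimize import least_squares

def fuse(odom_deltas, gps_fixes, dem_alt=None, w_odom=10.0, w_gps=1.0, w_dem=5.0):
    """odom_deltas: (N-1, 3) relative translations; gps_fixes: (N, 3) raw GPS
    positions; dem_alt: optional (N,) terrain altitude per pose (hypothetical
    DEM lookup). Returns optimized (N, 3) positions."""
    n = len(gps_fixes)

    def residuals(x):
        p = x.reshape(n, 3)
        r = [w_odom * ((p[1:] - p[:-1]) - odom_deltas).ravel(),  # odometry edges
             w_gps * (p - gps_fixes).ravel()]                    # GPS priors
        if dem_alt is not None:
            r.append(w_dem * (p[:, 2] - dem_alt))                # altitude prior
        return np.concatenate(r)

    sol = least_squares(residuals, gps_fixes.ravel())
    return sol.x.reshape(n, 3)

# Tiny usage example with synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.cumsum(np.tile([1.0, 0.0, 0.0], (20, 1)), axis=0)
    odom = np.diff(truth, axis=0) + rng.normal(0, 0.02, (19, 3))
    gps = truth + rng.normal(0, 1.0, truth.shape)
    est = fuse(odom, gps, dem_alt=truth[:, 2])
    print(np.linalg.norm(est - truth) < np.linalg.norm(gps - truth))
```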

    Mapping and classification of ecologically sensitive marine habitats using unmanned aerial vehicle (UAV) imagery and object-based image analysis (OBIA)

    Nowadays, emerging technologies, such as long-range transmitters, increasingly miniaturized components for positioning, and enhanced imaging sensors, have led to an upsurge in the availability of new ecological applications for remote sensing based on unmanned aerial vehicles (UAVs), sometimes referred to as “drones”. In fact, structure-from-motion (SfM) photogrammetry coupled with imagery acquired by UAVs offers a rapid and inexpensive tool to produce high-resolution orthomosaics, giving ecologists a new way for responsive, timely, and cost-effective monitoring of ecological processes. Here, we adopted a lightweight quadcopter as an aerial survey tool and an object-based image analysis (OBIA) workflow to demonstrate the strength of such methods in producing very high spatial resolution maps of sensitive marine habitats. Three different coastal environments were mapped using the autonomous flight capability of a lightweight UAV equipped with a fully stabilized consumer-grade RGB digital camera. In particular, we investigated a Posidonia oceanica seagrass meadow, a rocky coast with nurseries for juvenile fish, and two sandy areas hosting biogenic reefs of Sabellaria alveolata. We adopted, for the first time, UAV-based raster thematic maps of these key coastal habitats, produced after OBIA classification, as a new method for fine-scale, low-cost, and time-saving characterization of sensitive marine environments, which may lead to more effective and efficient monitoring and management of natural resources.
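    A hedged sketch of a generic OBIA pipeline of the kind described: segment the orthomosaic into image objects, compute simple per-object spectral features, classify the objects with a supervised model, and map the result back to a per-pixel thematic raster. SLIC superpixels, the six-feature set, and the random forest below are assumptions made for illustration, not the exact configuration used in the study.

```python
# Generic OBIA sketch: segmentation -> per-object features -> supervised classification.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def obia_classify(rgb, train_mask, n_segments=2000):
    """rgb: (H, W, 3) float orthomosaic in [0, 1]; train_mask: (H, W) int array,
    0 = unlabelled, >0 = habitat class from hand-digitised training polygons."""
    # 1) Segmentation into image objects (SLIC superpixels as a stand-in).
    segments = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    n_obj = segments.max() + 1
    counts = np.maximum(np.bincount(segments.ravel(), minlength=n_obj), 1)

    # 2) Per-object spectral features: mean and standard deviation of each band.
    feats = np.zeros((n_obj, 6))
    for b in range(3):
        band = rgb[..., b].ravel()
        mean = np.bincount(segments.ravel(), weights=band, minlength=n_obj) / counts
        var = np.bincount(segments.ravel(), weights=(band - mean[segments.ravel()]) ** 2,
                          minlength=n_obj) / counts
        feats[:, b], feats[:, 3 + b] = mean, np.sqrt(var)

    # 3) Majority training label per object (0 = no label on that object).
    labels = np.zeros(n_obj, dtype=int)
    for obj in np.unique(segments[train_mask > 0]):
        vals = train_mask[(segments == obj) & (train_mask > 0)]
        labels[obj] = np.bincount(vals).argmax()

    # 4) Supervised classification of objects, mapped back to a per-pixel raster.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(feats[labels > 0], labels[labels > 0])
    return clf.predict(feats)[segments]
```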

    Supervised learning on graphs of spatio-temporal similarity in satellite image sequences

    High-resolution satellite image sequences are multidimensional signals composed of spatio-temporal patterns associated with numerous and various phenomena. Bayesian methods have previously been proposed in (Heas and Datcu, 2005) to code the information contained in satellite image sequences in a graph representation. Based on such a representation, this paper further presents a supervised learning methodology for the semantics associated with spatio-temporal patterns occurring in satellite image sequences. It enables the recognition and the probabilistic retrieval of similar events. Indeed, graphs are attached to statistical models for spatio-temporal processes, which in turn describe physical changes in the observed scene. Therefore, we adjust a parametric model evaluating similarity types between graph patterns in order to represent user-specific semantics attached to spatio-temporal phenomena. The learning step is performed by the incremental definition of similarity types via user-provided spatio-temporal pattern examples attached to positive and/or negative semantics. From these examples, probabilities are inferred using a Bayesian network and a Dirichlet model. This links user interest to a specific similarity model between graph patterns. According to the current state of learning, semantic posterior probabilities are updated for all possible graph patterns so that similar spatio-temporal phenomena can be recognized and retrieved from the image sequence. A few experiments performed on a multi-spectral SPOT image sequence illustrate the proposed spatio-temporal recognition method.
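    The incremental Dirichlet-style update suggested by the abstract can be sketched as follows: each graph pattern is summarised by a discrete similarity type, user feedback marks example patterns as positive or negative for a semantic label, and a Beta/Dirichlet posterior per similarity type gives the posterior probability that unlabelled patterns carry that semantics. The similarity types and the uniform prior below are illustrative assumptions, not the paper's exact model.

```python
# Illustrative Beta/Dirichlet posterior update over discrete similarity types.
import numpy as np

class DirichletSemanticModel:
    def __init__(self, n_types, alpha=1.0):
        # Beta(alpha, alpha) prior on P(positive | similarity type) per type.
        self.pos = np.full(n_types, alpha)
        self.neg = np.full(n_types, alpha)

    def add_feedback(self, pattern_type, is_positive):
        """Incremental learning step from one user-labelled pattern example."""
        if is_positive:
            self.pos[pattern_type] += 1
        else:
            self.neg[pattern_type] += 1

    def posterior(self, pattern_types):
        """Posterior P(semantic label | similarity type) for an array of patterns."""
        p = self.pos / (self.pos + self.neg)
        return p[np.asarray(pattern_types)]

# Usage: 4 similarity types, a few positive/negative examples, then query all types.
model = DirichletSemanticModel(n_types=4)
for t, pos in [(0, True), (0, True), (1, False), (2, True), (2, False)]:
    model.add_feedback(t, pos)
print(model.posterior([0, 1, 2, 3]))  # [0.75, 0.333..., 0.5, 0.5]
```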

    Integrative IRT for documentation and interpretation of archaeological structures

    The documentation of built heritage involves tangible and intangible features. Several morphological and metric aspects of architectural structures are acquired through massive data capture systems, such as the Terrestrial Laser Scanner (TLS) and the Structure from Motion (SfM) technique. These produce models that give information about the skin of the architectural organism. Infrared Thermography (IRT) is one of the techniques used to investigate what lies beyond the external layer, and it is particularly significant in the diagnostics and conservation of built heritage. In archaeology, the integration of data acquired through different sensors improves the analysis and interpretation of findings that are incomplete or transformed. Starting from a topographic and photogrammetric survey, the procedure proposed here aims to combine the two-dimensional IRT data with the 3D point cloud. This approach helps to overcome the limited Field of View (FoV) of each IRT image and provides a three-dimensional reading of the thermal behaviour of the object. It is based on the geometric constraints of the pair of RGB-IR images coming from two different sensors mounted inside a bi-camera commercial device. Knowing the approximate distance between the two sensors, and making the necessary simplifications allowed by the low resolution of the thermal sensor, we projected the colour of the IR images onto the RGB point cloud. The procedure was applied to the so-called Nymphaeum of Egeria, an archaeological structure in the Caffarella Park (Rome, Italy), which is currently part of the Appia Antica Regional Park.
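    The projection step described above can be sketched as follows: points already expressed in the RGB camera frame are shifted by the approximately known baseline between the two sensors, projected with a pinhole model using the thermal camera intrinsics, and the thermal value at each projected pixel is attached to the corresponding point. The intrinsics, the pure-translation assumption, and the nearest-pixel sampling below are illustrative simplifications, not the exact procedure of the paper.

```python
# Hedged sketch: project 3D points into the thermal image and sample IR values.
import numpy as np

def colour_cloud_with_ir(points_rgb_frame, ir_image, K_ir, baseline=(0.02, 0.0, 0.0)):
    """points_rgb_frame: (N, 3) 3D points in the RGB camera frame (z forward).
    ir_image: (H, W) thermal image; K_ir: (3, 3) thermal camera intrinsics.
    Returns (N,) thermal values, NaN where a point falls outside the IR frame."""
    # Move points into the thermal camera frame (pure translation assumed,
    # the kind of simplification the low IR resolution permits).
    p = points_rgb_frame - np.asarray(baseline)

    # Pinhole projection into the thermal image plane.
    uvw = (K_ir @ p.T).T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    h, w = ir_image.shape
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    values = np.full(len(p), np.nan)
    values[valid] = ir_image[v[valid].astype(int), u[valid].astype(int)]
    return values
```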

    Kohonen-Based Credal Fusion of Optical and Radar Images for Land Cover Classification

    This paper presents a credal algorithm to perform land cover classification from a pair of optical and radar remote sensing images. The fusion of SAR (Synthetic Aperture Radar) and optical multispectral information is investigated in this study for making the joint classification. The approach consists of two main steps: 1) relevant feature extraction applied to each sensor in order to model the sources of information, and 2) a Kohonen map-based estimation of Basic Belief Assignments (BBAs) dedicated to heterogeneous data. This framework deals with co-registered images and is able to handle complete optical data as well as optical data affected by missing values due to the presence of clouds and shadows during observation. A pair of real SPOT-5 and RADARSAT-2 images is used in the evaluation, and the proposed experiment in a farming area shows very promising results in terms of classification accuracy and missing optical data reconstruction when some data are hidden by clouds.
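    A hedged sketch of the Kohonen-map-to-BBA step: train a small self-organising map on per-pixel features from one sensor, then turn the class frequencies of the labelled samples falling in each map unit into a basic belief assignment with mass on each class singleton plus a mass on ignorance (the whole frame) for sparsely populated units. The minimal NumPy SOM and the BBA construction rule below are illustrative, not the paper's exact formulation.

```python
# Minimal SOM training plus an illustrative per-unit BBA construction.
import numpy as np

def train_som(X, grid=(6, 6), iters=2000, lr=0.5, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))            # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2)) * lr * (1 - t / iters)
        W += h[:, None] * (x - W)                               # neighbourhood update
    return W

def unit_bbas(W, X_labelled, y, n_classes, prior_count=1.0):
    """One BBA per SOM unit: masses for each class singleton and for ignorance."""
    bmus = np.argmin(((X_labelled[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    bbas = np.zeros((len(W), n_classes + 1))                    # last column = ignorance
    for u in range(len(W)):
        counts = np.bincount(y[bmus == u], minlength=n_classes).astype(float)
        total = counts.sum() + prior_count
        bbas[u, :n_classes] = counts / total
        bbas[u, n_classes] = prior_count / total                # mass on the full frame
    return bbas
```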

    Applicability of satellite remote sensing for detection and monitoring of coal strip mining activities

    The author has identified the following significant results. The large areas covered by orbital photography allow the user to estimate the acreage of strip-mining activity from a few frames. Infrared photography, both in color and in black-and-white transparencies, was found to be best suited for this purpose.