AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming
The combination of aerial survey capabilities of Unmanned Aerial Vehicles
with targeted intervention abilities of agricultural Unmanned Ground Vehicles
can significantly improve the effectiveness of robotic systems applied to
precision agriculture. In this context, building and updating a common map of
the field is an essential but challenging task. The maps built using robots of
different types show differences in size, resolution and scale, the associated
geolocation data may be inaccurate and biased, while the repetitiveness of both
visual appearance and geometric structures found within agricultural contexts
render classical map merging techniques ineffective. In this paper we propose
AgriColMap, a novel map registration pipeline that leverages a grid-based
multimodal environment representation which includes a vegetation index map and
a Digital Surface Model. We cast the data association problem between maps
built from UAVs and UGVs as a multimodal, large displacement dense optical flow
estimation. The dominant, coherent flows, selected using a voting scheme, are
used as point-to-point correspondences to infer a preliminary non-rigid
alignment between the maps. A final refinement is then performed, by exploiting
only meaningful parts of the registered maps. We evaluate our system using real
world data for 3 fields with different crop species. The results show that our
method outperforms several state of the art map registration and matching
techniques by a large margin, and has a higher tolerance to large initial
misalignments. We release an implementation of the proposed approach along with
the acquired datasets with this paper.Comment: Published in IEEE Robotics and Automation Letters, 201
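The abstract describes selecting the dominant, coherent flows by a voting scheme and turning them into point-to-point correspondences. The following is a minimal sketch of that voting idea only, not the authors' released implementation: flow vectors are quantized into coarse bins, the most-voted bin defines the dominant motion, and only flows near it are kept as correspondences. The function name, bin size, and tolerance are illustrative assumptions.

```python
from collections import Counter

def dominant_flow_correspondences(points, flows, bin_size=0.5, tol=1.0):
    """Select flow vectors agreeing with the dominant (most-voted) flow.

    points : list of (x, y) source coordinates in one map
    flows  : list of (dx, dy) estimated flow vectors, one per point
    Returns (src, dst) correspondence lists for the coherent subset.
    """
    # Vote: quantize each flow vector into a coarse 2D bin.
    votes = Counter(
        (round(dx / bin_size), round(dy / bin_size)) for dx, dy in flows
    )
    (bx, by), _ = votes.most_common(1)[0]
    cx, cy = bx * bin_size, by * bin_size  # centre of the winning bin

    src, dst = [], []
    for (x, y), (dx, dy) in zip(points, flows):
        # Keep only flows close to the dominant, coherent motion;
        # outlier flows (e.g. from repetitive crop rows) are discarded.
        if abs(dx - cx) <= tol and abs(dy - cy) <= tol:
            src.append((x, y))
            dst.append((x + dx, y + dy))
    return src, dst
```

In the paper the surviving correspondences drive a non-rigid alignment; this sketch stops at producing the filtered correspondence set.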
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory
The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.
Overview of Environment Perception for Intelligent Vehicles
This paper presents a comprehensive literature review on environment perception
for intelligent vehicles. State-of-the-art algorithms and modeling methods for
intelligent vehicles are surveyed, with a summary of their pros and cons.
Special attention is paid to methods for lane and road detection, traffic sign
recognition, vehicle tracking, behavior analysis, and scene understanding. In
addition, we provide information about datasets, common performance analysis,
and perspectives on future research directions in this area.
A System Implementation and Evaluation of a Cooperative Fusion and Tracking Algorithm based on a Gaussian Mixture PHD Filter
This paper focuses on a real system implementation, analysis, and evaluation of a cooperative sensor fusion algorithm based on a Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter, using simulated and real vehicles equipped with automotive-grade sensors. We have extended our previously presented cooperative sensor fusion algorithm with a fusion weight optimization method and implemented it on a vehicle that we denote as the ego vehicle. The algorithm fuses information obtained from one or more vehicles located within a certain range (which we call cooperative vehicles) that run a multi-object tracking PHD filter and share their object estimates. The algorithm is evaluated on two Citroën C-ZERO prototype vehicles equipped with Mobileye cameras for object tracking and lidar sensors, from which the ground-truth positions of the tracked objects are extracted. Moreover, the algorithm is evaluated in simulation using simulated C-ZERO vehicles and simulated Mobileye cameras; in this case, the ground-truth positions of tracked objects are provided by the simulator. Multiple experimental runs are conducted in both simulated and real-world conditions, in which a few legacy vehicles were tracked. Results show that the cooperative fusion algorithm allows for extending the sensing field of view while keeping the tracking accuracy and errors similar to the case in which the vehicles act alone.
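The abstract describes fusing object estimates shared by cooperative vehicles using an (optimized) fusion weight. As a highly simplified sketch of the weighted-fusion step, the snippet below fuses two 1-D Gaussian position estimates per axis via covariance intersection with a fixed weight. The actual algorithm operates on full Gaussian mixtures inside a GM-PHD filter and optimizes the weight, neither of which this sketch does; all function names and the fixed weight are assumptions for illustration.

```python
def fuse_estimates(x1, p1, x2, p2, w=0.5):
    """Fuse two 1-D Gaussian estimates by covariance intersection.

    x1, p1 : mean and variance from the ego vehicle's tracker
    x2, p2 : mean and variance shared by a cooperative vehicle
    w      : fusion weight in (0, 1); the paper optimizes this weight,
             here it is a fixed illustrative parameter.
    """
    info = w / p1 + (1.0 - w) / p2            # fused information (1/variance)
    p = 1.0 / info
    x = p * (w * x1 / p1 + (1.0 - w) * x2 / p2)
    return x, p

def fuse_tracks(track1, track2, w=0.5):
    """Fuse (x, y) position estimates axis by axis.

    Each track is ((x, y), (var_x, var_y)).
    """
    (x1, y1), (px1, py1) = track1
    (x2, y2), (px2, py2) = track2
    fx, pfx = fuse_estimates(x1, px1, x2, px2, w)
    fy, pfy = fuse_estimates(y1, py1, y2, py2, w)
    return (fx, fy), (pfx, pfy)
```

Covariance intersection is deliberately conservative: it stays consistent even when the two estimates share unknown correlations, which is the typical situation when vehicles exchange already-filtered object estimates.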