    Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search

    Target search with unmanned aerial vehicles (UAVs) is a problem relevant to many scenarios, e.g., search and rescue (SaR). However, a key challenge is planning paths for maximal search efficiency given flight-time constraints. To address this, we propose the Obstacle-aware Adaptive Informative Path Planning (OA-IPP) algorithm for target search in cluttered environments using UAVs. Our approach leverages a layered planning strategy using a Gaussian Process (GP)-based model of target occupancy to generate informative paths in continuous 3D space. Within this framework, we introduce an adaptive replanning scheme which allows us to trade off between information gain, field coverage, sensor performance, and collision avoidance for efficient target detection. Extensive simulations show that our OA-IPP method performs better than state-of-the-art planners, and we demonstrate its application in a realistic urban SaR scenario.
    Comment: Paper accepted for the International Conference on Robotics and Automation (ICRA 2019), Montreal, Canada
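The core idea of GP-based informative planning can be sketched generically (this is not the paper's OA-IPP implementation, and the kernel, length scale, and greedy selection rule are illustrative assumptions): the planner keeps a GP over target occupancy and flies toward locations where the GP posterior variance, a proxy for information gain, is largest.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between point sets a (n,d) and b (m,d)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_variance(x_obs, x_cand, noise=1e-3, ls=1.0):
    # GP posterior variance at candidate points given observed locations
    K = rbf(x_obs, x_obs, ls) + noise * np.eye(len(x_obs))
    k = rbf(x_obs, x_cand, ls)
    return 1.0 - np.einsum('ij,ji->i', k.T, np.linalg.solve(K, k))

def next_waypoint(x_obs, candidates, ls=1.0):
    # Greedy informative step: fly to the candidate the GP is most
    # uncertain about (largest posterior variance = largest info-gain proxy)
    return candidates[np.argmax(gp_variance(x_obs, candidates, ls=ls))]
```

A full planner would trade this gain off against travel cost, coverage, and collision constraints, as the abstract describes; the sketch shows only the information-driven selection step.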

    Robotic 3D Plant Perception and Leaf Probing with Collision-Free Motion Planning for Automated Indoor Plant Phenotyping

    Various instrumentation devices for plant physiology study, such as chlorophyll fluorimeters and Raman spectrometers, require leaf probing with accurate probe positioning and orientation with respect to the leaf surface. In this work, we aimed to automate this process in a high-throughput manner with a Kinect V2 sensor, a high-precision 2D laser profilometer, and a 6-axis robotic manipulator. The relatively wide field of view and high resolution of the Kinect V2 allowed rapid capture of the full 3D environment in front of the robot. Given the number of plants, the location and size of each plant were estimated by K-means clustering. A real-time collision-free motion planning framework based on Probabilistic Roadmaps was adopted to maneuver the robotic manipulator without colliding with the plants. Each plant was scanned from the top with the short-range profilometer to obtain a high-precision point cloud, from which potential leaf clusters were extracted by region growing segmentation. Each leaf segment was further partitioned into small patches by Voxel Cloud Connectivity Segmentation. Only the small patches with low root mean square values of plane fitting were used to compute probing poses. To evaluate probing accuracy, a square surface was scanned at various angles and its centroid was probed perpendicularly, with a probing position error of 1.5 mm and a probing angle error of 0.84 degrees on average. Our growth chamber leaf probing experiment showed that the average motion planning time was 0.4 seconds and the average traveled distance of the tool center point was 1 meter.
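The patch-selection step described above can be sketched in a generic form (this is an illustrative reconstruction, not the authors' code; the RMS threshold is an assumed parameter): fit a plane to each patch by SVD and keep only near-planar patches as probing candidates.

```python
import numpy as np

def plane_fit_rms(points):
    # Fit a plane to an (n,3) patch via SVD and return the RMS
    # point-to-plane residual; small values indicate a flat patch
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]  # direction of least variance = plane normal
    return np.sqrt(np.mean((centered @ normal) ** 2))

def probe_candidates(patches, rms_max=0.5):
    # Keep only patches planar enough to compute a reliable probing pose
    return [p for p in patches if plane_fit_rms(p) < rms_max]
```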

    Probabilistic stable motion planning with stability uncertainty for articulated vehicles on challenging terrains

    © 2015, Springer Science+Business Media New York. A probabilistic stable motion planning strategy applicable to reconfigurable robots is presented in this paper. The methodology derives a novel statistical stability criterion from the cumulative distribution of a tip-over metric. The measure is dynamically updated with imprecise terrain information, localization, and robot kinematics to plan safety-constrained paths that allow the widest possible visibility of the surroundings by assuming the highest feasible vantage robot configurations. The proposed probabilistic stability metric enforces more conservative poses through areas with higher levels of uncertainty, while avoiding unnecessary caution in poses assumed at well-known terrain sections. Implementations with the well-known grid-based A* algorithm and with a sampling-based RRT planner are presented. The validity of the proposed approach is evaluated with a multi-tracked robot fitted with a manipulator arm and a range camera on two challenging elevation terrain data sets: one obtained while operating the robot in a mock-up urban search and rescue arena, and the other from a publicly available dataset of a quasi-outdoor rover testing facility.
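The statistical criterion can be sketched in simplified form (an illustrative assumption, not the paper's formulation; the Gaussian model, threshold, and confidence level are placeholders): propagate uncertainty into samples of a tip-over metric and admit a pose only if the empirical probability of stability exceeds a confidence level.

```python
import numpy as np

def stability_probability(metric_samples, threshold=0.0):
    # Empirical P(metric > threshold) from Monte Carlo samples of a
    # tip-over metric under terrain/localization uncertainty
    return float(np.mean(np.asarray(metric_samples) > threshold))

def pose_admissible(metric_mean, metric_std, threshold=0.0,
                    confidence=0.95, n=10_000, rng=None):
    # Gaussian assumption: sample the metric and require the desired
    # confidence of stability before a planner may expand this pose
    rng = rng or np.random.default_rng(0)
    samples = rng.normal(metric_mean, metric_std, n)
    return stability_probability(samples, threshold) >= confidence
```

In uncertain regions the sample spread grows, so the same mean stability margin yields a lower probability and the pose is rejected, which is exactly the conservative behavior the abstract describes.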

    Differentiable Algorithm Networks for Composable Robot Learning

    This paper introduces the Differentiable Algorithm Network (DAN), a composable architecture for robot learning systems. A DAN is composed of neural network modules, each encoding a differentiable robot algorithm and an associated model, and it is trained end-to-end from data. DAN combines the strengths of model-driven modular system design and data-driven end-to-end learning. The algorithms and models act as structural assumptions to reduce the data requirements for learning; end-to-end learning allows the modules to adapt to one another and compensate for imperfect models and algorithms, in order to achieve the best overall system performance. We illustrate the DAN methodology through a case study on a simulated robot system, which learns to navigate in complex 3-D environments with only local visual observations and an image of a partially correct 2-D floor map.
    Comment: RSS 2019 camera-ready. Video is available at https://youtu.be/4jcYlTSJF4

    Leonardo Drone Contest Autonomous Drone Competition: Overview, Results, and Lessons Learned from Politecnico di Milano Team

    In this paper, the Politecnico di Milano solutions proposed for the Leonardo Drone Contest (LDC) are presented. The Leonardo Drone Contest is an annual autonomous drone competition among universities, which has already concluded its second edition. In each edition, the participating teams were asked to design and build an autonomous multicopter capable of accomplishing complex tasks in an indoor urban-like environment. To reach this goal, the designed systems had to navigate a Global Navigation Satellite System (GNSS)-denied environment with autonomous decision making, online planning, and collision avoidance capabilities. In this light, the authors describe the first two editions of the competition, i.e., their rules and objectives, together with an overview of the proposed solutions. While the first edition is presented mainly for the experience and takeaways acquired from it, the second-edition solution is analyzed in detail, providing both the simulation and experimental results obtained.

    Motion planning in dynamic environments using context-aware human trajectory prediction

    Over the years, the separate fields of motion planning, mapping, and human trajectory prediction have advanced considerably. However, the literature is still sparse in providing practical frameworks that enable mobile manipulators to perform whole-body movements and account for the predicted motion of moving obstacles. Previous optimisation-based motion planning approaches that use distance fields have suffered from the high computational cost required to update the environment representation. We demonstrate that GPU-accelerated predicted composite distance fields significantly reduce the computation time compared to calculating distance fields from scratch. We integrate this technique with a complete motion planning and perception framework that accounts for the predicted motion of humans in dynamic environments, enabling reactive and pre-emptive motion planning that incorporates predicted motions. To achieve this, we propose and implement a novel human trajectory prediction method that combines intention recognition with trajectory optimisation-based motion planning. We validate our resultant framework on a real-world Toyota Human Support Robot (HSR) using live RGB-D sensor data from the onboard camera. In addition to providing analysis on a publicly available dataset, we release the Oxford Indoor Human Motion (Oxford-IHM) dataset and demonstrate state-of-the-art performance in human trajectory prediction. The Oxford-IHM dataset is a human trajectory prediction dataset in which people walk between regions of interest in an indoor environment. Both static and robot-mounted RGB-D cameras observe the people while they are tracked with a motion-capture system.
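The composite-distance-field idea can be sketched on a small grid (a CPU toy, not the paper's GPU implementation; the brute-force distance transform is for illustration only): each object keeps its own distance field, and the scene field is the voxel-wise minimum, so when one obstacle moves only its own field needs recomputation.

```python
import numpy as np

def object_distance_field(shape, occupied):
    # Brute-force Euclidean distance transform for one object's cells
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing='ij'), axis=-1).astype(float)
    occ = np.asarray(occupied, float)
    d = np.linalg.norm(grid[..., None, :] - occ[None, None, :, :], axis=-1)
    return d.min(axis=-1)

def composite_field(fields):
    # Composite distance field: cell-wise minimum over per-object fields,
    # so only the fields of objects that moved need to be recomputed
    return np.minimum.reduce(fields)
```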

    Task-driven active sensing framework applied to leaf probing

    © This manuscript version is made available under the CC BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
    This article presents a new method for actively exploring a 3D workspace with the aim of localizing relevant regions for a given task. Our method encodes the exploration route in a multi-layer occupancy grid map. This map, together with a multiple-view estimator and a maximum-information-gain gathering approach, incrementally provides a better understanding of the scene until the task termination criterion is reached. This approach is designed to be applicable to any task entailing 3D object exploration where some previous knowledge of its approximate shape is available. Its suitability is demonstrated here for a leaf probing task using an eye-in-hand arm configuration in the context of a phenotyping application.
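The maximum-information-gain view selection described above can be sketched generically (an illustrative simplification, not the authors' estimator; the entropy-sum objective and visibility masks are assumptions): score each candidate view by the remaining Shannon entropy of the occupancy cells it would observe, and move the sensor to the highest-scoring view.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (bits) of occupancy probabilities; cells near
    # 0 or 1 are already well known and contribute almost nothing
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_view(grid, view_cells):
    # Next-best-view: pick the view whose visible cells carry the most
    # remaining entropy, i.e. the largest expected information gain
    gains = [entropy(grid[cells]).sum() for cells in view_cells]
    return int(np.argmax(gains))
```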