
    Multi-target detection and recognition by UAVs using online POMDPs

    This paper tackles high-level decision-making techniques for robotic missions that involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time constraints. Our case study is a POMDP model of an online multi-target detection and recognition mission by an autonomous UAV. The POMDP model of the multi-target detection and recognition problem is generated online from a list of areas of interest, which are automatically extracted at the beginning of the flight from a coarse-grained high-altitude observation of the scene. The POMDP observation model relies on a statistical abstraction of the output of the image processing algorithm used to detect targets. As the POMDP problem cannot be known, and thus optimized, before the beginning of the flight, our main contribution is an "optimize-while-execute" algorithmic framework: it drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints. We present new results from real outdoor flights and SAIL simulations, which highlight both the benefits of using POMDPs in multi-target detection and recognition missions and of our "optimize-while-execute" paradigm.
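
    A minimal sketch of the "optimize-while-execute" idea described above: an anytime POMDP sub-planner keeps improving the policy in a background thread while the current action executes, using the action's duration as the optimization budget. The planner and model interfaces (improve, best_action, execute, update, duration) are assumed names for illustration, not the authors' API.

        import threading, time

        class OptimizeWhileExecute:
            def __init__(self, planner, model):
                self.planner = planner   # anytime POMDP sub-planner (assumed interface)
                self.model = model       # POMDP model with execute/update/duration

            def run(self, belief, is_goal):
                while not is_goal(belief):
                    action = self.planner.best_action(belief)
                    budget = self.model.duration(action)
                    # Optimize for likely future beliefs while the action runs,
                    # using the action's duration as the planning deadline.
                    worker = threading.Thread(
                        target=self.planner.improve,
                        args=(belief, time.time() + budget))
                    worker.start()
                    obs = self.model.execute(action)   # blocks for ~budget seconds
                    worker.join()
                    belief = self.model.update(belief, action, obs)  # Bayes update
                return belief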

    Planning for perception and perceiving for decision: POMDP-like online target detection and recognition for autonomous UAVs

    This paper studies the use of POMDP-like techniques to tackle an online multi-target detection and recognition mission by an autonomous rotorcraft UAV. Such robotics missions are complex and too large to be solved off-line, and acquiring information about the environment is as important as achieving some symbolic goals. The POMDP model deals in a single framework with both perception actions (controlling the camera's view angle) and mission actions (moving between zones and flight levels, landing) needed to achieve the goal of the mission, i.e. landing in a zone containing a car whose model is recognized as a desired target model with sufficient belief. We explain how we automatically learned the probabilistic POMDP observation model from statistical analysis of the image processing algorithm used on-board the UAV to analyze objects in the scene. We also present our "optimize-while-execute" framework, which drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints, reasoning about the future possible execution states of the robotic system. Finally, we present experimental results, which demonstrate that Artificial Intelligence techniques like POMDP planning can be successfully applied in order to automatically control perception and mission actions hand-in-hand for complex time-constrained UAV missions.
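
    A minimal sketch of learning an observation model from the statistics of an image-processing algorithm's outputs, in the spirit of the abstract above: run the on-board detector on labelled images and normalize the confusion counts into conditional probabilities P(o | s). The labels and sample data are illustrative assumptions, not the authors' data.

        from collections import Counter, defaultdict

        def learn_observation_model(samples):
            """samples: iterable of (true_target, detector_output) pairs
            collected by running the detector on labelled images."""
            counts = defaultdict(Counter)
            for truth, output in samples:
                counts[truth][output] += 1
            # Normalize counts into conditional probabilities P(o | s).
            return {truth: {o: n / sum(c.values()) for o, n in c.items()}
                    for truth, c in counts.items()}

        # Example with hypothetical labels:
        model = learn_observation_model([
            ("car_A", "car_A"), ("car_A", "car_B"), ("car_A", "car_A"),
            ("car_B", "car_B"), ("car_B", "nothing"),
        ])
        print(model["car_A"])   # {'car_A': 0.666..., 'car_B': 0.333...}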

    POMDP-based online target detection and recognition for autonomous UAVs

    This paper presents a target detection and recognition mission by an autonomous Unmanned Aerial Vehicle (UAV) modeled as a Partially Observable Markov Decision Process (POMDP). The POMDP model deals in a single framework with both perception actions (controlling the camera's view angle) and mission actions (moving between zones and flight levels, landing) needed to achieve the goal of the mission, i.e. landing in a zone containing a car whose model is recognized as a desired target model with sufficient belief. We explain how we automatically learned the probabilistic POMDP observation model from statistical analysis of the image processing algorithm used on-board the UAV to analyze objects in the scene. We also present our "optimize-while-execute" framework, which drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints, reasoning about the future possible execution states of the robotic system. Finally, we present experimental results, which demonstrate that Artificial Intelligence techniques like POMDP planning can be successfully applied in order to automatically control perception and mission actions hand-in-hand for complex time-constrained UAV missions.
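
    A minimal sketch of the "land with sufficient belief" criterion this abstract describes: a Bayes update of the belief over target models, and a landing test against a confidence threshold. The observation-model shape and the 0.9 threshold are illustrative assumptions.

        def belief_update(belief, obs, obs_model):
            """belief: dict state -> probability; obs_model[s]: dict obs -> P(obs | s)."""
            posterior = {s: p * obs_model[s].get(obs, 1e-9)
                         for s, p in belief.items()}
            z = sum(posterior.values())
            return {s: p / z for s, p in posterior.items()}

        def should_land(belief, target, threshold=0.9):
            # Land only once the belief that the zone contains the desired
            # target model exceeds the mission's confidence threshold.
            return belief.get(target, 0.0) >= threshold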

    A Decision-theoretic Approach to Detection-based Target Search with a UAV

    Search and rescue missions and surveillance require finding targets in a large area. These tasks often use unmanned aerial vehicles (UAVs) with cameras to detect and move towards a target. However, common UAV approaches make two simplifying assumptions. First, they assume that observations made from different heights are deterministically correct. In practice, observations are noisy, and the noise increases with the height at which the observations are made. Second, they assume that a motion command executes correctly, which may not happen due to wind and other environmental factors. To address these issues, we propose a sequential algorithm that determines actions in real time based on observations, using partially observable Markov decision processes (POMDPs). Our formulation handles uncertainty and errors in both observations and motion. We run offline simulations to learn a policy, which is then run on a UAV to find the target efficiently. We employ a novel compact formulation that represents the drone's coordinates relative to the target's. Our POMDP policy finds the target up to 3.4 times faster than a heuristic policy. (Published in IEEE IROS 2017; 6 pages.)
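
    A minimal sketch of the two ingredients this abstract highlights: a compact state expressed in target-relative coordinates, and an observation model whose noise grows with altitude. The field-of-view rule and the numeric parameters are illustrative assumptions, not the paper's values.

        import random
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class RelState:
            dx: int      # drone x minus target x (grid cells)
            dy: int      # drone y minus target y
            h: int       # flight height level

        def observe(state: RelState, p0=0.95, decay=0.15):
            """Noisy 'target seen' observation: higher altitude widens the
            field of view but degrades detection reliability."""
            in_view = abs(state.dx) <= state.h and abs(state.dy) <= state.h
            p_correct = max(0.5, p0 - decay * state.h)
            truthful = random.random() < p_correct
            return in_view if truthful else not in_view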

    Coordinating human-UAV teams in disaster response

    We consider a disaster response scenario where emergency responders have to complete rescue tasks in a dynamic and uncertain environment, assisted by multiple UAVs that collect information about the disaster space. To capture the uncertainty and partial observability of the domain, we model this problem as a POMDP. However, the resulting model is computationally intractable and cannot be solved by most existing POMDP solvers due to the large state and action spaces. By exploiting the problem structure, we propose a novel online planning algorithm to solve this model. Specifically, we generate plans for the responders based on Monte-Carlo simulations and compute actions for the UAVs according to the value of information. Our empirical results confirm that our algorithm significantly outperforms the state of the art in both time and solution quality.
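
    A minimal sketch of selecting UAV actions by value of information, as the abstract describes: the expected improvement in the responders' plan value after observing, minus the sensing cost. All function names (obs_probs, update, plan_value, cost) are assumed interfaces; plan_value would be the Monte-Carlo plan evaluation.

        def value_of_information(belief, action, obs_probs, update, plan_value, cost):
            """obs_probs(belief, action): dict obs -> probability;
            update: Bayes filter; plan_value: value of the best responder
            plan under a given belief."""
            expected = sum(p * plan_value(update(belief, action, obs))
                           for obs, p in obs_probs(belief, action).items())
            return expected - plan_value(belief) - cost(action)

        def best_sensing_action(belief, actions, obs_probs, update, plan_value, cost):
            # Pick the UAV action whose expected information gain best
            # justifies its cost.
            return max(actions, key=lambda a: value_of_information(
                belief, a, obs_probs, update, plan_value, cost))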

    Learning-based wildfire tracking with unmanned aerial vehicles

    This project designs a path planning algorithm for a group of unmanned aerial vehicles (UAVs) to track multiple spreading wildfire zones on a wildland. Because of the UAVs' physical limitations, the wildland is only partially observable, which makes the fire spread difficult to model. A regression neural network, trained online with real-time UAV observation data, predicts fire front positions. The UAV path planning algorithm for wildfire tracking is based on Q-learning. A cost function captures the practical factors of the tracking problem: the importance of the moving targets, the UAVs' field of view, the spreading speed of the fire zones, collision avoidance between UAVs, obstacle avoidance, and maximum information collection. To improve computational efficiency, a vertex-based fire-line feature extraction reduces the number of fire-line targets. Simulation results under various wind conditions validate the fire prediction accuracy and the UAV tracking performance.
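
    A minimal sketch of tabular Q-learning with a composite reward mirroring the cost terms listed in the abstract (target importance, field of view, collisions, obstacles, information gain). The weights and reward shape are illustrative assumptions, not the project's tuned values.

        def composite_reward(target_importance, in_view, collided,
                             hit_obstacle, info_gain,
                             w=(1.0, 0.5, 10.0, 10.0, 0.2)):
            r = w[0] * target_importance + w[1] * in_view + w[4] * info_gain
            if collided:
                r -= w[2]        # collision between UAVs
            if hit_obstacle:
                r -= w[3]        # obstacle avoidance penalty
            return r

        def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
            # Standard one-step Q-learning backup on a dict-based table.
            best_next = max(Q.get((s_next, b), 0.0) for b in actions)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((s, a), 0.0))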

    Planning for perception and perceiving for decision: POMDP-like online optimization in large complex robotics missions

    This ongoing PhD work aims at proposing a unified framework to optimize both perception and task planning using extended Partially Observable Markov Decision Processes (POMDPs). The targeted applications are large, complex aerial robotics missions where the problem is too large to be solved off-line and acquiring information about the environment is as important as achieving some symbolic goals. Challenges of this work include: (1) optimizing a dual objective in a single decision-theoretic framework, i.e. environment perception and goal achievement; (2) properly dealing with action preconditions on belief states in order to guarantee safety constraints or physical limitations, which is crucial in aerial robotics; (3) modeling the symbolic output of image processing algorithms as input of the POMDP's observation function; (4) parallel optimization and execution of POMDP policies under time constraints. A global view of each of these topics is presented, as well as some ongoing experimental results.
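
    A minimal sketch of challenge (2) above: gating actions on belief-state preconditions so that, for example, "land" is only applicable when the probability mass on safe states is high enough. The State fields, thresholds, and action names are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class State:
            zone_is_safe: bool

        def applicable(action, belief, preconditions):
            """preconditions: action -> (predicate over states, min probability)."""
            if action not in preconditions:
                return True
            pred, p_min = preconditions[action]
            return sum(p for s, p in belief.items() if pred(s)) >= p_min

        belief = {State(True): 0.97, State(False): 0.03}
        preconds = {"land": (lambda s: s.zone_is_safe, 0.95)}
        print(applicable("land", belief, preconds))   # True: P(safe) = 0.97 >= 0.95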

    Searching and tracking people with cooperative mobile robots

    Social robots should be able to search for and track people in order to help them. In this paper we present two different techniques for coordinated multi-robot teams searching for and tracking people. A probability map (belief) of the target person's location is maintained; to initialize and update it, two methods were implemented and tested: one based on a reinforcement learning algorithm and the other on a particle filter. The person is tracked when visible; otherwise the robots explore, scoring each candidate location by a balance between the belief, the distance, and whether nearby locations are being explored by other robots of the team. The approach was validated through an extensive set of simulations using up to five agents and a large number of dynamic obstacles; furthermore, over three hours of real-life experiments with two robots searching and tracking were recorded and analysed.
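
    A minimal sketch of the exploration trade-off this abstract describes: each robot scores candidate locations by target belief, travel distance, and overlap with teammates' goals, then heads for the best one. The weights and overlap radius are illustrative assumptions.

        import math

        def exploration_score(loc, belief, robot_pos, teammate_goals,
                              w_belief=1.0, w_dist=0.1, w_team=0.5, radius=2.0):
            dist = math.dist(robot_pos, loc)
            # Penalize candidates close to locations other robots are exploring.
            overlap = sum(1 for g in teammate_goals if math.dist(g, loc) < radius)
            return w_belief * belief[loc] - w_dist * dist - w_team * overlap

        def next_goal(candidates, belief, robot_pos, teammate_goals):
            return max(candidates,
                       key=lambda l: exploration_score(l, belief, robot_pos,
                                                       teammate_goals))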