
    Multi-target detection and recognition by UAVs using online POMDPs

    This paper tackles high-level decision-making techniques for robotic missions that involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time constraints. Our case study is a POMDP model of an online multi-target detection and recognition mission by an autonomous UAV. The POMDP model of the multi-target detection and recognition problem is generated online from a list of areas of interest, which are automatically extracted at the beginning of the flight from a coarse-grained high-altitude observation of the scene. The POMDP observation model relies on a statistical abstraction of the output of the image processing algorithm used to detect targets. As the POMDP problem cannot be known, and thus optimized, before the beginning of the flight, our main contribution is an "optimize-while-execute" algorithmic framework: it drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints. We present new results from real outdoor flights and SAIL simulations, which highlight both the benefits of using POMDPs in multi-target detection and recognition missions and of our "optimize-while-execute" paradigm.
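The Bayesian belief update at the heart of such a POMDP observation model can be sketched as follows. The two-state target model and all probabilities below are illustrative assumptions, not the statistical model learned in the paper:

```python
import numpy as np

# Toy discrete belief update (Bayes filter) showing how a learned
# observation model feeds a POMDP planner. The two hidden states and
# the numbers below are illustrative placeholders.

def belief_update(belief, T, Z, action, obs):
    """b'(s') is proportional to Z[s', obs] * sum_s T[a][s', s] * b(s)."""
    predicted = T[action] @ belief        # prediction through the dynamics
    updated = Z[:, obs] * predicted       # correction by observation likelihood
    return updated / updated.sum()        # renormalize to a probability vector

# Hidden states: 0 = target present, 1 = target absent
T = {"observe": np.eye(2)}                # a sensing action does not change the state
# P(observation | state): column 0 = "detection", column 1 = "no detection"
Z = np.array([[0.8, 0.2],
              [0.1, 0.9]])

b = np.array([0.5, 0.5])
b = belief_update(b, T, Z, "observe", obs=0)   # camera reports a detection
# b[0] is now about 0.89: one detection sharply raises the belief that a target is present
```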

    Planning for perception and perceiving for decision: POMDP-like online target detection and recognition for autonomous UAVs

    This paper studies the use of POMDP-like techniques to tackle an online multi-target detection and recognition mission by an autonomous rotorcraft UAV. Such robotic missions are complex and too large to be solved off-line, and acquiring information about the environment is as important as achieving symbolic goals. The POMDP model deals in a single framework with both perception actions (controlling the camera's view angle) and mission actions (moving between zones and flight levels, landing) needed to achieve the goal of the mission, i.e. landing in a zone containing a car whose model is recognized as a desired target model with sufficient belief. We explain how we automatically learned the probabilistic POMDP observation model from a statistical analysis of the image processing algorithm used on board the UAV to analyze objects in the scene. We also present our "optimize-while-execute" framework, which drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints, reasoning about the future possible execution states of the robotic system. Finally, we present experimental results, which demonstrate that Artificial Intelligence techniques like POMDP planning can be successfully applied to control perception and mission actions hand-in-hand for complex time-constrained UAV missions.
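The "optimize-while-execute" idea, an anytime sub-planner improving the policy while the current action is still running, can be sketched as follows. The stand-in planner (a dummy contraction) and the timings are illustrative assumptions, not the paper's solver:

```python
import time

# Sketch of "optimize-while-execute": while each action executes for
# `action_duration` seconds, an anytime planner keeps refining its
# solution until the action's completion deadline.

def anytime_improve(value, deadline):
    """Stand-in anytime planner: refine an estimate until the deadline."""
    iters = 0
    while time.monotonic() < deadline:
        value = 0.5 * (value + 1.0)   # dummy contraction toward the fixed point 1.0
        iters += 1
    return value, iters

def run_mission(n_actions, action_duration):
    value = 0.0
    for _ in range(n_actions):
        deadline = time.monotonic() + action_duration  # action finishes here
        value, _ = anytime_improve(value, deadline)    # plan during execution
    return value

v = run_mission(n_actions=3, action_duration=0.01)
# v has converged close to the fixed point: planning time was "free",
# hidden inside the actions' execution windows
```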

    Guidance of Autonomous Amphibious Vehicles for Flood Rescue Support

    We develop a path-planning algorithm to guide autonomous amphibious vehicles (AAVs) for flood rescue support missions. Specifically, we develop an algorithm to control multiple AAVs to reach and rescue multiple victims (also called targets) in a 2D flood scenario, where the flood water flows across the scene and the targets move (drifted by the flood water) along the flood stream. A target is said to be rescued if an AAV lies within a circular region of a certain radius around the target. The goal is to control the AAVs such that each target gets rescued while optimizing a certain performance objective. The algorithm design is based on the theory of partially observable Markov decision processes (POMDPs). In practice, POMDP problems are hard to solve exactly, so we use an approximation method called nominal belief-state optimization (NBO). We compare the performance of the NBO approach with that of a greedy approach.
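A minimal sketch of the NBO idea: future random beliefs are replaced by their nominal (zero-noise) trajectory, so the POMDP lookahead becomes a deterministic optimization over action sequences. The 1-D pursuit scenario and all numbers are illustrative assumptions, not the paper's model:

```python
from itertools import product

# Nominal belief-state optimization (NBO) sketch: evaluate each action
# sequence over a short horizon against the nominal (mean) target
# trajectory, and pick the sequence with the lowest cumulative error.

def nbo_plan(aav_pos, target_mean, target_drift, horizon, actions):
    """Return the action sequence minimizing tracking error along the
    nominal belief trajectory, plus its cost."""
    best_seq, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):
        pos, mean, cost = aav_pos, target_mean, 0.0
        for a in seq:
            pos += a                      # AAV moves by the chosen step
            mean += target_drift          # nominal (mean) target motion, noise ignored
            cost += abs(pos - mean)       # tracking error under the nominal belief
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

# Target drifts right at 1 unit/step; AAV can step -2, 0, or +2.
seq, cost = nbo_plan(aav_pos=0.0, target_mean=4.0, target_drift=1.0,
                     horizon=3, actions=(-2.0, 0.0, 2.0))
# best plan: move right at full speed every step
```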

    Optimal control approaches for consensus and path planning in multi-agent systems

    Optimal control is one of the most powerful and advantageous topics in control engineering. The two challenges in every optimal control problem are defining a proper cost function and finding the best method to minimize it. In this study, innovative optimal control approaches are developed to solve two problems in multi-agent systems (MASs): consensus and path planning. The consensus problem for general Linear Time-Invariant (LTI) systems is solved by an inverse optimal control approach, which first derives a control law from stability and optimality conditions and then defines the cost function according to the derived control. This method, in which the cost function is not specified a priori as in conventional optimal control design, has the benefit that the resulting control law is guaranteed to be both stabilizing and optimal. Three new theorems in the related linear algebra are developed so that the algorithm applies to all general LTI systems. The designed optimal control is distributed and needs only local neighbor-to-neighbor information, based on the communication topology, to make the agents achieve consensus and track a desired trajectory. The path planning problem is solved for a group of Unmanned Aerial Vehicles (UAVs) assigned to track fire fronts in a wildfire management process. A Partially Observable Markov Decision Process (POMDP) is used to minimize a cost function defined according to the tracking error. The challenge is designing the algorithm such that (1) the UAVs are able to decide autonomously which fire front to track, and (2) they are able to track fire fronts that evolve over time in random directions. By defining proper models, the designed algorithms provide real-time calculation of the control variables, which enables the UAVs to track the fronts and find their way autonomously. Furthermore, by implementing the Nominal Belief-state Optimization (NBO) method, the dynamic constraints of the UAVs are considered, and challenges such as collision avoidance are addressed entirely in the context of the POMDP.
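The distributed neighbor-to-neighbor consensus behavior described above can be sketched for simple single-integrator agents. The dynamics, gain, and topology here are simplified assumptions for illustration, not the inverse optimal design of the study:

```python
import numpy as np

# Distributed consensus sketch for single-integrator agents: each agent
# moves toward its neighbors, u_i = -k * sum_j (x_i - x_j), i.e. the
# discretized update x_{t+1} = x_t - dt * k * L x_t with graph Laplacian L.

def consensus_step(x, L, k, dt):
    return x - dt * k * (L @ x)

# Path graph over 3 agents: 0 -- 1 -- 2 (only neighbor-to-neighbor links)
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

x = np.array([0.0, 3.0, 9.0])   # initial agent states
for _ in range(200):
    x = consensus_step(x, L, k=1.0, dt=0.1)
# all agents converge to the average of the initial states (4.0)
```

Note that each agent's update uses only the states of its direct neighbors in the communication graph, which is what makes the law distributed.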

    Learning-based wildfire tracking with unmanned aerial vehicles

    This project designs a path-planning algorithm for a group of unmanned aerial vehicles (UAVs) to track multiple spreading wildfire zones on a wildland. Due to the physical limitations of UAVs, the wildland is only partially observable, so the fire spread is difficult to model. An online-trained regression neural network using real-time UAV observation data is implemented to predict fire front positions. The path-planning algorithm for wildfire tracking with UAVs is derived via Q-learning. Various practical factors are captured by designing an appropriate cost function that describes the tracking problem, such as the importance of the moving targets, the field of view of the UAVs, the spreading speed of the fire zones, collision avoidance between UAVs, obstacle avoidance, and maximum information collection. To improve computational efficiency, a vertices-based fire-line feature extraction is used to reduce the number of fire-line targets. Simulation results under various wind conditions validate the fire prediction accuracy and the UAV tracking performance.
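A tabular Q-learning loop in the spirit of this tracking problem can be sketched as follows. The 1-D grid, reward shaping, and hyperparameters are illustrative assumptions, not the project's cost function or neural-network setup:

```python
import random

# Toy tabular Q-learning: a 1-D "UAV" on cells 0..4 learns to move
# toward a fixed fire-front cell. Reward +1 for being on the front,
# -0.1 per step otherwise; epsilon-greedy exploration.

random.seed(0)
N, FIRE = 5, 4                       # grid cells 0..4, fire front at cell 4
ACTIONS = (-1, 1)                    # move left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(500):                 # episodes with random starting cells
    s = random.randrange(N)
    for _ in range(20):
        if random.random() < eps:
            a = random.choice(ACTIONS)                       # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])    # exploit
        s2 = min(max(s + a, 0), N - 1)                       # clamp to the grid
        r = 1.0 if s2 == FIRE else -0.1
        # standard Q-learning update toward the bootstrapped target
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N)}
# the learned greedy policy moves right (toward the fire front) from every cell
```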