
    Resilience Model for Teams of Autonomous Unmanned Aerial Vehicles (UAV) Executing Surveillance Missions

    Teams of low-cost Unmanned Aerial Vehicles (UAVs) have gained acceptance as an alternative for cooperatively searching and surveilling terrain. These UAVs are assembled from low-reliability components, so unit failures are possible. Losing UAVs to failures decreases the team's coverage efficiency and impacts communication, given that UAVs also act as communication nodes. Such is the case in a Flying Ad Hoc Network (FANET), where the failure of a communication node may isolate segments of the network covering several nodes. The main goal of this study is to develop a resilience model for analyzing the effects of individual UAV failures on the team's performance in order to improve the team's resilience. The proposed solution models and simulates the UAV team using Agent-Based Modeling and Simulation. UAVs are modeled as autonomous agents, and the searched terrain as a two-dimensional M x N grid. Communication between agents provides exact, real-time data on the transit and occupation of all cells, allowing the UAV agents to estimate the best moves within the grid and to know the exact number of visits all agents have made to each cell. Each UAV is simulated as a hobbyist fixed-wing airplane equipped with a generic set of actuators and a generic controller. Individual UAV failures are simulated following reliability Fault Trees; each affected UAV is disabled and removed from the pool of active units. After each unit failure, the system generates a new topology and produces a set of minimum-distance trees for each node (UAV) in the grid. The new trees depict the rearranged links required after a node failure or after changes in the topology due to node movement. The model generates parameters such as the number and location of compromised nodes, performance before and after the failure, and the estimated time of restitution needed to model the team's resilience. The study addresses three research goals: identifying appropriate tools for modeling UAV scenarios, developing a model for assessing UAV team resilience that overcomes previous studies' limitations, and testing the model through multiple simulations. The study fills a gap in the literature, as previous studies focus on system communication disruptions (i.e., node failures) without considering UAV unit reliability. This consideration becomes critical as the use of small, low-cost units prone to failure becomes widespread.
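    As a rough illustration of the topology-rebuilding step described above, the sketch below removes a failed UAV from a FANET communication graph and recomputes a minimum-distance (BFS shortest-path) tree rooted at every surviving node, which also exposes any isolated network segments. This is a minimal sketch under our own assumptions; the function names and the example graph are illustrative, not the study's implementation.

```python
from collections import deque

def shortest_path_tree(adj, root):
    """BFS from `root` over an adjacency dict {node: set(neighbours)};
    returns {node: parent} for every node reachable from root."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def rebuild_after_failure(adj, failed):
    """Drop a failed UAV from the graph and recompute one
    minimum-distance tree per surviving node."""
    adj = {u: {v for v in nbrs if v != failed}
           for u, nbrs in adj.items() if u != failed}
    trees = {u: shortest_path_tree(adj, u) for u in adj}
    isolated = {u: set(adj) - set(tree) for u, tree in trees.items()}
    return trees, isolated  # isolated[u]: nodes u can no longer reach

# Example (assumed): UAV 2 relays traffic between {0, 1} and {3};
# losing it splits the network.
fanet = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
trees, isolated = rebuild_after_failure(fanet, failed=2)
print(isolated[0])  # -> {3}
```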

    Managing distributed situation awareness in a team of agents

    The research presented in this thesis investigates the best ways to manage Distributed Situation Awareness (DSA) for a team of agents tasked to conduct search activity with limited resources (battery life, memory use, computational power, etc.). In the first part of the thesis, an algorithm to coordinate agents (e.g., UAVs) is developed. It is based on Delaunay triangulation, with the aim of supporting efficient, adaptable, scalable, and predictable search. Results from simulation and physical experiments with UAVs show good performance of the developed method in terms of resource utilisation, adaptability, scalability, and predictability in comparison with existing fixed-pattern, pseudorandom, and hybrid methods. The second part of the thesis employs Bayesian Belief Networks (BBNs) to define and manage DSA based on the information obtained from the agents' search activity. Algorithms and methods were developed to describe how agents update the BBN to model the system's DSA, predict plausible future states of the agents' search area, handle uncertainties, manage agents' beliefs (based on sensor differences), monitor agents' interactions, and maintain an adaptable BBN for DSA management using structural learning. The evaluation uses environment situation information obtained from agents' sensors during search activity, and the results show superior performance over well-known alternative methods in terms of situation prediction accuracy, uncertainty handling, and adaptability. The thesis's main contributions are therefore (i) the development of a simple search planning algorithm that combines the strengths of fixed-pattern and pseudorandom methods and offers good resource utilisation, scalability, adaptability, and predictability; (ii) a formal model of DSA using a BBN that can be updated and learnt during the mission; and (iii) an investigation of the relationship between agents' search coordination and DSA management.
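    A minimal sketch of how a Delaunay-triangulation-based search plan might look, assuming triangle centroids serve as candidate waypoints that are handed to the nearest agent. The function name and the assignment rule are our assumptions for illustration, not the coordination algorithm developed in the thesis.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_waypoints(area_samples, agent_positions):
    """Triangulate sample points of the search area and assign each
    triangle centroid (a candidate waypoint) to the nearest agent."""
    tri = Delaunay(area_samples)                          # triangulation of the area samples
    centroids = area_samples[tri.simplices].mean(axis=1)  # one waypoint per triangle
    assignment = {i: [] for i in range(len(agent_positions))}
    for c in centroids:
        nearest = int(np.argmin(np.linalg.norm(agent_positions - c, axis=1)))
        assignment[nearest].append(tuple(c))
    return assignment

# Example (assumed): a 10 x 10 m area sampled on a coarse grid, searched by two UAVs.
xs, ys = np.meshgrid(np.linspace(0, 10, 4), np.linspace(0, 10, 4))
area = np.column_stack([xs.ravel(), ys.ravel()])
uavs = np.array([[0.0, 0.0], [10.0, 10.0]])
for uav, wps in delaunay_waypoints(area, uavs).items():
    print(f"UAV {uav}: {len(wps)} waypoints")
```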

    Assured Autonomy in Multiagent Systems with Safe Learning

    The study of autonomous multiagent systems is currently receiving increasing attention in the robotics, control systems, and machine learning (ML) and artificial intelligence (AI) communities. It is evident today how autonomous robots and vehicles can help shape our future. Teams of robots are being used, for instance, to help identify and rescue survivors after a natural disaster; there, minutes and seconds can decide whether a person's life is saved. This example portrays not only the value of safety but also the significance of time in planning complex missions with autonomous agents. This thesis aims to develop a generic, composable framework for a multiagent system (of robots or vehicles) that can safely carry out time-critical missions in a distributed and autonomous fashion. The goal is to provide formal guarantees on both safety and finite-time mission completion in real time, and thus to answer the question: “how trustworthy is the autonomy of a multi-robot system in a complex mission?” We refer to this notion of autonomy in multiagent systems as assured or trusted autonomy, which is currently a much sought-after area of research thanks to its applications in, for instance, autonomous driving. There are two interconnected components of this thesis. In the first part, using tools from control theory (optimal control), formal methods (temporal logic and hybrid automata), and optimization (mixed-integer programming), we propose multiple variants of (almost) real-time planning algorithms, which provide formal guarantees on safety and finite-time mission completion for a multiagent system in a complex mission. Our proposed framework is hybrid, distributed, and inherently composable, as it uses a divide-and-conquer approach for planning a complex mission by breaking it down into several sub-tasks. This approach enables us to implement the resulting algorithms on robots with limited computational power while still achieving close to real-time performance. We validate the efficacy of our methods on multiple use cases such as autonomous search and rescue with a team of unmanned aerial vehicles (UAVs) and ground robots, autonomous aerial grasping and navigation, UAV-based surveillance, and UAV-based inspection tasks in industrial environments. In the second part, our goal is to translate and adapt these algorithms to safely learn actions and policies for robots in dynamic environments, so that they can accomplish their mission even in the presence of uncertainty. To accomplish this goal, we introduce the ideas of self-monitoring and self-correction for agents using hybrid automata theory and model predictive control (MPC). Self-monitoring and self-correction refer to the problems in autonomy where the autonomous agents monitor their performance, detect deviations from normal or expected behavior, and learn to adjust both the description of their mission/task and their performance online, in order to maintain the expected behavior and performance. In this setting, we propose a formal and composable notion of safety and adaptation for autonomous multiagent systems, which we refer to as safe learning. We revisit one of the earlier use cases to demonstrate the capabilities of our approach for a team of autonomous UAVs in a surveillance and search and rescue mission scenario.
Although this thesis portrays results mainly for UAVs, we argue that the proposed planning framework is transferable to any team of autonomous agents under some realistic assumptions. We hope that this research will serve several modern applications of public interest, such as autopilots and flight controllers, autonomous driving systems (ADS), and autonomous UAV missions such as aerial grasping and package delivery with drones, by improving upon the existing safety of their autonomous operation.
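    A toy sketch of the divide-and-conquer planning idea: the mission is split into sub-tasks with deadlines, each sub-task is assigned greedily to the agent that can reach it soonest, and the planner reports whether the finite-time (deadline) requirement is met for every sub-task. This is an illustrative simplification under assumed names and numbers, not the thesis's mixed-integer or temporal-logic-based planner.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    name: str
    location: tuple   # (x, y) target of the sub-task (assumed 2-D world)
    deadline: float   # seconds allowed for completion

def plan_mission(agents, subtasks, speed=2.0):
    """agents: {name: (x, y)}; returns (assignment, certified), where
    certified is True only if every sub-task meets its deadline."""
    assignment, certified = {}, True
    positions = dict(agents)
    for task in subtasks:
        # travel time from each agent's current position to the task
        costs = {a: ((p[0] - task.location[0]) ** 2 +
                     (p[1] - task.location[1]) ** 2) ** 0.5 / speed
                 for a, p in positions.items()}
        best = min(costs, key=costs.get)
        ok = costs[best] <= task.deadline
        certified &= ok
        assignment[task.name] = (best, round(costs[best], 2), ok)
        positions[best] = task.location   # the agent continues from the task site
    return assignment, certified

# Hypothetical mission with one UAV and one ground robot.
agents = {"uav1": (0.0, 0.0), "ugv1": (10.0, 0.0)}
mission = [SubTask("search_A", (2.0, 2.0), 5.0),
           SubTask("inspect_B", (9.0, 1.0), 3.0)]
print(plan_mission(agents, mission))
```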

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts the coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed that uses multi-model adaptive filters to estimate the other players' strategies. The proposed algorithm can be used as a coordination mechanism between players when they must make decisions under uncertainty. Each player chooses an action after taking into account both the actions of the other players and the uncertainty. Uncertainty can arise either from noisy observations or from the various possible types of the other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori: various parameter values can be used initially as inputs to different models, so the resulting decisions are aggregate results over all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
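    A hedged sketch of the core idea: fictitious play in which the opponent's mixed strategy is estimated by a small bank of models with different learning rates, re-weighted by how well each model predicted the observed actions. The bank of exponential smoothers below stands in for the multi-model adaptive filters of the paper; the payoff matrix, parameters, and opponent model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
payoff = np.array([[1.0, 0.0],      # row player's payoff matrix (assumed):
                   [0.0, 1.0]])     # a coordination game, reward for matching the opponent

rates = [0.05, 0.2, 0.5]                       # one model per learning rate
estimates = [np.full(2, 0.5) for _ in rates]   # each model's estimate of the opponent strategy
weights = np.full(len(rates), 1.0 / len(rates))

for t in range(200):
    # aggregate belief = weighted average over the model bank
    belief = np.average(estimates, axis=0, weights=weights)
    my_action = int(np.argmax(payoff @ belief))          # best response to the belief
    opp_action = rng.choice(2, p=[0.3, 0.7])             # unknown opponent strategy (assumed)

    # re-weight models by the likelihood each assigned to the observed action
    likelihoods = np.array([est[opp_action] for est in estimates])
    weights = weights * likelihoods
    weights /= weights.sum()

    # each model updates its own estimate at its own rate
    obs = np.eye(2)[opp_action]
    estimates = [(1 - r) * est + r * obs for r, est in zip(rates, estimates)]

print("belief about opponent:", np.round(belief, 2), "my action:", my_action)
```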

    Mission programming for flying ensembles: combining planning with self-organization

    The application of autonomous mobile robots can improve many situations of our daily lives. Robots can enhance working conditions, provide innovative techniques for different research disciplines, and support rescue forces in an emergency. In particular, flying robots have already shown their potential in many use cases when cooperating in ensembles. Exploiting this potential requires sophisticated measures for the goal-oriented, application-specific programming of flying ensembles and the coordinated execution of the programs so defined. Because different goals require different robots providing different capabilities, several software approaches have emerged recently that focus on specifically designed robots. These approaches often incorporate autonomous planning, scheduling, optimization, and reasoning attributable to classic artificial intelligence. This allows for the goal-oriented instruction of ensembles, but it also leads to inefficiencies if ensembles grow large or face uncertainty in the environment. By leaving detailed execution planning to individuals and forgoing optimality and strict goal-orientation, the self-organization paradigm can compensate for these drawbacks with scalability and robustness. In this thesis, we combine the advantageous properties of autonomous planning with those of self-organization in an approach to Mission Programming for Flying Ensembles. Furthermore, we move beyond the current way of thinking about how mobile robots should be designed: rather than assuming fixed-design robots, we assume that robots are modifiable in terms of their hardware at run-time. While using such robots enables their application in many different use cases, it also requires new software approaches for dealing with this flexible design. The contributions of this thesis are thus threefold. First, we provide a layered reference architecture for physically reconfigurable robot ensembles. Second, we provide a solution for programming missions for ensembles of such robots in a goal-oriented fashion, with measures for instructing individual robots or entire ensembles as desired in the specific use case. Third, we provide multiple self-organization mechanisms to deal with the system's flexible design while executing such missions. Combining different self-organization mechanisms ensures that ensembles satisfy the static requirements of missions, and additional self-organization mechanisms coordinate the execution in ensembles to ensure they meet the dynamic requirements of a mission. Furthermore, we provide a solution for integrating goal-oriented swarm behavior into missions using a general pattern we have identified for trajectory-modification-based swarm behavior. Using that pattern, we can modify, quantify, and further process the emergent effect of varying swarm behavior in a mission by changing only the parameters of its implementation. We evaluate the results theoretically and practically in different case studies by deploying our techniques on simulated and real hardware.
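    A hedged illustration of the trajectory-modification pattern mentioned above: a mission-level waypoint is post-processed by a parameterised swarm rule (here a simple separation term), so that changing only the parameters changes the emergent behaviour. The function, parameters, and values are ours for illustration, not the thesis's pattern implementation.

```python
import numpy as np

def modify_waypoint(own_pos, planned_wp, neighbour_pos,
                    separation_gain=0.5, min_dist=2.0):
    """Push the planned waypoint away from neighbours that are too close."""
    offset = np.zeros(2)
    for n in neighbour_pos:
        diff = own_pos - n
        dist = np.linalg.norm(diff)
        if 0.0 < dist < min_dist:
            offset += separation_gain * (min_dist - dist) * diff / dist
    return planned_wp + offset

own = np.array([0.0, 0.0])
planned = np.array([5.0, 0.0])
neighbours = [np.array([0.5, 0.5])]
# The same mission-level waypoint, two parameterisations, two emergent effects:
print(modify_waypoint(own, planned, neighbours, separation_gain=0.2))
print(modify_waypoint(own, planned, neighbours, separation_gain=1.0))
```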

    The Autonomous Attack Aviation Problem

    An autonomous unmanned combat aerial vehicle (AUCAV) performing an air-to-ground attack mission must make sequential targeting and routing decisions under uncertainty. We formulate a Markov decision process model of this autonomous attack aviation problem (A3P) and solve it using an approximate dynamic programming (ADP) approach. We develop an approximate policy iteration algorithm that implements a least squares temporal difference learning mechanism to solve the A3P. Basis functions are developed and tested for application within the ADP algorithm. The ADP policy is compared to a benchmark policy, the DROP policy, which is determined by repeatedly solving a deterministic orienteering problem as the system evolves. Designed computational experiments over eight problem instances are conducted to compare the two policies with respect to solution quality, computational efficiency, and robustness. The ADP policy is superior in 2 of 8 problem instances (those with less AUCAV fuel and a low target arrival rate), whereas the DROP policy is superior in the other 6. The ADP policy outperforms the DROP policy with respect to computational efficiency in all problem instances.
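    A minimal LSTD(0) sketch of the kind of least-squares temporal difference fitting used inside approximate policy iteration: given transitions sampled under a fixed policy, fit weights w so that phi(s)^T w approximates the value function. The polynomial basis and the toy Markov chain are illustrative assumptions, not the A3P model or the paper's basis functions.

```python
import numpy as np

def lstd(transitions, phi, gamma=0.95, ridge=1e-6):
    """LSTD(0): solve A w = b with A = sum phi(s)(phi(s) - gamma phi(s'))^T,
    b = sum phi(s) r, over sampled (s, r, s_next) transitions."""
    k = len(phi(transitions[0][0]))
    A = ridge * np.eye(k)           # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += f * r
    return np.linalg.solve(A, b)

# Toy 5-state chain (assumed): reward 1 only when reaching the last state.
phi = lambda s: np.array([1.0, s, s * s])   # polynomial basis (assumed)
rng = np.random.default_rng(1)
transitions, s = [], 0
for _ in range(500):
    s_next = min(s + int(rng.integers(0, 2)), 4)
    transitions.append((s, 1.0 if s_next == 4 else 0.0, s_next))
    s = 0 if s_next == 4 else s_next
w = lstd(transitions, phi)
print("V(s) estimates:", [round(float(phi(s) @ w), 2) for s in range(5)])
```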

    A Multi-Objective Mission Planning Method for AUV Target Search

    The focus of this paper is how an autonomous underwater vehicle (AUV) can perform fully automated task allocation and achieve satisfactory mission planning while searching for potential threats deployed in an underwater space. First, the task assignment problem is defined as a traveling salesman problem (TSP) with specific and distinct starting and ending points. Two competing and non-commensurable optimization goals are taken into account: the total sailing distance and the total turning angle generated by the AUV as it traverses all threat points in the planned order. The maneuverability limitations of an AUV, namely the minimum turning radius and speed, are also introduced as constraints. Then, an improved ant colony optimization (ACO) algorithm based on fuzzy logic and a dynamic pheromone volatilization rule is developed to solve the TSP. With the help of the fuzzy set, the ants that have moved along better paths are screened and the pheromone update is performed only on the preferred paths, so as to enhance pathfinding guidance in the early stage of the ACO algorithm. Under the dynamic pheromone volatilization rule, pheromone on the preferred paths becomes more volatile as the number of iterations of the ACO algorithm increases, giving the algorithm an effective way to escape from local minima in the later stage. Finally, comparative simulations are presented to illustrate the effectiveness and advantages of the proposed algorithm, and the influence of critical parameters is analyzed and demonstrated.
    Funding: National Natural Science Foundation of China (NSFC) 52101347; Foundations for young scientists' cultivation 7900000.
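    A compact ACO sketch showing the two ingredients highlighted above: only the best ants of each iteration deposit pheromone (a crude stand-in for the fuzzy screening of better paths), and the evaporation rate grows with the iteration count (applied globally here for simplicity). The random waypoints, parameters, and the open tour with a single fixed start are assumptions that simplify the paper's distinct start/end, multi-objective formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(8, 2))                  # hypothetical threat points
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

n, n_ants, n_iter = len(pts), 12, 60
tau = np.ones((n, n))                                   # pheromone matrix
alpha, beta = 1.0, 2.0

def tour_length(tour):
    return sum(dist[tour[i], tour[i + 1]] for i in range(len(tour) - 1))

best_tour, best_len = None, np.inf
for it in range(n_iter):
    tours = []
    for _ in range(n_ants):
        tour, unvisited = [0], set(range(1, n))         # fixed starting point
        while unvisited:
            cur, cand = tour[-1], list(unvisited)
            w = (tau[cur, cand] ** alpha) * ((1.0 / dist[cur, cand]) ** beta)
            tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
            unvisited.remove(tour[-1])
        tours.append((tour_length(tour), tour))

    rho = 0.1 + 0.5 * it / n_iter                       # evaporation grows over iterations
    tau *= (1.0 - rho)
    ranked = sorted(tours)
    for length, tour in ranked[: n_ants // 4]:          # only the screened best ants deposit
        for i in range(len(tour) - 1):
            tau[tour[i], tour[i + 1]] += 1.0 / length
            tau[tour[i + 1], tour[i]] += 1.0 / length
    if ranked[0][0] < best_len:
        best_len, best_tour = ranked[0]

print("best tour:", best_tour, "length:", round(best_len, 1))
```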