9 research outputs found

    Cooperative Pursuit with Multi-Pursuer and One Faster Free-moving Evader

    This paper addresses a multi-pursuer, single-evader pursuit-evasion game in which a free-moving evader is faster than the pursuers. Most existing works impose constraints on the faster evader, such as a limited moving area or moving direction. When the faster evader is allowed to move freely without any constraint, the main issues are how to form an encirclement that traps the evader in the capture domain, how to balance forming an encirclement against approaching the faster evader, and what conditions make capture possible. In this paper, a distributed pursuit algorithm is proposed that enables the pursuers to form an encirclement and approach the faster evader, together with an algorithm that balances between these two objectives. Moreover, sufficient capture conditions are derived based on the initial spatial distribution and the speed ratios of the pursuers and the evader. Simulation and experimental results on ground robots validate the effectiveness and practicability of the proposed method.
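    The paper's distributed pursuit law is not reproduced in this listing; the sketch below is only a generic illustration of the balancing idea described above, blending an encirclement term (steering each pursuer toward an assigned slot on a ring around the evader) with an approach term. The blend weight, ring radius, and slot assignment are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch only: a naive per-pursuer heading that blends an
# "encirclement" term (spreading pursuers to evenly spaced slots around
# the evader) with an "approach" term (closing the distance). This is NOT
# the paper's algorithm; `alpha`, the ring radius, and the slot assignment
# are assumptions made for illustration.

def pursuer_velocities(pursuers, evader, speed, alpha=0.5, ring_radius=1.0):
    """Return one velocity command per pursuer (each with norm <= speed)."""
    n = len(pursuers)
    commands = []
    for i, p in enumerate(pursuers):
        # Approach term: unit vector toward the evader.
        to_evader = evader - p
        approach = to_evader / (np.linalg.norm(to_evader) + 1e-9)
        # Encirclement term: head toward an assigned slot on a ring
        # around the evader (slots evenly spaced by pursuer index).
        slot = evader + ring_radius * np.array([np.cos(2 * np.pi * i / n),
                                                np.sin(2 * np.pi * i / n)])
        to_slot = slot - p
        encircle = to_slot / (np.linalg.norm(to_slot) + 1e-9)
        # Blend the two objectives and rescale to the pursuer speed.
        direction = alpha * encircle + (1 - alpha) * approach
        direction /= (np.linalg.norm(direction) + 1e-9)
        commands.append(speed * direction)
    return commands

# Example: three pursuers around a faster evader at the origin.
pursuers = [np.array([3.0, 0.0]), np.array([-2.0, 2.0]), np.array([-2.0, -2.0])]
print(pursuer_velocities(pursuers, np.array([0.0, 0.0]), speed=0.8))
```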

    Optimal Strategy Imitation Learning from Differential Games

    The ability of a vehicle to navigate safely through any environment relies on its driver having an accurate sense of the future positions and goals of other vehicles on the road. A driver does not navigate around where an agent is, but where it is going to be. To avoid collisions, autonomous vehicles should be equipped with the ability to derive appropriate controls using future estimates for other vehicles, pedestrians, or otherwise intentionally moving agents, in a manner similar to or better than human drivers. Differential game theory provides one approach to generating a control strategy by modeling two players with opposing goals. Environments faced by autonomous vehicles, such as merging onto a freeway, are complex, but they can be modeled and solved as differential games using discrete approximations; these games yield an optimal control policy for both players and can be used to model adversarial driving scenarios rather than average ones, so that autonomous vehicles will be safer on the road in more situations. Further, discrete approximations of solutions to complex games that are computationally tractable and provably asymptotically optimal have been developed, but they may not produce usable results in an online fashion. To retrieve an efficient, continuous control policy, we use deep imitation learning to model the discrete approximation of a differential game solution. We successfully learn the policies generated for two games of different complexity, a fence-escape game and a merging game, and show that the imitated policy generates control inputs faster than the differential-game-generated policy.
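    The paper's networks, games, and training data are not described in this listing; the sketch below only illustrates the general behavior-cloning idea under stated assumptions: a small network is regressed onto (state, control) pairs that, in the paper's setting, would come from the discrete differential game solver. Here random placeholder data and arbitrary dimensions stand in for those pairs.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: behavior cloning of a (state -> control) policy.
# In the paper's setting the targets would come from a discrete differential
# game solver; here random placeholder data stands in for those pairs, and
# the state/control dimensions (4 and 2) are arbitrary assumptions.

STATE_DIM, CONTROL_DIM = 4, 2

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, CONTROL_DIM),
)

# Placeholder "expert" dataset; replace with solver-generated trajectories.
states = torch.randn(1024, STATE_DIM)
expert_controls = torch.randn(1024, CONTROL_DIM)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), expert_controls)  # imitation (regression) loss
    loss.backward()
    optimizer.step()

# After training, the network gives a fast, continuous approximation of the
# solver's policy: one forward pass instead of re-solving the game online.
print(policy(torch.randn(1, STATE_DIM)))
```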

    Heuristic artificial bee colony algorithm for solving the Homicidal Chauffeur differential game

    In this paper, we consider the Homicidal Chauffeur (HC) problem as an interesting and practical differential game. First, we introduce a bilevel optimal control problem (BOCP) and prove that a saddle-point solution for this game exists if and only if this BOCP has an optimal solution in which the optimal value of the objective function is equal to 1. The BOCP is then discretized and converted into a nonlinear bilevel programming problem, which is solved with an Artificial Bee Colony (ABC) algorithm in which the lower-level problem is treated as a constraint and handled by an NLP solver. Finally, to demonstrate the effectiveness of the presented method, various cases of the HC problem are solved and the simulation results are reported.
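    For context, the classical reduced-coordinate kinematics of the Homicidal Chauffeur game (following Isaacs) describe the evader's position (x, y) in the pursuer's frame; the formulation actually used in the paper may differ.

```latex
% Classical reduced-coordinate Homicidal Chauffeur kinematics (Isaacs):
% w_1, w_2 are the pursuer and evader speeds (typically w_2 < w_1),
% R the pursuer's minimum turning radius, u the pursuer's steering,
% and \psi the evader's heading.
\[
\begin{aligned}
\dot{x} &= -\frac{w_1}{R}\, y\, u + w_2 \sin\psi, \\
\dot{y} &= \frac{w_1}{R}\, x\, u - w_1 + w_2 \cos\psi,
\qquad |u| \le 1 .
\end{aligned}
\]
```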

    Air Force Institute of Technology Research Report 2015

    This report summarizes the research activities of the Air Force Institute of Technology’s Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses/dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are: faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems Engineering and Management, Operational Sciences, Mathematics, Statistics and Engineering Physics.

    Advancements in Adversarially-Resilient Consensus and Safety-Critical Control for Multi-Agent Networks

    The capabilities of and demand for complex autonomous multi-agent systems, including networks of unmanned aerial vehicles and mobile robots, are rapidly increasing in both research and industry settings. As the size and complexity of these systems increase, dealing with faults and failures becomes a crucial element that must be accounted for when performing control design. In addition, the last decade has witnessed an ever-accelerating proliferation of adversarial attacks on cyber-physical systems across the globe. In response to these challenges, recent years have seen an increased focus on resilience of multi-agent systems to faults and adversarial attacks. Broadly speaking, resilience refers to the ability of a system to accomplish control or performance objectives despite the presence of faults or attacks. Ensuring the resilience of cyber-physical systems is an interdisciplinary endeavor that can be tackled using a variety of methodologies. This dissertation approaches the resilience of such systems from a control-theoretic viewpoint and presents several novel advancements in resilient control methodologies. First, advancements in resilient consensus techniques are presented that allow normally-behaving agents to achieve state agreement in the presence of adversarial misinformation. Second, graph-theoretic tools for constructing and analyzing the resilience of multi-agent networks are derived. Third, a method for resilient broadcasting of vector-valued information from a set of leaders to a set of followers in the presence of adversarial misinformation is presented, and these results are applied to the problem of propagating entire knowledge of time-varying Bezier-curve-based trajectories from leaders to followers. Finally, novel results are presented for guaranteeing safety preservation of heterogeneous control-affine multi-agent systems with sampled-data dynamics in the presence of adversarial agents.
    PhD dissertation, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168102/1/usevitch_1.pd
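    The dissertation's algorithms are not reproduced in this listing; as background for the resilient-consensus theme, the sketch below shows a classic trimmed-mean (W-MSR-style) update in which each normal agent discards the F largest and F smallest neighbor values relative to its own state before averaging. The graph, the value of F, and the adversary model are assumptions made for illustration.

```python
import numpy as np

# Background sketch (not from the dissertation): a W-MSR-style resilient
# consensus step. Each normal agent removes up to F neighbor values above
# its own state and up to F below, then averages what remains together
# with its own value. F, the graph, and the adversary are assumptions.

def wmsr_step(values, neighbors, F):
    """One synchronous update; values[i] is agent i's scalar state,
    neighbors[i] the indices of agent i's in-neighbors."""
    new_values = values.copy()
    for i, nbrs in neighbors.items():
        sorted_nbrs = sorted(nbrs, key=lambda j: values[j])
        above = [j for j in sorted_nbrs if values[j] > values[i]]
        below = [j for j in sorted_nbrs if values[j] < values[i]]
        # Discard up to F largest values above own and up to F smallest below.
        discard = set(above[-F:] if F and above else []) | set(below[:F] if F and below else [])
        kept_vals = [values[j] for j in set(nbrs) - discard] + [values[i]]
        new_values[i] = np.mean(kept_vals)
    return new_values

# Example: agent 3 is adversarial and keeps broadcasting a large constant.
values = np.array([0.0, 1.0, 2.0, 100.0])
neighbors = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3]}  # only normal agents update
for _ in range(20):
    values = wmsr_step(values, neighbors, F=1)
    values[3] = 100.0  # adversary holds its value
print(values[:3])  # normal agents converge near each other despite the adversary
```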