
    Advancements in Adversarially-Resilient Consensus and Safety-Critical Control for Multi-Agent Networks

    The capabilities of and demand for complex autonomous multi-agent systems, including networks of unmanned aerial vehicles and mobile robots, are rapidly increasing in both research and industry settings. As the size and complexity of these systems grow, accounting for faults and failures becomes a crucial element of control design. In addition, the last decade has witnessed an ever-accelerating proliferation of adversarial attacks on cyber-physical systems across the globe. In response to these challenges, recent years have seen an increased focus on the resilience of multi-agent systems to faults and adversarial attacks. Broadly speaking, resilience refers to the ability of a system to accomplish control or performance objectives despite the presence of faults or attacks. Ensuring the resilience of cyber-physical systems is an interdisciplinary endeavor that can be tackled using a variety of methodologies. This dissertation approaches the resilience of such systems from a control-theoretic viewpoint and presents several novel advancements in resilient control methodologies. First, advancements in resilient consensus techniques are presented that allow normally behaving agents to achieve state agreement in the presence of adversarial misinformation. Second, graph-theoretic tools for constructing and analyzing the resilience of multi-agent networks are derived. Third, a method is presented for resiliently broadcasting vector-valued information from a set of leaders to a set of followers in the presence of adversarial misinformation, and these results are applied to propagating complete knowledge of time-varying Bézier-curve-based trajectories from leaders to followers. Finally, novel results are presented for guaranteeing safety preservation of heterogeneous control-affine multi-agent systems with sampled-data dynamics in the presence of adversarial agents.
    PhD dissertation, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/168102/1/usevitch_1.pd
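    A concrete point of reference for the resilient-consensus results above is the Weighted Mean-Subsequence-Reduced (W-MSR) family of filtering protocols. The following is a minimal Python sketch of one W-MSR-style update, assuming scalar agent states, uniform weights, and at most f adversarial values in each neighborhood; the function name and parameters are illustrative, not taken from the dissertation.

    # Minimal sketch of a W-MSR-style resilient consensus update, assuming
    # scalar states, uniform weights, and at most f adversaries per
    # neighborhood. Illustrative only; not code from the dissertation.

    def wmsr_step(x_i: float, neighbor_vals: list[float], f: int) -> float:
        """One update: trim extreme neighbor values, then average the rest."""
        above = sorted(v for v in neighbor_vals if v > x_i)
        below = sorted(v for v in neighbor_vals if v < x_i)
        equal = [v for v in neighbor_vals if v == x_i]
        # Discard the f largest values above x_i and the f smallest below it;
        # if fewer than f exist on a side, discard that whole side.
        above = above[:len(above) - f] if len(above) > f else []
        below = below[f:] if len(below) > f else []
        kept = above + below + equal + [x_i]
        return sum(kept) / len(kept)

    # Example: one adversary broadcasts an outlier; with f = 1 it is trimmed.
    print(wmsr_step(0.0, [0.1, -0.2, 0.05, 0.3, 100.0], f=1))  # -> 0.1125

    The key design property is that the trimming never uses the identity of the adversary: any value extreme enough to drag the average away from the honest agents' range is discarded regardless of which neighbor sent it.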

    Resilience of multi-robot systems to physical masquerade attacks

    The advent of autonomous mobile multi-robot systems has driven innovation in both the industrial and defense sectors. The integration of such systems in safety- and security-critical applications has raised concern over their resilience to attack. In this work, we investigate the security problem of a stealthy adversary masquerading as a properly functioning agent. We show that conventional multi-agent pathfinding solutions are vulnerable to these physical masquerade attacks. Furthermore, we provide a constraint-based formulation of multi-agent pathfinding that yields multi-agent plans that are provably resilient to physical masquerade attacks. This formalization leverages inter-agent observations to facilitate introspective monitoring and thereby guarantee resilience.
    Accepted manuscript.
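    The paper's constraint-based formulation is not reproduced here, but the introspective-monitoring idea can be illustrated with a hedged Python sketch: given a grid-based multi-agent plan and an assumed sensing radius, check reported trajectories against every observation the plan schedules. All identifiers (Plan, visible, check_observations, SENSE_RADIUS) are illustrative assumptions, not names from the paper.

    # Hypothetical sketch of introspective monitoring over a grid-based
    # multi-agent plan. All names here are illustrative, not from the paper.
    from typing import Dict, List, Tuple

    Cell = Tuple[int, int]
    Plan = Dict[str, List[Cell]]   # agent id -> planned cell at each timestep

    SENSE_RADIUS = 1  # assumed: agents see cells within Chebyshev distance 1

    def visible(a: Cell, b: Cell) -> bool:
        """True if an agent at cell a can observe cell b."""
        return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= SENSE_RADIUS

    def check_observations(plan: Plan, actual: Plan) -> List[str]:
        """Flag every deviation a scheduled (honest) observer would catch."""
        alarms = []
        horizon = len(next(iter(plan.values())))
        for t in range(horizon):
            for i in plan:
                for j in plan:
                    if i == j:
                        continue
                    # The plan schedules i to observe j at time t, so an
                    # honest observer at i's planned cell must find j at
                    # j's planned cell; any mismatch raises an alarm.
                    if visible(plan[i][t], plan[j][t]) and actual[j][t] != plan[j][t]:
                        alarms.append(f"t={t}: {i} expected {j} at "
                                      f"{plan[j][t]}, observed {actual[j][t]}")
        return alarms

    # Example: agent b deviates at t=1 while a is scheduled to observe it.
    plan   = {"a": [(0, 0), (0, 1)], "b": [(1, 0), (1, 1)]}
    actual = {"a": [(0, 0), (0, 1)], "b": [(1, 0), (2, 2)]}
    print(check_observations(plan, actual))
    # ['t=1: a expected b at (1, 1), observed (2, 2)']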

    Masquerade attack detection through observation planning for multi-robot systems

    The increasing adoption of autonomous mobile robots comes with rising concern over the security of these systems. In this work, we examine the dangers an adversary could pose in a multi-agent robot system. We show that conventional multi-agent plans are vulnerable to strong attackers masquerading as properly functioning agents. We propose a novel technique that incorporates attack detection into the multi-agent pathfinding problem through the simultaneous synthesis of observation plans. We show that by specially crafting the multi-agent plan, the induced inter-agent observations provide introspective monitoring guarantees: any adversarial agent that plans to break the system-wide security specification must necessarily violate the induced observation plan.
    Accepted manuscript.
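    As a hedged illustration of the kind of property an observation plan might be synthesized to enforce, the sketch below checks that every agent is observed by at least one peer at every timestep, so that any deviation from the plan necessarily breaks a scheduled observation. The helpers mirror the previous sketch; none of these names come from the paper, and this check is not the paper's actual synthesis procedure.

    # Hedged sketch of a coverage property a candidate plan could be checked
    # against: no agent is ever outside every other agent's sensing range.
    # Illustrative assumptions throughout; not the paper's method.
    from typing import Dict, List, Tuple

    Cell = Tuple[int, int]
    Plan = Dict[str, List[Cell]]

    def visible(a: Cell, b: Cell, radius: int = 1) -> bool:
        return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= radius

    def fully_monitored(plan: Plan) -> bool:
        """True if every agent is observed by some peer at every timestep."""
        horizon = len(next(iter(plan.values())))
        for t in range(horizon):
            for j in plan:
                if not any(visible(plan[i][t], plan[j][t])
                           for i in plan if i != j):
                    return False  # agent j is unobserved at time t
        return True

    # Example: b wanders out of a's range at t=1, so the plan fails the check.
    print(fully_monitored({"a": [(0, 0), (0, 1)], "b": [(1, 0), (5, 5)]}))  # False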