Masquerade attack detection through observation planning for multi-robot systems
The increasing adoption of autonomous mobile robots comes with
a rising concern over the security of these systems. In this work, we
examine the dangers that an adversary could pose in a multi-agent
robot system. We show that conventional multi-agent plans are
vulnerable to strong attackers masquerading as a properly functioning
agent. We propose a novel technique to incorporate attack
detection into the multi-agent path-finding problem through the
simultaneous synthesis of observation plans. We show that by
specially crafting the multi-agent plan, the induced inter-agent
observations provide introspective monitoring guarantees: any
adversarial agent that plans to break the system-wide security
specification must necessarily violate the induced observation plan.
Resilience of multi-robot systems to physical masquerade attacks
The advent of autonomous mobile multi-robot systems has driven innovation in both the industrial and defense sectors. The integration of such systems in safety- and security-critical applications has raised concern over their resilience to attack. In this work, we investigate the security problem of a stealthy adversary masquerading as a properly functioning agent. We show that conventional multi-agent pathfinding solutions are vulnerable to these physical masquerade attacks. Furthermore, we provide a constraint-based formulation of multi-agent pathfinding that yields multi-agent plans that are provably resilient to physical masquerade attacks. This formalization leverages inter-agent observations to facilitate introspective monitoring and thereby guarantee resilience.
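The introspective-monitoring idea above can be illustrated with a toy check, not the paper's actual constraint formulation: given a joint plan, verify that at every designated checkpoint each agent is within sensing range of some teammate, so that a deviation from the plan would be noticed by an observer. The grid world, Chebyshev sensing model, function name, and `checkpoints` structure below are all illustrative assumptions.

```python
def observation_plan_holds(paths, sense_range, checkpoints):
    """paths: dict agent -> list of (x, y) grid positions per timestep.
    checkpoints: dict agent -> list of timesteps at which that agent
    must be observed.  Returns True iff at every checkpoint some other
    agent is within sense_range (Chebyshev distance) of the agent, so
    a deviation from the plan at that point would be seen."""
    for agent, times in checkpoints.items():
        for t in times:
            ax, ay = paths[agent][t]
            seen = any(
                max(abs(ax - x), abs(ay - y)) <= sense_range
                for other, p in paths.items() if other != agent
                for (x, y) in [p[t]]
            )
            if not seen:
                return False
    return True
```

A constraint-based planner in the spirit of the abstract would impose this predicate as a hard constraint during plan synthesis rather than checking it after the fact.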
Crowd Vetting: Rejecting Adversaries via Collaboration--with Application to Multi-Robot Flocking
We characterize the advantage of using a robot's neighborhood to find and
eliminate adversarial robots in the presence of a Sybil attack. We show that by
leveraging the opinions of its neighbors on the trustworthiness of transmitted
data, robots can detect adversaries with high probability. We characterize
the number of communication rounds required to achieve this result as a
function of the communication quality and the proportion of legitimate to
malicious robots. This result enables increased resiliency of many multi-robot
algorithms. Because our results are finite time and not asymptotic, they are
particularly well-suited for problems with a time critical nature. We develop
two algorithms, \emph{FindSpoofedRobots} that determines trusted neighbors with
high probability, and \emph{FindResilientAdjacencyMatrix} that enables
distributed computation of graph properties in an adversarial setting. We apply
our methods to a flocking problem where a team of robots must track a moving
target in the presence of adversarial robots. We show that by using our
algorithms, the team of robots is able to maintain tracking of the
dynamic target.
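The neighbor-opinion idea can be sketched minimally; this is not the FindSpoofedRobots algorithm itself, whose guarantees come from the finite-time analysis in the paper. In the sketch, each robot collects one binary trust reading per communication round about each neighbor and trusts a neighbor only if a strict majority of readings vouch for it; with per-round accuracy above one half, more rounds drive the error probability down. The data layout and function name are assumptions.

```python
def vet_neighbors(observations):
    """observations: dict neighbor_id -> list of 0/1 trust readings,
    one per communication round.  A neighbor is trusted iff a strict
    majority of readings vouch for it; if each reading is correct with
    probability p > 1/2, the majority is wrong with probability that
    decays exponentially in the number of rounds."""
    trusted = set()
    for nid, readings in observations.items():
        if 2 * sum(readings) > len(readings):  # strict majority of 1s
            trusted.add(nid)
    return trusted
```

Because the error bound holds after finitely many rounds, a caller can fix the round budget in advance, which is what makes this style of vetting usable in time-critical tasks like target tracking.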
Guaranteeing Spoof-Resilient Multi-Robot Networks
Multi-robot networks use wireless communication to provide wide-ranging services such as aerial surveillance and unmanned delivery. However, effective coordination between multiple robots requires trust, making them particularly vulnerable to cyber-attacks. Specifically, such networks can be gravely disrupted by the Sybil attack, where even a single malicious robot can spoof a large number of fake clients. This paper proposes a new solution to defend against the Sybil attack, without requiring expensive cryptographic key-distribution. Our core contribution is a novel algorithm implemented on commercial Wi-Fi radios that can "sense" spoofers using the physics of wireless signals. We derive theoretical guarantees on how this algorithm bounds the impact of the Sybil attack on a broad class of robotic coverage problems. We experimentally validate our claims using a team of AscTec quadrotor servers and iRobot Create ground clients, and demonstrate spoofer detection rates over 96%.
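The detector above operates on physical-layer Wi-Fi measurements; as a loose, hypothetical analogue only, one can flag clients whose signal fingerprints are nearly identical, since fake identities emitted by a single radio share one physical channel. The cosine-similarity test, threshold, and fingerprint representation below are illustrative assumptions and not the paper's method.

```python
import math

def flag_spoofed_clients(fingerprints, sim_threshold):
    """fingerprints: dict client_id -> signal profile vector (e.g. a
    per-direction received-power profile).  Clients spoofed by one
    transmitter share the same physical channel, so their profiles are
    nearly identical; of each near-identical pair, the later client is
    flagged as a suspect."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    ids = list(fingerprints)
    suspects = set()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if cos(fingerprints[a], fingerprints[b]) > sim_threshold:
                suspects.add(b)  # keep the first of each similar pair
    return suspects
```

In the coverage setting of the paper, down-weighting or discarding the flagged clients is what bounds the influence a single spoofer can exert on the allocation.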
A Filtering Approach for Resiliency of Distributed Observers against Smart Spoofers
A network of observers is considered, where through asynchronous (with
bounded delay) communications, they all estimate the states of a Linear
Time-Invariant (LTI) system. In such a setting, a new type of adversarial
node might affect the observation process by impersonating the identity of
regular nodes, a violation of communication authenticity. These
adversaries also inherit the capabilities of Byzantine nodes, making them
more powerful threats called smart spoofers. We show how asynchronous
networks are vulnerable to the smart spoofing attack. In the estimation
scheme considered in this paper, information flows from sets of source
nodes, each of which can detect a portion of the state variables, to the
other follower nodes. To avoid being misled by these threats, the regular
nodes distributively filter the extreme values received from the nodes in
their neighborhood. Topological conditions based on graph strong
robustness are proposed to guarantee convergence. Two simulation
scenarios are provided to verify the results.
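The extreme-value filtering step resembles MSR-style trimming: a regular node sorts the estimates received from its neighbors, discards the f largest and f smallest, and averages the remainder together with its own value, which bounds the influence of up to f smart spoofers in its neighborhood. A minimal sketch, with the scalar-state simplification and function name as assumptions:

```python
def filtered_update(own_value, neighbor_values, f):
    """One MSR-style update step for a scalar state estimate.

    Discards the f largest and f smallest received values, then averages
    the survivors with the node's own value.  Any value a spoofer can
    inject either gets trimmed or lies inside the range of legitimate
    values, so its influence is bounded; convergence additionally
    requires the graph-robustness conditions from the text."""
    vals = sorted(neighbor_values)
    kept = vals[f:len(vals) - f] if f > 0 else vals
    pool = kept + [own_value]
    return sum(pool) / len(pool)
```

For example, with neighbors reporting [-100, 1, 1, 100] and f = 1, both outliers are trimmed regardless of which node sent them.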
How Physicality Enables Trust: A New Era of Trust-Centered Cyberphysical Systems
Multi-agent cyberphysical systems enable new capabilities in efficiency,
resilience, and security. The unique characteristics of these systems
prompt a reevaluation of their security concepts, including their
vulnerabilities and the mechanisms to mitigate them. This survey paper
examines how advancements in wireless networking, coupled with the sensing
and computing in cyberphysical systems, can foster novel security
capabilities. This study delves into three main themes related to securing
multi-agent cyberphysical systems. First, we discuss the threats that are
particularly relevant to multi-agent cyberphysical systems given the
potential lack of trust between agents. Second, we present prospects for
sensing, contextual awareness, and authentication, enabling the inference
and measurement of "inter-agent trust" for these systems. Third, we
elaborate on the application of quantifiable trust notions to enable
"resilient coordination," where "resilient" signifies sustained
functionality amid attacks on multi-agent cyberphysical systems. We refer
to the capability of cyberphysical systems to self-organize and coordinate
to achieve a task as autonomy. This survey unveils the cyberphysical
character of future interconnected systems as a pivotal catalyst for
realizing robust, trust-centered autonomy in tomorrow's world.
A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication
Multi-Agent Systems (MAS) is the study of multi-agent interactions in a
shared environment. Communication for cooperation is a fundamental
construct for sharing information in partially observable environments.
Cooperative Multi-Agent Reinforcement Learning (CoMARL) is a learning
framework in which agent policies are learned either with explicit
cooperative mechanisms or such that they exhibit cooperative behavior.
Explicitly, there are works on learning to communicate messages among
CoMARL agents; however, non-cooperative agents, when capable of accessing
a cooperative team's communication channel, have been shown to learn
adversarial communication messages, sabotaging the cooperative team's
performance, particularly when objectives depend on finite resources. To
address this issue, we propose a technique that leverages local
formulations of Theory-of-Mind (ToM) to distinguish exhibited cooperative
behavior from non-cooperative behavior before accepting messages from any
agent. We demonstrate the efficacy and feasibility of the proposed
technique in empirical evaluations in a centralized training,
decentralized execution (CTDE) CoMARL benchmark. Furthermore, while we
propose our explicit ToM defense for test-time, we emphasize that ToM is
a construct for designing a cognitive defense rather than being the
objective of the defense itself.
Managing Byzantine Robots via Blockchain Technology in a Swarm Robotics Collective Decision Making Scenario
While swarm robotics systems are often claimed to be highly fault-tolerant, so far research has limited its attention to safe laboratory settings and has virtually ignored security issues in the presence of Byzantine robots, i.e., robots with arbitrarily faulty or malicious behavior. However, in many applications one or more Byzantine robots may suffice to let current swarm coordination mechanisms fail with unpredictable or disastrous outcomes. In this paper, we provide a proof-of-concept for managing security issues in swarm robotics systems via blockchain technology. Our approach uses decentralized programs executed via blockchain technology (blockchain-based smart contracts) to establish secure swarm coordination mechanisms and to identify and exclude Byzantine swarm members. We studied the performance of our blockchain-based approach in a collective decision-making scenario both in the presence and absence of Byzantine robots and compared our results to those obtained with an existing collective decision approach. The results show a clear advantage of the blockchain approach when Byzantine robots are part of the swarm. Funded by the Marie Skłodowska-Curie actions (EU project BROS - DLV-751615).
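The exclusion mechanism can be caricatured off-chain; the paper realizes it as blockchain-based smart contracts, whereas the sketch below merely treats the ledger as an append-only record of each robot's reported opinions, takes the overall mode as the consensus, and blacklists robots whose dissent rate exceeds a threshold. The data structures, function name, and threshold rule are illustrative assumptions.

```python
from collections import Counter

def exclude_byzantine(ledger, dissent_threshold):
    """ledger: dict robot_id -> list of opinions (e.g. 'A' or 'B')
    recorded on the shared chain.  The consensus opinion is the mode
    over all recorded entries; a robot whose fraction of dissenting
    entries exceeds dissent_threshold is excluded, mimicking a smart
    contract that blacklists Byzantine members."""
    all_ops = [op for ops in ledger.values() for op in ops]
    consensus, _ = Counter(all_ops).most_common(1)[0]
    excluded = set()
    for rid, ops in ledger.items():
        dissent = sum(1 for op in ops if op != consensus) / len(ops)
        if dissent > dissent_threshold:
            excluded.add(rid)
    return consensus, excluded
```

Running the vote on-chain is what makes the record tamper-evident: a Byzantine robot cannot retroactively rewrite its reported opinions to dodge exclusion.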