A Uniform Treatment of Architectures in Decentralized Discrete-Event Systems
Solutions to decentralized discrete-event systems problems are characterized
by the way local decisions are fused to yield a global decision. A fusion rule
is colloquially called an architecture. This paper provides a uniform treatment
of architectures in decentralized discrete-event systems. Current approaches
provide neither a direct way to determine problem solvability conditions under
a given architecture nor a way to compare existing architectures. Determining
whether a new architecture is more general than an existing known architecture
relies on producing ad hoc examples and on individual inspiration to put the
solvability conditions for each architecture into a form that admits
comparison. From these research efforts, a method based on morphisms between
graphs has been extracted to yield a uniform approach to decentralized
discrete-event system architectures and their attendant fusion rules. This
treatment provides an easy and direct way to compare the fusion rules, and
hence to compare the strength or generality of the corresponding architectures.
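The idea of comparing architectures through their fusion rules can be illustrated with a toy sketch. The example below is an illustration only, not the paper's graph-morphism construction: it tabulates two classic fusion rules (conjunctive and disjunctive) over all local-decision profiles, which makes the rules directly comparable.

```python
from itertools import product

# Toy illustration (not the paper's construction): a fusion rule maps each
# profile of local decisions to a global decision.

def conjunctive(decisions):           # AND-based architecture: enable only
    return all(decisions)             # if every local supervisor enables

def disjunctive(decisions):           # OR-based architecture: enable if
    return any(decisions)             # any local supervisor enables

def decision_table(rule, n_agents):
    """Tabulate a fusion rule over all local-decision profiles."""
    return {p: rule(p) for p in product([False, True], repeat=n_agents)}

tab_and = decision_table(conjunctive, 2)
tab_or = decision_table(disjunctive, 2)

# The two tables differ exactly on the mixed profiles, which is where the
# generality of the two architectures diverges.
diff = {p for p in tab_and if tab_and[p] != tab_or[p]}
```

Once fusion rules are in this tabular form, "more general" questions reduce to structural comparisons between the tables, which is the kind of comparison the paper's morphism-based treatment formalizes.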
Distributed Monitoring of Robot Swarms with Swarm Signal Temporal Logic
In this paper, we develop a distributed monitoring framework for robot swarms
so that the agents can monitor whether the executions of a robot swarm satisfy
Swarm Signal Temporal Logic (SwarmSTL) formulas. We define generalized moments
(GMs) to represent swarm features. A dynamic generalized moments consensus
algorithm (GMCA) with a Kalman filter (KF) is proposed so that each agent can
estimate the GMs. We also derive an upper bound on the error between an
agent's estimate and the actual GMs; this bound is independent of the agents'
motion. We further propose rules for monitoring SwarmSTL temporal and logical
operators. Using these rules and the estimation-error bound, the agents can
monitor whether the swarm satisfies SwarmSTL formulas with a certain
confidence level. The distributed monitoring framework is applied to an
example of a swarm transporting supplies, where we also show the efficacy of
the Kalman filter in the dynamic generalized moments consensus process.
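The consensus step at the heart of such a framework can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (static agent positions, a fixed ring communication topology, no dynamics and no Kalman filtering): each agent runs average consensus to estimate the swarm centroid, i.e. a first generalized moment.

```python
import numpy as np

# Minimal sketch: agents on a ring graph estimate the swarm centroid
# (a first generalized moment) by iterated weighted averaging.
rng = np.random.default_rng(0)
n = 5
positions = rng.uniform(0, 10, size=(n, 2))
estimates = positions.copy()      # each agent starts from its own position

# Doubly stochastic averaging weights for a ring graph: each agent mixes
# its own estimate with its two neighbors'.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(200):
    estimates = W @ estimates     # consensus step: x_i <- sum_j W_ij x_j

centroid = positions.mean(axis=0)
worst_error = np.abs(estimates - centroid).max()
```

Because the weight matrix is doubly stochastic and the ring is connected, every agent's estimate converges geometrically to the true centroid; the paper's GMCA additionally handles moving agents and measurement noise, which is where the Kalman filter enters.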
Multi-Robot Symbolic Task and Motion Planning Leveraging Human Trust Models: Theory and Applications
Multi-robot systems (MRS) can accomplish more complex tasks with two or more robots and have produced a broad set of applications. The presence of a human operator in an MRS can guarantee safe task execution, but human operators can be subject to heavier stress and cognitive workload when collaborating with an MRS than with a single robot. It is therefore important for the MRS to have a provably correct task and motion planning solution for a complex task: this reduces the human workload during task supervision and improves the reliability of human-MRS collaboration. This dissertation relies on formal verification to provide provably correct solutions for robotic systems. One of the challenges in task and motion planning under temporal logic task specifications is developing computationally efficient MRS frameworks. The dissertation first presents an automaton-based task and motion planning framework for MRS to satisfy finite words of linear temporal logic (LTL) task specifications in parallel and concurrently. Furthermore, the dissertation develops a computational trust model to improve human-MRS collaboration on a motion task. Notably, current works commonly underemphasize environmental attributes when investigating the factors that affect human trust in robots. Our computational trust model builds a linear state-space (LSS) equation to capture the influence of environment attributes on human trust in an MRS. A Bayesian optimization based experimental design (BOED) is proposed to sequentially learn the human-MRS trust model parameters in a data-efficient way. Finally, the dissertation shapes a reward function for a human-MRS collaborative complex task by referring to the above LTL task specification and computational trust model. A Bayesian active reinforcement learning (RL) algorithm is used to concurrently learn the shaped reward function and explore the most trustworthy task and motion planning solution.
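The shape of a linear state-space trust update can be sketched as follows. The matrices, the two environment attributes, and their values are all hypothetical placeholders for illustration, not the dissertation's fitted model.

```python
import numpy as np

# Hypothetical LSS trust update x_{t+1} = A x_t + B u_t, where x is the
# (scalar) trust state and u collects environment attributes.  All values
# below are illustrative placeholders, not fitted parameters.
A = np.array([[0.9]])         # trust persistence
B = np.array([[0.3, -0.5]])   # gains for two assumed environment
                              # attributes (e.g. visibility, clutter)

def trust_step(trust, env):
    """One linear state-space update of the trust state."""
    return A @ trust + B @ env

trust = np.array([0.5])
for env in [np.array([0.8, 0.1]), np.array([0.6, 0.4])]:
    trust = trust_step(trust, env)
# trust[0] is now 0.9*(0.9*0.5 + 0.3*0.8 - 0.5*0.1) + 0.3*0.6 - 0.5*0.4
```

In a setup like this, learning the trust model amounts to estimating A and B from human responses, which is the parameter-identification problem the dissertation addresses with Bayesian optimization based experimental design.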