Towards time-varying proximal dynamics in Multi-Agent Network Games
Distributed decision making in multi-agent networks has recently attracted
significant research attention thanks to its wide applicability, e.g., in the
management and optimization of computer networks, power systems, robotic teams,
sensor networks and consumer markets. Distributed decision-making problems can
be modeled as inter-dependent optimization problems, i.e., multi-agent
game-equilibrium seeking problems, where noncooperative agents seek an
equilibrium by communicating over a network. To achieve a network equilibrium,
the agents may decide to update their decision variables via proximal dynamics,
driven by the decision variables of the neighboring agents. In this paper, we
provide an operator-theoretic characterization of convergence with a
time-invariant communication network. For the time-varying case, we consider
adjacency matrices that may switch subject to a dwell time. We illustrate our
investigations using a distributed robotic exploration example.
Comment: 6 pages, 3 figures
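The proximal update the abstract describes, where each agent's decision is driven by the decisions of its neighbors, can be sketched as follows. This is a minimal illustration, not the paper's exact operator: the quadratic local costs, the ring network, and the proximal weight `lam` are all assumptions made for the example.

```python
# Minimal sketch of proximal dynamics in a network game. Each agent i has
# a hypothetical quadratic cost f_i(x) = (x - a_i)^2 and updates via a
# proximal step toward the average decision of its neighbors.
import numpy as np

def proximal_step(x, A, a, lam=1.0):
    """One synchronous proximal update for all agents.

    x   : current decisions, shape (n,)
    A   : adjacency matrix of the communication network, shape (n, n)
    a   : each agent's private target a_i, shape (n,)
    lam : proximal weight coupling an agent to its neighbors
    """
    nbr_mean = (A @ x) / A.sum(axis=1)
    # argmin_x (x - a_i)^2 + (lam/2)(x - nbr_mean_i)^2 in closed form
    return (2 * a + lam * nbr_mean) / (2 + lam)

# 4 agents on a ring network; iterate the dynamics to a fixed point
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
a = np.array([0.0, 1.0, 2.0, 3.0])
x = np.zeros(4)
for _ in range(200):
    x = proximal_step(x, A, a)
print(np.round(x, 3))  # equilibrium balancing private targets and neighbors
```

With a time-invariant network, as here, the iteration is a contraction and converges to the network equilibrium; the time-varying case in the paper lets `A` switch subject to a dwell time.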
Human-Robot Trust Integrated Task Allocation and Symbolic Motion planning for Heterogeneous Multi-robot Systems
This paper presents a human-robot trust integrated task allocation and motion
planning framework for multi-robot systems (MRS) in performing a set of tasks
concurrently. A set of parallel task specifications is conjoined with the MRS
model to synthesize a task allocation automaton. Each transition of the task
allocation automaton is associated with the total trust the human places in
the corresponding robots. Here, the human-robot trust model is constructed with a
dynamic Bayesian network (DBN) by considering individual robot performance,
safety coefficient, human cognitive workload and overall evaluation of task
allocation. Hence, a task allocation path with maximum encoded human-robot
trust can be searched based on the current trust value of each robot in the
task allocation automaton. Symbolic motion planning (SMP) is implemented for
each robot after they obtain the sequence of actions. The task allocation path
can be intermittently updated with this DBN-based trust model. The overall
strategy is demonstrated by a simulation with 5 robots and 3 parallel subtask
automata.
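The search for a task allocation path with maximum encoded trust can be sketched as a best-path computation over the automaton's transition graph. This is an illustrative toy, not the paper's method: the state names, the edge trust values, and the use of summed trust as the path score are all hypothetical.

```python
# Illustrative sketch: find the task-allocation path with maximum total
# trust in a small automaton, modeled as a DAG whose transitions carry
# the trust value of the robot assigned on that transition.

# edges: state -> list of (next_state, trust_of_assigned_robot)
automaton = {
    "s0": [("s1", 0.8), ("s2", 0.65)],
    "s1": [("s3", 0.7)],
    "s2": [("s3", 0.9)],
    "s3": [],  # accepting state: all subtasks allocated
}

def best_path(state):
    """Return (total_trust, path) maximizing summed trust to the goal."""
    if not automaton[state]:
        return 0.0, [state]
    options = []
    for nxt, trust in automaton[state]:
        sub_trust, sub_path = best_path(nxt)
        options.append((trust + sub_trust, [state] + sub_path))
    return max(options, key=lambda opt: opt[0])

trust, path = best_path("s0")
print(round(trust, 2), path)
```

Re-running this search whenever the DBN updates a robot's trust value gives the intermittent re-planning behavior the abstract describes.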
Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions
The focus of this paper is on solving multi-robot planning problems in
continuous spaces with partial observability. Decentralized partially
observable Markov decision processes (Dec-POMDPs) are general models for
multi-robot coordination problems, but representing and solving Dec-POMDPs is
often intractable for large problems. To allow for a high-level representation
that is natural for multi-robot problems and scalable to large discrete and
continuous problems, this paper extends the Dec-POMDP model to the
decentralized partially observable semi-Markov decision process (Dec-POSMDP).
The Dec-POSMDP formulation allows asynchronous decision-making by the robots,
which is crucial in multi-robot domains. We also present an algorithm for
solving this Dec-POSMDP which is much more scalable than previous methods since
it can incorporate closed-loop belief space macro-actions in planning. These
macro-actions are automatically constructed to produce robust solutions. The
proposed method's performance is evaluated on a complex multi-robot package
delivery problem under uncertainty, showing that our approach can naturally
represent multi-robot problems and provide high-quality solutions for
large-scale problems.
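The asynchronous decision-making that macro-actions enable can be sketched with a small event-driven simulation. Everything here is hypothetical (robot names, macro-action names, fixed durations): the point is only that each robot chooses its next macro-action when its current one terminates, so decision points are not synchronized across robots.

```python
# A minimal sketch of asynchronous macro-action execution in the
# Dec-POSMDP spirit: decisions happen at each robot's own macro-action
# completion times, not in lockstep.
import heapq

# policy: robot -> cyclic list of (macro_action, duration)
policies = {
    "r1": [("goto_depot", 3), ("pick_package", 2)],
    "r2": [("goto_zone", 5), ("drop_package", 1)],
}

def simulate(policies, horizon=10):
    """Event-driven simulation; returns (time, robot, macro_action) decisions."""
    events = []  # min-heap of (completion_time, robot, index into policy)
    for robot in policies:
        heapq.heappush(events, (0, robot, 0))
    log = []
    while events:
        t, robot, i = heapq.heappop(events)
        if t >= horizon:
            continue
        action, duration = policies[robot][i % len(policies[robot])]
        log.append((t, robot, action))
        heapq.heappush(events, (t + duration, robot, i + 1))
    return log

for entry in simulate(policies):
    print(entry)
```

In the actual Dec-POSMDP, each macro-action is a closed-loop belief-space controller with a termination condition rather than a fixed duration, but the asynchronous structure of the decision points is the same.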
Planning for Decentralized Control of Multiple Robots Under Uncertainty
We describe a probabilistic framework for synthesizing control policies for
general multi-robot systems, given environment and sensor models and a cost
function. Decentralized, partially observable Markov decision processes
(Dec-POMDPs) are a general model of decision processes where a team of agents
must cooperate to optimize some objective (specified by a shared reward or cost
function) in the presence of uncertainty, but where communication limitations
mean that the agents cannot share their state, so execution must proceed in a
decentralized fashion. While Dec-POMDPs are typically intractable to solve for
real-world problems, recent research on the use of macro-actions in Dec-POMDPs
has significantly increased the size of problem that can be practically solved
as a Dec-POMDP. We describe this general model, and show how, in contrast to
most existing methods that are specialized to a particular problem class, it
can synthesize control policies that use whatever opportunities for
coordination are present in the problem, while trading off uncertainty in
outcomes, sensor information, and information about other agents. We use three
variations on a warehouse task to show that a single planner of this type can
generate cooperative behavior using task allocation, direct communication, and
signaling, as appropriate.
Beyond Basins of Attraction: Quantifying Robustness of Natural Dynamics
Properly designing a system to exhibit favorable natural dynamics can greatly
simplify designing or learning the control policy. However, it is still unclear
what constitutes favorable natural dynamics and how to quantify its effect.
Most studies of simple walking and running models have focused on the basins of
attraction of passive limit-cycles and the notion of self-stability. We instead
emphasize the importance of stepping beyond basins of attraction. We show an
approach based on viability theory to quantify robust sets in state-action
space. These sets are valid for the family of all robust control policies,
which allows us to quantify the robustness inherent to the natural dynamics
before designing the control policy or specifying a control objective. We
illustrate our formulation using spring-mass models, simple low-dimensional
models of running systems. We then show an example application by optimizing
robustness of a simulated planar monoped, using a gradient-free optimization
scheme. Both case studies result in a nonlinear effective stiffness providing
more robustness.
Comment: 15 pages. This work has been accepted to IEEE Transactions on
Robotics (2019).
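The idea of quantifying robust sets in state-action space, rather than a single basin of attraction, can be illustrated on a toy discrete system. This is not the paper's spring-mass model: the one-dimensional unstable dynamics, the state range, and the bounded action set are all invented for the example. The computation is a fixed-point pruning that keeps exactly the states from which some admissible action leads back into the surviving set.

```python
# Toy illustration of computing a viable set on a small discrete state
# space: the states from which SOME bounded action keeps the system
# inside the allowed range forever, found by fixed-point pruning.
states = range(11)       # allowed states 0..10
actions = range(-2, 3)   # bounded control u in {-2, ..., 2}

def step(s, u):
    # hypothetical unstable drift away from the midpoint, plus control
    return s + (s - 5) + u

viable = set(states)
while True:
    # prune states with no action that stays inside the viable set
    pruned = {s for s in viable
              if not any(step(s, u) in viable for u in actions)}
    if not pruned:
        break
    viable -= pruned

print(sorted(viable))  # states that admit a robust action choice
```

Every state in the resulting set admits at least one action keeping the trajectory viable, which mirrors the abstract's notion of sets valid for the whole family of robust control policies, before any particular policy is chosen.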