The Role of Models and Communication in the Ad Hoc Multiagent Team Decision Problem
Ad hoc teams are formed of members who have little or no information regarding one another. In order to achieve a shared goal, agents are tasked with learning the capabilities of their teammates so that they can coordinate effectively. Typically, the capabilities of the agent teammates encountered are constrained by the particular domain's specifications. However, for wide application, it is desirable to develop systems that can coordinate with general ad hoc agents independent of the choice of domain. We propose examining ad hoc multiagent teamwork from a generalized perspective and discuss existing domains within the context of our framework. Furthermore, we consider how communicating agent intentions can reduce teammate model uncertainty at key junctures, requiring an agent to reason about its own information deficiencies in order to form communicative acts that improve team coordination.
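As a rough, hedged illustration of the communication idea above, the following sketch maintains a Bayesian belief over two candidate teammate models and communicates only when the belief's entropy is high, i.e., at a juncture where the agent's own information deficiency is largest. The model names, policies, likelihood noise `eps`, and threshold are all illustrative assumptions, not constructs from the paper.

```python
import math

# Hypothetical teammate models (assumptions for illustration): each maps
# an observed situation to the action that model would choose.
MODELS = {
    "striker":  {"ball_left": "move_left", "ball_right": "move_right"},
    "defender": {"ball_left": "hold",      "ball_right": "hold"},
}

def bayes_update(belief, situation, observed_action, eps=0.1):
    """One Bayes step: a model 'explains' the observed action with
    probability 1 - eps, and any other action with probability eps."""
    posterior = {}
    for model, policy in MODELS.items():
        like = 1 - eps if policy[situation] == observed_action else eps
        posterior[model] = belief[model] * like
    z = sum(posterior.values())
    return {m: p / z for m, p in posterior.items()}

def entropy(belief):
    """Shannon entropy (bits) of the belief over teammate models."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def should_communicate(belief, threshold=0.9):
    """A crude 'key juncture' test: ask the teammate to state its
    intentions only while model uncertainty is still high."""
    return entropy(belief) > threshold

belief = {m: 1 / len(MODELS) for m in MODELS}           # uniform prior
print(should_communicate(belief))                       # True: nothing observed yet
belief = bayes_update(belief, "ball_left", "move_left")
print(belief, should_communicate(belief))               # uncertainty has dropped
```

In a fuller treatment, the juncture test would also weigh the decision-relevance of the remaining uncertainty and the cost of the communicative act, not entropy alone.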
Automatically Characterizing Product and Process Incentives in Collective Intelligence
Social media facilitate interaction and information dissemination among an unprecedented number of participants. Why do users contribute, and why do they contribute to a specific venue? Does the information they receive cover all relevant points of view, or is it biased? The substantial and increasing importance of online communication makes these questions more pressing, but also puts answers within reach of automated methods. I investigate scalable algorithms for understanding two classes of incentives that arise in collective intelligence processes. Product incentives exist when contributors have a stake in the information delivered to other users. I investigate product-relevant changes in user behavior, algorithms for characterizing the topics and points of view presented in peer-produced content, and the results of a field experiment with a prediction market framework that carries associated product incentives. Process incentives exist when users find contributing intrinsically rewarding. Algorithms that are aware of process incentives can predict the effect of feedback on where users will make contributions, and can learn about the structure of a conversation by observing when users choose to participate in it. Learning from large-scale social interactions allows us to monitor the quality of information and the health of venues, but also provides fresh insights into human behavior.
Fly with me: algorithms and methods for influencing a flock
As robots become more affordable, they will begin to exist in the world in greater quantities. Some of these robots will likely be designed to act as components in specific teams. These teams could work on tasks that are too large or complex for a single robot - or that are merely more efficiently accomplished by a team - such as surveillance in a large building or product delivery to packers in a warehouse. Multiagent systems research studies how these teams are formed and how they work together.
Ad hoc teamwork, a newer area of multiagent systems research, studies how new robots can join these pre-existing teams and assist the team in accomplishing its goal. This dissertation extends and applies research in ad hoc teamwork to the general area of flocking, which is an emergent swarm behavior. In particular, the work in this dissertation considers how ad hoc agents - called influencing agents in this dissertation - can join a flock, be recognized by the rest of the flock as part of the flock, influence the flock towards particular behaviors through their own behavior, and then separate from the flock. Specifically, the primary research question addressed in this dissertation is: "How can influencing agents be utilized in various types of flocks to influence the flock towards a particular behavior?"
In order to address this research question, this dissertation makes six main contributions. First, this dissertation formalizes the problem of using influencing agents to influence a flock. Second, this dissertation contributes and analyzes algorithms for influencing a flock towards a desired orientation. Third, this dissertation presents methods for determining how best to add influencing agents to a flock. Fourth, this dissertation provides methods by which influencing agents can join and then leave a flock in motion. Fifth, this dissertation evaluates some of the influencing agent algorithms on a robot platform. Sixth, although the majority of this dissertation assumes the influencing agents will join a flock that behaves similarly to European starlings, this dissertation also provides insight into when and how its algorithms generalize to other types of flocks, as well as to general teamwork and coordination research. All of the methods presented in this dissertation are empirically evaluated using a simulator that can support large flocks.
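To make the flock-influencing setting concrete, here is a minimal sketch of an alignment-only, Boids-style flock in which a few influencing agents "overshoot" past a target orientation so that the local average, which includes the slower ordinary agents, lands nearer to it. The overshoot rule, parameters, and demo values are illustrative assumptions, a stand-in for the dissertation's actual algorithms rather than a reproduction of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_mean(angles):
    """Mean of angles (radians), computed on the unit circle."""
    return np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())

def step(positions, headings, influencers, target, radius=5.0, speed=0.5):
    """One synchronous update of an alignment-only, Boids-style flock.

    Ordinary agents adopt the circular mean heading of all agents within
    `radius`. Agents flagged in `influencers` instead overshoot past
    `target` so the neighbourhood average is pulled towards it."""
    new_headings = headings.copy()
    for i in range(len(headings)):
        near = np.linalg.norm(positions - positions[i], axis=1) < radius
        local_mean = circular_mean(headings[near])
        if influencers[i]:
            # Wrapped angular error between target and the local average.
            err = np.arctan2(np.sin(target - local_mean),
                             np.cos(target - local_mean))
            new_headings[i] = local_mean + 2.0 * err   # overshoot rule
        else:
            new_headings[i] = local_mean
    positions += speed * np.c_[np.cos(new_headings), np.sin(new_headings)]
    return positions, new_headings

# Tiny demo: 20 ordinary agents plus 4 influencers steering the flock east.
n, k = 20, 4
positions = rng.uniform(0, 10, size=(n + k, 2))
headings = rng.uniform(-np.pi, np.pi, size=n + k)
influencers = np.zeros(n + k, dtype=bool)
influencers[:k] = True
for _ in range(50):
    positions, headings = step(positions, headings, influencers, target=0.0)
print("mean flock heading:", circular_mean(headings[~influencers]))
```

Even this crude rule illustrates the dissertation's core lever: the influencing agents act only through their own behavior, subject to the same local-averaging dynamics that every other flock member follows.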
Decision shaping and strategy learning in multi-robot interactions
Recent developments in robot technology have contributed to the advancement of autonomous behaviours in human-robot systems; for example, in following instructions received from an interacting human partner. Nevertheless, increasingly many systems are moving towards more seamless forms of interaction, where factors such as implicit trust and persuasion between humans and robots are brought to the fore. In this context, the problem of attaining, through suitable computational models and algorithms, more complex strategic behaviours that can influence human decisions and actions during an interaction remains largely open. To address this issue, this thesis introduces the problem of decision shaping in strategic interactions between humans and robots, where a robot seeks to lead, without forcing, an interacting human partner to a particular state. Our approach to this problem is based on a combination of statistical modeling and synthesis of demonstrated behaviours, which enables robots to adapt efficiently to novel interacting agents. We focus primarily on interactions between autonomous and teleoperated (i.e. human-controlled) NAO humanoid robots, using the adversarial soccer penalty shooting game as an illustrative example. We begin by describing the various challenges that a robot operating in such complex interactive environments is likely to face. Then, we introduce a procedure through which composable strategy templates can be learned from human demonstrations of interactive behaviours. We subsequently present our primary contribution to the shaping problem: a Bayesian learning framework that empirically models and predicts the responses of an interacting agent, and computes action strategies that are likely to influence that agent towards a desired goal. We then address the related issue of factors affecting human decisions in these interactive strategic environments, such as the availability of perceptual information for the human operator. Finally, we describe an information processing algorithm, based on the Orient motion capture platform, which serves to facilitate direct (as opposed to teleoperation-mediated) strategic interactions between humans and robots. Our experiments introduce and evaluate a wide range of novel autonomous behaviours, where robots are shown to (learn to) influence a variety of interacting agents, ranging from simple autonomous agents to robots controlled by experienced human subjects. These results demonstrate the benefits of strategic reasoning in human-robot interaction, and constitute an important step towards realistic, practical applications in which robots are expected to be not just passive agents, but active, influencing participants.
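As a concrete, hedged sketch of the Bayesian response-modeling idea, the code below learns a Dirichlet-multinomial model of a goalkeeper's reactions in a penalty-shooting game and picks the shooter action with the highest expected scoring probability under the posterior predictive. The action set, payoff numbers, and simulated keeper are assumptions chosen for illustration, not the thesis's actual framework.

```python
import numpy as np

rng = np.random.default_rng(1)

ACTIONS = ["feint_left", "feint_right", "no_feint"]   # robot shooter's strategies
RESPONSES = ["dive_left", "dive_right", "stay"]       # keeper's observed reactions

# Assumed payoff table: probability the shot scores for each
# (shooter action, keeper response) pair. Purely illustrative numbers.
PAYOFF = np.array([
    [0.2, 0.9, 0.6],   # feint_left  (shot actually goes right)
    [0.9, 0.2, 0.6],   # feint_right (shot actually goes left)
    [0.5, 0.5, 0.3],   # no_feint
])

# Dirichlet(1, 1, 1) prior over the keeper's response to each action;
# counts[a, r] accumulates how often response r followed action a.
counts = np.ones((len(ACTIONS), len(RESPONSES)))

def choose_action():
    """Pick the action with the highest expected scoring probability
    under the posterior predictive response model (purely greedy)."""
    pred = counts / counts.sum(axis=1, keepdims=True)
    return int(np.argmax((pred * PAYOFF).sum(axis=1)))

def observe(action, response):
    """Update the response model after one penalty attempt."""
    counts[action, RESPONSES.index(response)] += 1

# Demo against a simulated keeper who tends to follow the feint.
KEEPER = {0: [0.7, 0.2, 0.1], 1: [0.2, 0.7, 0.1], 2: [0.4, 0.4, 0.2]}
for trial in range(100):
    a = choose_action()
    r = rng.choice(RESPONSES, p=KEEPER[a])
    observe(a, r)
print("learned policy:", ACTIONS[choose_action()])
```

A deployed shaping system would additionally trade exploration off against exploitation rather than acting purely greedily, and would condition the response model on richer interaction context than a single discrete action.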