Recent developments in robot technology have contributed to the advancement of autonomous
behaviours in human-robot systems; for example, in following instructions
received from an interacting human partner. Nevertheless, many systems
are increasingly moving towards more seamless forms of interaction, where factors such as implicit
trust and persuasion between humans and robots are brought to the fore. In this context,
the problem of attaining, through suitable computational models and algorithms,
more complex strategic behaviours that can influence human decisions and actions
during an interaction remains largely open. To address this issue, this thesis introduces
the problem of decision shaping in strategic interactions between humans and
robots, where a robot seeks to lead, without forcing, an interacting human
partner to a particular state. Our approach to this problem is based on a combination
of statistical modeling and synthesis of demonstrated behaviours, which enables
robots to efficiently adapt to novel interacting agents. We primarily focus on interactions
between autonomous and teleoperated (i.e. human-controlled) NAO humanoid
robots, using the adversarial soccer penalty shooting game as an illustrative example.
We begin by describing the various challenges that a robot operating in such complex
interactive environments is likely to face. Then, we introduce a procedure through
which composable strategy templates can be learned from provided human demonstrations
of interactive behaviours. We subsequently present our primary contribution
to the shaping problem, a Bayesian learning framework that empirically models and
predicts the responses of an interacting agent, and computes action strategies that are
likely to influence that agent towards a desired goal. We then address the related issue
of factors affecting human decisions in these interactive strategic environments,
such as the availability of perceptual information for the human operator. Finally, we
describe an information processing algorithm, based on the Orient motion capture platform,
which facilitates direct (as opposed to teleoperation-mediated) strategic
interactions between humans and robots. Our experiments introduce and evaluate a
wide range of novel autonomous behaviours, where robots are shown to (learn to) influence
a variety of interacting agents, ranging from other simple autonomous agents,
to robots controlled by experienced human subjects. These results demonstrate the
benefits of strategic reasoning in human-robot interaction, and constitute an important
step towards realistic, practical applications, where robots are expected to be not just
passive agents, but active, influencing participants.
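To make the Bayesian decision-shaping idea concrete, the sketch below shows a deliberately simplified version of the kind of framework described above: the robot keeps empirical counts of how an opponent responds to each of its actions, and selects the action whose posterior predictive probability of eliciting a desired response is highest. This is an illustrative toy, not the thesis's actual framework; the action and response names (`feint_left`, `dive_left`, etc.) are hypothetical stand-ins for moves in a penalty-shooting game.

```python
class BayesianResponseModel:
    """Empirical model of an opponent's responses to our actions.

    Maintains Dirichlet pseudo-counts per action and chooses the action
    most likely to steer the opponent towards a desired response.
    """

    def __init__(self, actions, responses, prior=1.0):
        # One symmetric-Dirichlet count vector per action.
        self.counts = {a: {r: prior for r in responses} for a in actions}

    def update(self, action, observed_response):
        """Record the opponent's observed response to our action."""
        self.counts[action][observed_response] += 1.0

    def predictive(self, action, response):
        """Posterior predictive probability P(response | action)."""
        total = sum(self.counts[action].values())
        return self.counts[action][response] / total

    def shape_towards(self, desired_response):
        """Pick the action most likely to elicit the desired response."""
        return max(self.counts,
                   key=lambda a: self.predictive(a, desired_response))


# Toy penalty-shooting interaction (names are illustrative only):
model = BayesianResponseModel(
    actions=["feint_left", "feint_right"],
    responses=["dive_left", "dive_right"],
)
for _ in range(8):
    model.update("feint_left", "dive_left")  # keeper tends to follow left feints
model.update("feint_right", "dive_left")

# Best action for drawing the keeper to the left, given the data so far:
print(model.shape_towards("dive_left"))  # -> feint_left
```

With a uniform prior of 1, eight observed `dive_left` responses to `feint_left` give a predictive probability of 9/10 versus 2/3 for `feint_right`, so the model selects `feint_left`. In the thesis setting the same update-and-select loop would run online, letting the robot adapt its strategy to each new interacting agent.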