HARPS: An Online POMDP Framework for Human-Assisted Robotic Planning and Sensing
Autonomous robots can benefit greatly from human-provided semantic
characterizations of uncertain task environments and states. However, the
development of integrated strategies which let robots model, communicate, and
act on such 'soft data' remains challenging. Here, the Human Assisted Robotic
Planning and Sensing (HARPS) framework is presented for active semantic sensing
and planning in human-robot teams to address these gaps by formally combining
the benefits of online sampling-based POMDP policies, multimodal semantic
interaction, and Bayesian data fusion. This approach lets humans
opportunistically impose model structure and extend the range of semantic soft
data in uncertain environments by sketching and labeling arbitrary landmarks
across the environment. Dynamic updating of the environment model during
search allows robotic agents to actively query humans for novel and relevant
semantic data, thereby improving beliefs of unknown environments and states for
improved online planning. Simulations of a UAV-enabled target search
application in a large-scale partially structured environment show significant
improvements in time to interception and in belief state estimates versus
conventional planning based solely on robotic sensing. Human subject
studies in the same environment (n = 36) demonstrate an average doubling in
dynamic target capture rate compared to the lone robot case, and highlight the
robustness of active probabilistic reasoning and semantic sensing over a range
of user characteristics and interaction modalities.
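The Bayesian fusion of semantic "soft data" described above can be sketched as a discrete belief update over a search grid. The grid size, landmark position, and distance-based likelihood model below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

GRID = 10
belief = np.full((GRID, GRID), 1.0 / GRID**2)  # uniform prior over cells

def semantic_likelihood(landmark, sigma=1.5):
    """P(human reports 'target near landmark' | target in cell): decays with distance."""
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    d2 = (xs - landmark[0])**2 + (ys - landmark[1])**2
    return np.exp(-d2 / (2 * sigma**2))

def fuse(belief, likelihood):
    """Bayesian data fusion: posterior is proportional to likelihood times prior."""
    post = likelihood * belief
    return post / post.sum()

# A human sketches a landmark at cell (x=3, y=7) and reports the target nearby.
belief = fuse(belief, semantic_likelihood(landmark=(3, 7)))
print(np.unravel_index(belief.argmax(), belief.shape))  # belief peaks near the landmark
```

In the full framework this update would be interleaved with robotic sensor updates and fed to the online POMDP planner; here it only illustrates how a single semantic observation reshapes the belief.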
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding the dynamics of symbol systems is crucially important both for
understanding human social interactions and for developing a robot that can
smoothly communicate with human users over the long term. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual,
haptic, and auditory information as well as acoustic speech signals, in a
fully unsupervised manner. Finally, we suggest future directions of research
in SER.
Comment: submitted to Advanced Robotics
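The multimodal categorization mentioned above can be sketched in miniature: objects are grouped using concatenated features from several modalities, so items that share both appearance and feel fall into one category. A plain k-means over the joint feature stands in for the richer Bayesian models the survey covers, and the synthetic features below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic object groups, each observed through two modalities.
visual = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
haptic = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
X = np.hstack([visual, haptic])  # joint multimodal feature vector per object

def kmeans(X, k=2, iters=20):
    centers = X[[0, -1]]  # deterministic init: one point from each extreme
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == c].mean(0) for c in range(k)])
    return labels

labels = kmeans(X)
# Objects 0-19 and 20-39 fall into two distinct emergent categories.
```

Clustering over the concatenated modalities, rather than each modality alone, is the essential idea: a category grounded in several sensory channels at once.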
Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach
We explore the probabilistic foundations of shared control in complex dynamic
environments. In order to do this, we formulate shared control as a random
process and describe the joint distribution that governs its behavior. For
tractability, we model the relationships between the operator, autonomy, and
crowd as an undirected graphical model. Further, we introduce an interaction
function between the operator and the robot that we call "agreeability"; in
combination with the methods developed in~\cite{trautman-ijrr-2015}, we extend
a cooperative collision avoidance autonomy to shared control. We therefore
quantify the notion of simultaneously optimizing over agreeability (between the
operator and autonomy), and safety and efficiency in crowded environments. We
show that for a particular form of interaction function between the autonomy
and the operator, linear blending is recovered exactly. Additionally, to
recover linear blending, unimodal restrictions must be placed on the models
describing the operator and the autonomy. In turn, these restrictions raise
questions about the flexibility and applicability of the linear blending
framework. We also present an extension of linear blending called
"operator biased linear trajectory blending" (which formalizes some recent
approaches in linear blending such as~\cite{dragan-ijrr-2013}) and show that
not only is this also a restrictive special case of our probabilistic approach,
but more importantly, is statistically unsound, and thus, mathematically,
unsuitable for implementation. Instead, we suggest a statistically principled
approach that guarantees data is used in a consistent manner, and show how this
alternative approach converges to the full probabilistic framework. We conclude
by proving that, in general, linear blending is suboptimal with respect to the
joint metric of agreeability, safety, and efficiency.
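The claim that unimodal restrictions recover linear blending can be sketched: if both the operator and the autonomy are modeled as Gaussians over the command u, fusing them by a product of densities yields a precision-weighted mean, which is exactly linear blending with an arbitration weight fixed by the relative precisions. The commands and precisions below are illustrative assumptions:

```python
import numpy as np

def gaussian_product_mean(u_h, P_h, u_r, P_r):
    """Mean of N(u_h, P_h^-1) * N(u_r, P_r^-1): a precision-weighted blend."""
    return np.linalg.solve(P_h + P_r, P_h @ u_h + P_r @ u_r)

u_h = np.array([1.0, 0.0])   # operator command
u_r = np.array([0.0, 1.0])   # autonomy command
P_h = 3.0 * np.eye(2)        # operator precision (inverse covariance)
P_r = 1.0 * np.eye(2)        # autonomy precision
u = gaussian_product_mean(u_h, P_h, u_r, P_r)

# With isotropic precisions this is exactly linear blending,
# u = alpha * u_h + (1 - alpha) * u_r with alpha = 3 / (3 + 1) = 0.75.
assert np.allclose(u, 0.75 * u_h + 0.25 * u_r)
```

If either density is multimodal (e.g. a crowd-avoidance posterior with several homotopy classes), no single alpha reproduces the product, which is the intuition behind the paper's argument that linear blending is a restrictive special case.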
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction, and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis both of recent as well as of future research on human-robot
communication. Then, the ten desiderata are examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
SERKET: An Architecture for Connecting Stochastic Models to Realize a Large-Scale Cognitive Model
To realize human-like robot intelligence, a large-scale cognitive
architecture is required for robots to understand the environment through a
variety of sensors with which they are equipped. In this paper, we propose a
novel framework named Serket that makes it easy to construct a large-scale
generative model and perform its inference by connecting sub-modules, allowing
robots to acquire various capabilities through interaction with their
environments and others. We consider that large-scale cognitive models can be
constructed by connecting smaller fundamental models hierarchically while
maintaining their programmatic independence. Moreover, connected modules are
dependent on each other, and parameters are required to be optimized as a
whole. Conventionally, the equations for parameter estimation have to be
derived and implemented for each model. However, deriving and implementing
them becomes harder as the model grows. To solve these problems, in
this paper, we propose a method for parameter estimation by communicating the
minimal parameters between various modules while maintaining their programmatic
independence. Therefore, Serket makes it easy to construct large-scale models
and estimate their parameters via the connection of modules. Experimental
results demonstrated that the model can be constructed by connecting modules,
the parameters can be optimized as a whole, and the resulting performance is
comparable with that of the original models we previously proposed.
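The module-connection idea can be sketched as follows: each module models one modality, stays programmatically independent, and exchanges only minimal parameters (here, soft category assignments), so the composed model is optimized as a whole without hand-deriving joint update equations. The module interface and the two toy modalities below are illustrative assumptions, not Serket's actual API:

```python
import numpy as np

class ObservationModule:
    """Clusters its own modality; communicates only soft assignments."""
    def __init__(self, data, k=2):
        self.data, self.k = data, k

    def update(self, incoming):
        # Fit per-category centers weighted by the neighbor's assignments,
        # then return assignments refined by this modality's own evidence.
        centers = (incoming.T @ self.data) / incoming.sum(0)[:, None]
        d2 = ((self.data[:, None] - centers) ** 2).sum(-1)
        post = np.exp(-d2) * incoming
        return post / post.sum(1, keepdims=True)

rng = np.random.default_rng(1)
# Two modalities observing the same 20 items, each with two clusters.
x1 = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(2, 0.2, (10, 2))])
x2 = np.vstack([rng.normal(5, 0.2, (10, 2)), rng.normal(8, 0.2, (10, 2))])
m1, m2 = ObservationModule(x1), ObservationModule(x2)

msg = np.full((20, 2), 0.5)
msg[0] = [0.9, 0.1]              # small asymmetry to break the tie
for _ in range(10):              # alternate messages between the modules
    msg = m2.update(m1.update(msg))
# Both modalities converge on the same two-category assignment.
```

Neither module sees the other's raw data or update equations; only the assignment message crosses the interface, which is the programmatic-independence property the abstract emphasizes.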