Flexible human-robot cooperation models for assisted shop-floor tasks
The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative
robots, i.e., robots able to work alongside and together with humans, could
bring to the whole production process. In this context, an enabling technology
not yet achieved is the design of flexible robots able to deal at all levels
with humans' intrinsic variability, which is not only necessary for a
comfortable working experience for the person but also a valuable capability
for efficiently dealing with unexpected events. In this paper, a sensing,
representation, planning and control architecture for flexible human-robot
cooperation, referred to as FlexHRC, is proposed. FlexHRC relies on wearable
sensors for human action recognition, AND/OR graphs for the representation of
and reasoning upon cooperation models, and a Task Priority framework to
decouple action planning from robot motion planning and control.
Comment: Submitted to Mechatronics (Elsevier).
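To make the representation concrete, below is a minimal sketch of how an AND/OR graph over cooperation states could be encoded and queried for feasible next actions. The class names, state names, and actions are illustrative assumptions, not the actual FlexHRC interfaces.

```python
# Minimal sketch of an AND/OR graph cooperation model (illustrative;
# names and structure are assumptions, not the FlexHRC implementation).

class Node:
    """A cooperation state, e.g. 'part_grasped' or 'assembled'."""
    def __init__(self, name):
        self.name = name
        self.solved = False

class HyperArc:
    """Connects a set of child nodes to a parent node.

    AND semantics: the parent is reachable only once *all* children of
    this hyper-arc are solved. Alternative hyper-arcs into the same
    parent encode OR semantics (different ways to reach the state).
    """
    def __init__(self, parent, children, action):
        self.parent = parent
        self.children = children
        self.action = action  # robot or human action that fires this arc

class AndOrGraph:
    def __init__(self, nodes, arcs):
        self.nodes = {n.name: n for n in nodes}
        self.arcs = arcs

    def feasible_actions(self):
        """Actions whose preconditions (child states) are all solved."""
        return [a.action for a in self.arcs
                if not a.parent.solved
                and all(c.solved for c in a.children)]

    def mark_solved(self, name):
        self.nodes[name].solved = True


# Tiny example: two alternative ways (OR) to reach 'assembled'.
s0 = Node("parts_on_table"); s0.solved = True
s1 = Node("part_grasped_by_robot")
s2 = Node("part_handed_by_human")
goal = Node("assembled")
graph = AndOrGraph(
    [s0, s1, s2, goal],
    [HyperArc(s1, [s0], "robot_grasp"),
     HyperArc(s2, [s0], "human_handover"),
     HyperArc(goal, [s1], "robot_assemble"),
     HyperArc(goal, [s2], "robot_assemble_from_hand")])
print(graph.feasible_actions())  # -> ['robot_grasp', 'human_handover']
```

Alternative hyper-arcs into the same parent node leave the human free to choose among cooperation paths, which is one way such graphs accommodate human variability.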
Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search
In search applications, autonomous unmanned vehicles must be able to
efficiently reacquire and localize mobile targets that can remain out of view
for long periods of time in large spaces. As such, all available information
sources must be actively leveraged -- including imprecise but readily available
semantic observations provided by humans. To achieve this, this work develops
and validates a novel collaborative human-machine sensing solution for dynamic
target search. Our approach uses continuous partially observable Markov
decision process (CPOMDP) planning to generate vehicle trajectories that
optimally exploit imperfect detection data from onboard sensors, as well as
semantic natural language observations that can be specifically requested from
human sensors. The key innovation is a scalable hierarchical Gaussian mixture
model formulation for efficiently solving CPOMDPs with semantic observations in
continuous dynamic state spaces. The approach is demonstrated and validated
with a real human-robot team engaged in dynamic indoor target search and
capture scenarios on a custom testbed.
Comment: Final version accepted and submitted to the 2018 FUSION Conference
(Cambridge, UK, July 2018).
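As a flavor of the belief-update machinery involved, the sketch below fuses a Gaussian-mixture target belief with a single human report. The paper models semantic natural-language observations within a hierarchical Gaussian mixture formulation; for brevity this sketch approximates the report by one Gaussian likelihood, which keeps the posterior in Gaussian-mixture form. All names and numbers are illustrative assumptions.

```python
# Sketch: fuse a 1-D Gaussian-mixture target belief with one human report
# ("target is near x = 2"), approximated here by a Gaussian likelihood.
import numpy as np

def fuse_gm_with_gaussian(weights, means, variances, z_mean, z_var):
    """Per-component conjugate update; returns the posterior mixture."""
    new_w, new_m, new_v = [], [], []
    for w, m, v in zip(weights, means, variances):
        # Posterior of N(m, v) under likelihood N(z_mean, z_var):
        k = v / (v + z_var)                     # Kalman-style gain
        new_m.append(m + k * (z_mean - m))
        new_v.append((1.0 - k) * v)
        # Component weight scales by the evidence of the observation.
        evidence = np.exp(-0.5 * (z_mean - m) ** 2 / (v + z_var)) \
                   / np.sqrt(2.0 * np.pi * (v + z_var))
        new_w.append(w * evidence)
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), new_m, new_v

# Prior: target near one of two rooms (x = 0 or x = 5), equally likely.
w, m, v = fuse_gm_with_gaussian([0.5, 0.5], [0.0, 5.0], [1.0, 1.0],
                                z_mean=2.0, z_var=0.5)
print(np.round(w, 3))  # -> [0.841 0.159]: mass shifts to the nearer mode
```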
Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation
Imitation learning is an effective approach for autonomous systems to acquire
control policies when an explicit reward function is unavailable, using
supervision provided as demonstrations from an expert, typically a human
operator. However, standard imitation learning methods assume that the agent
receives examples of observation-action tuples that could be provided, for
instance, to a supervised learning algorithm. This stands in contrast to how
humans and animals imitate: we observe another person performing some behavior
and then figure out which actions will realize that behavior, compensating for
changes in viewpoint, surroundings, object positions and types, and other
factors. We term this kind of imitation learning "imitation-from-observation,"
and propose an imitation learning method based on video prediction with context
translation and deep reinforcement learning. This lifts the assumption in
imitation learning that the demonstration should consist of observations in the
same environment configuration, and enables a variety of interesting
applications, including learning robotic skills that involve tool use simply by
observing videos of human tool use. Our experimental results show the
effectiveness of our approach in learning a wide range of real-world robotic
tasks modeled after common household chores from videos of a human
demonstrator, including sweeping, ladling almonds, and pushing objects, as
well as a number of tasks in simulation.
Comment: Accepted at ICRA 2018, Brisbane. YuXuan Liu and Abhishek Gupta had
equal contribution.
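A rough sketch of the reward such a method can hand to a deep reinforcement learning agent is shown below: the demonstration frame is translated into the learner's context, and the learner is penalized for deviating from it in feature and image space. The `encode` and `translate` callables stand in for the learned context translation model; the weights and interfaces are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of an imitation-from-observation reward built on context
# translation (interfaces and weights are illustrative assumptions).
import numpy as np

def imitation_reward(learner_obs, demo_obs, learner_context,
                     encode, translate, w_feat=1.0, w_img=0.1):
    """Per-timestep reward against one translated demonstration frame."""
    # Map the demonstration into the learner's viewpoint/surroundings.
    demo_in_ctx = translate(demo_obs, learner_context)
    # Penalize deviation in learned feature space and in raw image space.
    feat_err = np.linalg.norm(encode(learner_obs) - encode(demo_in_ctx))
    img_err = np.linalg.norm(learner_obs - demo_in_ctx)
    return -(w_feat * feat_err + w_img * img_err)

# Toy check with identity maps standing in for the learned networks.
rng = np.random.default_rng(0)
obs, demo = rng.random(64), rng.random(64)
r = imitation_reward(obs, demo, learner_context=None,
                     encode=lambda x: x, translate=lambda x, c: x)
print(r)  # more negative the further the learner strays from the demo
```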
Model of Competencies for Decomposition of Human Behavior: Application to Control System of Robots
Humans and machines have shared the same physical space for many years. If they are to share that space, we want robots to behave like human beings: this facilitates their social integration and their interaction with humans, and produces intelligent behavior. To achieve this goal, we need to understand how human behavior is generated, analyze the tasks carried out by our nervous system, and determine how they relate to one another. Then, and only then, can we implement these mechanisms in robots. In this study, we propose a model of competencies based on the human neuroregulatory system for the analysis and decomposition of behavior into functional modules. Using this model allows us to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement behavior in unfamiliar environments and its implementation in several simulated and real robots with different physical configurations and devices of a different nature. The main result of this study is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviors while taking the specific characteristics of robots into account.
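As a loose illustration of decomposing behavior into functional modules, the sketch below arranges competency modules in a fixed priority order, with reflex-like competencies preempting deliberate ones. Module names and the arbitration scheme are assumptions for illustration, not the paper's exact design.

```python
# Illustrative decomposition of movement behavior into competency
# modules (names and priority scheme are assumptions).

class Competency:
    """One functional module: decides if it applies and what it commands."""
    def __init__(self, name, applies, command):
        self.name = name
        self.applies = applies    # perception -> bool
        self.command = command    # perception -> (speed, turn_rate)

class CompetencyController:
    """Earlier (higher-priority) competencies preempt later ones, loosely
    mirroring how reflexive circuits preempt deliberate control."""
    def __init__(self, competencies):
        self.competencies = competencies

    def act(self, perception):
        for c in self.competencies:
            if c.applies(perception):
                return c.name, c.command(perception)
        return "idle", (0.0, 0.0)

controller = CompetencyController([
    Competency("avoid_obstacle",
               lambda p: p["front_dist"] < 0.3,
               lambda p: (0.0, 1.0)),              # stop and turn away
    Competency("go_to_goal",
               lambda p: p["goal_visible"],
               lambda p: (0.5, p["goal_bearing"])),
    Competency("explore",
               lambda p: True,
               lambda p: (0.3, 0.1)),              # default wander
])
print(controller.act({"front_dist": 1.2, "goal_visible": True,
                      "goal_bearing": -0.2}))
# -> ('go_to_goal', (0.5, -0.2))
```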