Efficient Model Learning for Human-Robot Collaborative Tasks
We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. These demonstrated sequences are also used by
the robot to learn a reward function representative of each type, by employing
an inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user that was not included in the training set, and can compute a policy
for the robot that will be aligned with the preferences of this new user and will
be robust to deviations of the human actions from prior demonstrations.
Finally, we validate the approach using data collected in human subject
experiments, and conduct proof-of-concept demonstrations in which a person
performs a collaborative task with a small industrial robot.
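As a minimal sketch of the modeling pipeline described above, the toy below clusters demonstrated action sequences into human "types" and then infers the type of a new user from their actions. The action-frequency features, plain k-means, and the simple Bayesian read-out (treating each cluster center as an action distribution) are our illustrative assumptions, not the paper's actual algorithms; all function names are ours.

```python
import numpy as np

def sequences_to_features(sequences, n_actions):
    """Represent each demonstrated action sequence as a normalized
    action-frequency vector (a crude stand-in for richer features)."""
    feats = np.zeros((len(sequences), n_actions))
    for i, seq in enumerate(sequences):
        for a in seq:
            feats[i, a] += 1.0
        feats[i] /= max(len(seq), 1)
    return feats

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: group users into k human 'types' without supervision."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each demonstration to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # recompute each center as the mean of its assigned demonstrations
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def infer_type(seq, centers, n_actions):
    """Infer a new user's type online: treat each cluster center as an
    action distribution and return a posterior over types (uniform prior)."""
    logp = np.zeros(len(centers))
    for j, c in enumerate(centers):
        p = np.clip(c, 1e-6, None)
        p = p / p.sum()
        logp[j] = sum(np.log(p[a]) for a in seq)
    w = np.exp(logp - logp.max())
    return w / w.sum()

# Toy demonstrations over 3 actions: two users prefer action 0, two prefer action 2.
demos = [[0, 0, 1, 0], [0, 0, 0], [2, 2, 1, 2], [2, 2, 2, 2]]
X = sequences_to_features(demos, n_actions=3)
labels, centers = kmeans(X, k=2)

# A new user who favors action 2 is assigned to the second group's type.
posterior = infer_type([2, 2, 2], centers, n_actions=3)
```

In the full framework the inferred type would enter a MOMDP as the partially observable state variable; here it is just a posterior over clusters.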
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range
of complex, compelling applications in both military and civilian fields, where
users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to help readers clarify the motivation and
methodology of the various ML algorithms, so as to invoke them for hitherto
unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Stochastic Inverse Reinforcement Learning
The goal of the inverse reinforcement learning (IRL) problem is to recover
the reward functions from expert demonstrations. However, like any ill-posed
inverse problem, the IRL problem suffers from the inherent defect that a policy
may be optimal for many reward functions, and expert demonstrations may be
optimal for many policies. In this work, we generalize the IRL problem to a
well-posed expectation optimization problem, stochastic inverse reinforcement
learning (SIRL), to recover the probability distribution over reward functions.
We adopt
the Monte Carlo expectation-maximization (MCEM) method to estimate the
parameters of the probability distribution, as the first solution to the SIRL
problem. The solution is succinct, robust, and transferable for a given
learning task, and can generate alternative solutions to the IRL problem.
Through our formulation, it is possible to observe intrinsic properties of the
IRL problem from a global viewpoint, and our approach achieves considerable
performance on the Objectworld benchmark.
Comment: 8+2 pages, 5 figures, Under Review
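A toy sketch of the MCEM idea described above, in a one-state "bandit" setting: fit a Gaussian distribution over reward vectors by sampling candidate rewards (Monte Carlo E-step) and reweighting them by the likelihood of the expert's actions under a softmax-optimal policy, then updating the Gaussian's parameters (M-step). The bandit setting, the softmax expert model, and all names here are our simplifying assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def demo_loglik(theta, actions):
    """Log-likelihood of the expert's actions under a softmax-optimal
    policy for the candidate reward vector theta (one reward per action)."""
    logp = theta - np.log(np.exp(theta).sum())
    return sum(logp[a] for a in actions)

def mcem_reward_posterior(actions, n_actions=3, iters=30, n_samples=200):
    """Monte Carlo EM: fit a Gaussian over reward vectors rather than a
    single point estimate (a toy stand-in for the SIRL idea)."""
    mu, sigma = np.zeros(n_actions), np.ones(n_actions)
    for _ in range(iters):
        # E-step: sample candidate rewards, weight each by demo likelihood
        thetas = mu + sigma * rng.standard_normal((n_samples, n_actions))
        logw = np.array([demo_loglik(t, actions) for t in thetas])
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # M-step: move the Gaussian toward high-likelihood reward vectors
        mu = (w[:, None] * thetas).sum(0)
        sigma = np.sqrt((w[:, None] * (thetas - mu) ** 2).sum(0)) + 1e-3
    return mu, sigma

# The expert mostly picks action 2, so the learned mean reward ranks it highest.
expert_actions = [2, 2, 2, 1, 2, 2]
mu, sigma = mcem_reward_posterior(expert_actions)
```

Keeping a distribution (mu, sigma) rather than a single reward vector is what distinguishes the SIRL formulation from point-estimate IRL: alternative plausible rewards can be drawn from the fitted distribution.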