Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search
In search applications, autonomous unmanned vehicles must be able to
efficiently reacquire and localize mobile targets that can remain out of view
for long periods of time in large spaces. As such, all available information
sources must be actively leveraged -- including imprecise but readily available
semantic observations provided by humans. To this end, this work develops
and validates a novel collaborative human-machine sensing solution for dynamic
target search. Our approach uses continuous partially observable Markov
decision process (CPOMDP) planning to generate vehicle trajectories that
optimally exploit imperfect detection data from onboard sensors, as well as
semantic natural language observations that can be specifically requested from
human sensors. The key innovation is a scalable hierarchical Gaussian mixture
model formulation for efficiently solving CPOMDPs with semantic observations in
continuous dynamic state spaces. The approach is demonstrated and validated
with a real human-robot team engaged in dynamic indoor target search and
capture scenarios on a custom testbed.
Comment: Final version accepted and submitted to 2018 FUSION Conference (Cambridge, UK, July 2018).
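The Bayesian fusion step at the heart of such a system can be sketched compactly. The following is a minimal, hypothetical illustration, assuming a two-component Gaussian mixture belief over target position and a soft "near landmark" semantic likelihood; the function names and the first-order weight update (evaluating the likelihood only at component means) are assumptions for illustration, and the paper's hierarchical GMM formulation is considerably richer:

```python
import numpy as np

def semantic_likelihood(x, center, radius):
    """P(obs = 'near landmark' | target at x): a smooth bump around a landmark."""
    d2 = np.sum((x - center) ** 2)
    return np.exp(-d2 / (2 * radius ** 2))

def update_gmm_belief(weights, means, covs, center, radius):
    """Reweight mixture components by the observation likelihood at each mean.
    This is a first-order approximation; an exact update integrates the
    likelihood over each component."""
    lik = np.array([semantic_likelihood(m, center, radius) for m in means])
    new_w = weights * lik
    return new_w / new_w.sum(), means, covs

# Two hypotheses about the target; a human reports "near the landmark at (5, 5)".
weights = np.array([0.5, 0.5])
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
w, _, _ = update_gmm_belief(weights, means, covs,
                            center=np.array([5.0, 5.0]), radius=1.0)
```

After the update, nearly all probability mass shifts to the component consistent with the human's report, which is what lets the planner direct the vehicle accordingly.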
Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids
Real-life control tasks involve matter in various forms---rigid or soft
bodies, liquids, gases---each with distinct physical behavior. This poses
challenges to traditional rigid-body physics engines. Particle-based simulators
have been developed to model the dynamics of these complex scenes; however,
relying on approximation techniques, their simulation often deviates from
real-world physics, especially in the long term. In this paper, we propose to
learn a particle-based simulator for complex control tasks. Combining learning
with particle-based systems brings two major benefits: first, the learned
simulator, like other particle-based systems, applies broadly to objects of
different materials; second, the particle-based representation provides a strong
inductive bias for learning: particles of the same type share the same dynamics.
This enables the model to quickly adapt to new environments of unknown
dynamics within a few observations. We demonstrate robots achieving complex
manipulation tasks using the learned simulator, such as manipulating fluids and
deformable foam, with experiments both in simulation and in the real world. Our
study helps lay the foundation for robot learning of dynamic scenes with
particle-based representations.
Comment: Accepted to ICLR 2019. Project Page: http://dpi.csail.mit.edu Video: https://www.youtube.com/watch?v=FrPpP7aW3L
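The inductive bias described above (particles of the same type share one dynamics function) can be made concrete with a toy step function. Everything here is a hypothetical stand-in: the per-type "dynamics" are hand-coded damping factors rather than learned networks, and the real system uses learned graph-network interactions between neighboring particles:

```python
import numpy as np

def step_particles(pos, vel, types, dt=0.01, gravity=np.array([0.0, -9.8])):
    """One simulator step: each material type gets its own dynamics,
    applied identically to every particle of that type."""
    damping = {0: 1.0, 1: 0.5}  # type 0: rigid-like; type 1: soft/fluid-like
    new_vel = np.empty_like(vel)
    for t in np.unique(types):
        mask = types == t
        new_vel[mask] = damping[t] * (vel[mask] + dt * gravity)
    return pos + dt * new_vel, new_vel

# Four particles in 2D: two rigid-like, two soft-like.
pos = np.zeros((4, 2))
vel = np.ones((4, 2))
types = np.array([0, 0, 1, 1])
new_pos, new_vel = step_particles(pos, vel, types)
```

Because the update is shared within each type, observing a few particles of a new object is enough to identify its type-level dynamics, which is the adaptation property the abstract highlights.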
Deep Visual Foresight for Planning Robot Motion
A key challenge in scaling up robot learning to many skills and environments
is removing the need for human supervision, so that robots can collect their
own data and improve their own performance without being limited by the cost of
requesting human feedback. Model-based reinforcement learning holds the promise
of enabling an agent to learn to predict the effects of its actions, which
could provide flexible predictive models for a wide range of tasks and
environments, without detailed human supervision. We develop a method for
combining deep action-conditioned video prediction models with model-predictive
control that uses entirely unlabeled training data. Our approach does not
require a calibrated camera, an instrumented training set-up, or precise
sensing and actuation. Our results show that our method enables a real robot to
perform nonprehensile manipulation -- pushing objects -- and can handle novel
objects not seen during training.
Comment: ICRA 2017. Supplementary video:
https://sites.google.com/site/robotforesight
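The model-predictive control loop described above can be sketched with random shooting: sample candidate action sequences, roll each through the learned predictor, and execute the first action of the best sequence. This is a hypothetical, simplified sketch; the `predict` stand-in below is a toy linear model, whereas the paper uses a learned action-conditioned video prediction model:

```python
import numpy as np

def predict(state, action):
    """Stand-in for a learned one-step predictive model."""
    return state + action

def mpc_plan(state, goal, horizon=5, n_samples=256, rng=None):
    """Sample action sequences, roll out the model, and return the first
    action of the sequence whose predicted final state is closest to the goal."""
    if rng is None:
        rng = np.random.default_rng(0)
    actions = rng.uniform(-1, 1, size=(n_samples, horizon, state.shape[0]))
    costs = np.empty(n_samples)
    for i in range(n_samples):
        s = state
        for a in actions[i]:
            s = predict(s, a)
        costs[i] = np.linalg.norm(s - goal)
    return actions[np.argmin(costs), 0]

# Plan a first action that pushes the (toy) state toward the goal.
a0 = mpc_plan(np.zeros(2), goal=np.array([2.0, 0.0]))
```

In a receding-horizon loop the robot would execute `a0`, observe the new state, and replan, which is how prediction errors are corrected online without any labeled data.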
A Hierarchical Planning Framework for AUV Mission Management in a Spatio-Temporal Varying Ocean
The purpose of this paper is to provide a hierarchical dynamic mission
planning framework for a single autonomous underwater vehicle (AUV) to
accomplish a task-assignment process in a limited time interval while operating
in an uncertain undersea environment, where the spatio-temporal variability of the
operating field is taken into account. To this end, a high level reactive
mission planner and a low level motion planning system are constructed. The
high level system is responsible for task priority assignment and guiding the
vehicle toward a target of interest considering on-time termination of the
mission. The lower layer is in charge of generating optimal trajectories based
on the sequence of tasks and the dynamics of the operating terrain. The mission planner
is able to reactively re-arrange the tasks based on mission/terrain updates
while the low level planner is capable of coping with unexpected changes in the
terrain by correcting the old path or generating a new trajectory. As a
result, the vehicle is able to undertake the maximum number of tasks with a
certain degree of maneuverability while maintaining situational awareness of the operating
field. The computational engine of this framework is based on the
biogeography-based optimization (BBO) algorithm, which is capable of providing
efficient solutions. To evaluate the performance of the proposed framework,
first, a realistic model of the undersea environment is constructed based on
real map data, and then several scenarios, treated as real experiments,
are designed through the simulation study. Additionally, to show the robustness
and reliability of the framework, a Monte-Carlo simulation is carried out and
statistical analysis is performed. The simulation results indicate the
significant potential of the two-level hierarchical mission planning system for
mission success and its applicability to real-time implementation.
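The BBO engine mentioned above works by letting good solutions ("habitats") emigrate their features to poor ones, with migration rates tied to fitness rank. The following is a compact, hypothetical sketch minimizing a toy cost; all rates and parameters are illustrative, and the paper's planner embeds BBO inside a much larger mission-management system:

```python
import numpy as np

def bbo_minimize(cost, dim=2, pop=20, iters=100, seed=0):
    """Toy biogeography-based optimization: fitter habitats emigrate features
    to less fit ones; occasional mutation maintains diversity."""
    rng = np.random.default_rng(seed)
    habitats = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(iters):
        order = np.argsort([cost(h) for h in habitats])
        habitats = habitats[order]                 # best habitat first
        mu = np.linspace(1.0, 0.0, pop)           # emigration: best emigrates most
        lam = 1.0 - mu                             # immigration: worst immigrates most
        new = habitats.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:
                    # Copy this feature from a habitat chosen by emigration rate.
                    j = rng.choice(pop, p=mu / mu.sum())
                    new[i, d] = habitats[j, d]
            if rng.random() < 0.1:                 # mutation
                new[i, rng.integers(dim)] += rng.normal(0, 0.5)
        habitats = new
    return min(habitats, key=cost)

best = bbo_minimize(lambda x: np.sum(x ** 2))
```

In the mission-planning setting, each habitat would instead encode a candidate task ordering or trajectory, and `cost` would score mission time, task rewards, and terrain constraints.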
Recovering from External Disturbances in Online Manipulation through State-Dependent Revertive Recovery Policies
Robots are increasingly entering uncertain and unstructured environments.
Within these, robots are bound to face unexpected external disturbances like
accidental human or tool collisions. Robots must develop the capacity to
respond to unexpected events: not only identifying the sudden anomaly, but
also deciding how to handle it. In this work, we contribute a recovery policy
that allows a robot to recover from various anomalous scenarios across
different tasks and conditions in a consistent and robust fashion. The system
organizes tasks as a sequence of nodes composed of internal modules such as
motion generation and introspection. When an introspection module flags an
anomaly, the recovery strategy is triggered and reverts the task execution by
selecting a target node as a function of a state-dependency chart. The new
skill allows the robot to overcome the effects of the external disturbance and
conclude the task. Our system recovers from accidental human and tool
collisions in a number of tasks. Of particular importance is the fact that we
test the robustness of the recovery system by triggering anomalies at each node
in the task graph showing robust recovery everywhere in the task. We also
trigger multiple and repeated anomalies at each of the nodes of the task
showing that the recovery system can consistently recover anywhere in the
presence of strong and pervasive anomalous conditions. Robust recovery systems
will be key enablers for long-term autonomy in robot systems. Supplemental info
including code, data, graphs, and result analysis can be found at [1].
Comment: 8 pages, 8 figures, 1 table
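The revertive mechanism described above can be sketched as a lookup over a state-dependency chart: when an anomaly fires at a node, execution reverts to the node that re-establishes the state the failed node depends on, then resumes forward. The node names and chart below are hypothetical examples, not the paper's actual task graphs:

```python
# A task as a sequence of nodes (hypothetical manipulation example).
TASK_NODES = ["approach", "align", "grasp", "lift", "place"]

# State-dependency chart: anomaly node -> node to revert to, chosen so the
# reverted-to node restores the preconditions of the failed node.
REVERT_TO = {
    "approach": "approach",  # start of the task: retry in place
    "align": "approach",
    "grasp": "align",
    "lift": "grasp",
    "place": "lift",
}

def recover_and_resume(anomaly_node):
    """Return the replanned node sequence after reverting."""
    target = REVERT_TO[anomaly_node]
    return TASK_NODES[TASK_NODES.index(target):]

# A collision during "lift" reverts to "grasp" and replays the remainder.
plan = recover_and_resume("lift")
```

Because every node appears as a key in the chart, an anomaly triggered anywhere in the task graph yields a valid revert target, which mirrors the everywhere-robust recovery the abstract reports.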