Adaptive Information Gathering via Imitation Learning
In the adaptive information gathering problem, a policy is required to select
an informative sensing location using the history of measurements acquired thus
far. While extensive prior work investigates effective practical
approximations using variants of Shannon's entropy, the efficacy of
such policies heavily depends on the geometric distribution of objects in the
world. On the other hand, the principled approach of employing online POMDP
solvers is rendered impractical by the need to explicitly sample online from a
posterior distribution of world maps.
We present a novel data-driven imitation learning framework to efficiently
train information gathering policies. The policy imitates a clairvoyant
oracle: an oracle that, at train time, has full knowledge of the world map and
can compute maximally informative sensing locations. We analyze the learnt policy
by showing that offline imitation of a clairvoyant oracle is implicitly
equivalent to online oracle execution in conjunction with posterior sampling.
This observation allows us to obtain powerful near-optimality guarantees for
information gathering problems possessing an adaptive submodularity property.
As demonstrated on a spectrum of 2D and 3D exploration problems, the trained
policies enjoy the best of both worlds: they adapt to different world map
distributions while remaining computationally inexpensive to evaluate.
Comment: Robotics: Science and Systems, 201
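The core training idea, imitating an oracle that sees the full map, can be sketched on a toy 1-D exploration problem. Everything below (the sensor footprint, the lookup-table policy, the function names) is an illustrative assumption, not the paper's actual implementation:

```python
import random

def oracle_action(world, revealed):
    # Clairvoyant oracle: with full knowledge of `world`, pick the sensing
    # location whose footprint (location +/- 1) uncovers the most hidden
    # occupied cells.
    def gain(loc):
        return sum(1 for i in range(max(0, loc - 1), min(len(world), loc + 2))
                   if i not in revealed and world[i] == 1)
    return max(range(len(world)), key=gain)

def train_policy(worlds, steps=3):
    # Imitation learning: the policy is a lookup table from measurement
    # history to the action the clairvoyant oracle would take.
    policy = {}
    for world in worlds:
        revealed = set()
        for _ in range(steps):
            history = frozenset((i, world[i]) for i in revealed)
            a = oracle_action(world, revealed)  # oracle label at train time
            policy[history] = a                 # imitate the oracle
            revealed.update(range(max(0, a - 1), min(len(world), a + 2)))
    return policy

random.seed(0)
worlds = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
policy = train_policy(worlds)
print(len(policy))
```

At test time the learned table is queried with the history alone; the oracle (and the world map it needs) is never consulted, which is what makes the policy cheap to evaluate online.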
Adaptive Informative Path Planning with Multimodal Sensing
Adaptive Informative Path Planning (AIPP) problems model an agent tasked with
obtaining information subject to resource constraints in unknown, partially
observable environments. Existing work on AIPP has focused on representing
observations about the world as a result of agent movement. We formulate the
more general setting where the agent may choose between different sensors at
the cost of some energy, in addition to traversing the environment to gather
information. We call this problem AIPPMS (MS for Multimodal Sensing). AIPPMS
requires reasoning jointly about the effects of sensing and movement in terms
of both energy expended and information gained. We frame AIPPMS as a Partially
Observable Markov Decision Process (POMDP) and solve it with online planning.
Our approach is based on the Partially Observable Monte Carlo Planning
framework with modifications to ensure constraint feasibility and a heuristic
rollout policy tailored for AIPPMS. We evaluate our method on two domains: a
simulated search-and-rescue scenario and a challenging extension to the classic
RockSample problem. We find that our approach outperforms a classic AIPP
algorithm that is modified for AIPPMS, as well as online planning using a
random rollout policy.
Comment: First two authors contributed equally; International Conference on
Automated Planning and Scheduling (ICAPS) 202
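The sense/move trade-off under an energy budget can be illustrated with a minimal Monte Carlo rollout, the basic estimator inside Monte Carlo planners such as POMCP. The action set, costs, and gains below are assumptions for a toy problem, not the paper's domains:

```python
import random

# Toy AIPPMS-style action set: action -> (energy cost, information gain).
# Moving is cheap but uninformative; the high-power sensor is the reverse.
ACTIONS = {"move": (1, 0.2), "sense": (3, 1.0)}

def rollout(budget, rng):
    # Random rollout: take affordable actions uniformly at random until the
    # energy budget is exhausted, accumulating information gained.
    total_info = 0.0
    while budget > 0:
        name = rng.choice([a for a, (c, _) in ACTIONS.items() if c <= budget])
        cost, gain = ACTIONS[name]
        budget -= cost
        total_info += gain
    return total_info

def estimate_value(action, budget, n=2000, seed=0):
    # Monte Carlo estimate of expected total information when `action` is
    # taken first and the remainder of the budget is spent randomly.
    rng = random.Random(seed)
    cost, gain = ACTIONS[action]
    return gain + sum(rollout(budget - cost, rng) for _ in range(n)) / n

for a in ACTIONS:
    print(a, round(estimate_value(a, budget=10), 2))
```

The paper's contribution is precisely to replace the random rollout above with a heuristic rollout tailored to AIPPMS and to enforce constraint feasibility during search, which this sketch does only crudely via the affordability filter.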
ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture
The accuracy of monocular 3D human pose estimation depends on the viewpoint
from which the image is captured. While freely moving cameras, such as on
drones, provide control over this viewpoint, automatically positioning them at
the location which will yield the highest accuracy remains an open problem.
This is the problem that we address in this paper. Specifically, given a short
video sequence, we introduce an algorithm that predicts which viewpoints should
be chosen to capture future frames so as to maximize 3D human pose estimation
accuracy. The key idea underlying our approach is a method to estimate the
uncertainty of the 3D body pose estimates. We integrate several sources of
uncertainty, originating from deep learning based regressors and temporal
smoothness. Our motion planner yields improved 3D body pose estimates and
outperforms or matches existing planners based on person following and
orbiting.
Comment: For associated video, see https://youtu.be/i58Bu-hbZHs Published in
CVPR 202
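The selection step, choosing the next viewpoint by minimizing predicted pose uncertainty, can be sketched as follows. The uncertainty model here (depth along the current viewing axis is least constrained, so more orthogonal views triangulate better) is a geometric stand-in, not the paper's learned uncertainty estimator:

```python
import math

def predicted_uncertainty(candidate_deg, current_deg):
    # Illustrative model: a candidate viewpoint close to the current viewing
    # angle keeps the depth ambiguity; a more orthogonal view resolves it.
    separation = math.radians(abs(candidate_deg - current_deg) % 180)
    return 1.0 / (0.1 + math.sin(separation))

def best_viewpoint(candidates_deg, current_deg):
    # Pick the candidate viewpoint with the lowest predicted uncertainty.
    return min(candidates_deg, key=lambda c: predicted_uncertainty(c, current_deg))

print(best_viewpoint([0, 30, 60, 90, 120], current_deg=0))
```

With this toy model the planner prefers the view 90 degrees off the current axis; the paper instead fuses several uncertainty sources (regressor confidence, temporal smoothness) into the score being minimized.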