To navigate an environment safely and autonomously, robots must accurately
estimate where obstacles are and how they move. Instead of using expensive
traditional 3D sensors, we explore the use of a much cheaper, faster, and
higher-resolution alternative: programmable light curtains. Light curtains are
a controllable depth sensor that senses only along a surface that the user
selects. We adapt a probabilistic method based on particle filters and
occupancy grids to explicitly estimate the position and velocity of 3D points
in the scene using partial measurements made by light curtains.
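To make this concrete, below is a minimal sketch of such a particle filter,
assuming a constant-velocity motion model and a Gaussian measurement
likelihood; the function names and parameters are illustrative assumptions,
not the implementation used in the paper. The property specific to light
curtains is that each update reweights only the particles lying near the
surface the curtain actually imaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(positions, velocities, dt, noise_std=0.05):
    """Constant-velocity motion model with Gaussian process noise."""
    positions = positions + velocities * dt
    velocities = velocities + rng.normal(0.0, noise_std, velocities.shape)
    return positions, velocities

def update(positions, weights, curtain_points, hits, sigma=0.1):
    """Reweight only the particles near the surface the curtain imaged.

    curtain_points: (M, 3) points sampled on the placed curtain.
    hits:           (M,)  booleans, True where the curtain saw an object.
    """
    for point, hit in zip(curtain_points, hits):
        dist = np.linalg.norm(positions - point, axis=1)
        near = dist < 3 * sigma                      # imaged particles only
        lik = np.exp(-0.5 * (dist[near] / sigma) ** 2)
        weights[near] *= lik if hit else (1.0 - lik)
    return weights / weights.sum()

def resample(positions, velocities, weights):
    """Multinomial resampling to counter particle degeneracy."""
    n = len(weights)
    idx = rng.choice(n, size=n, p=weights)
    return positions[idx], velocities[idx], np.full(n, 1.0 / n)
```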
The central challenge is to decide where to place the light curtain to perform
this task accurately. We propose multiple curtain placement strategies guided
by maximizing information gain and by verifying predicted object locations. We
then combine these strategies using an online learning framework.
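As one concrete instance of the information-gain strategy: a grid cell with
occupancy probability near 0.5 has maximal binary entropy, so placing the
curtain through the most uncertain cells maximizes the expected reduction in
uncertainty under an independent-cell assumption. The sketch below greedily
picks one depth per camera ray and ignores the physical feasibility
constraints a real curtain device imposes; the grid layout is an assumed
simplification.

```python
import numpy as np

def binary_entropy(p, eps=1e-9):
    """Entropy (in bits) of a Bernoulli occupancy probability."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def max_entropy_curtain(occupancy, depths):
    """Pick, for every camera ray, the candidate depth whose grid cell is
    most uncertain, yielding one depth per ray as the curtain profile.

    occupancy: (n_rays, n_depths) occupancy probabilities.
    depths:    (n_depths,) candidate depths along each ray.
    """
    cell_entropy = binary_entropy(occupancy)        # per-cell uncertainty
    return depths[np.argmax(cell_entropy, axis=1)]  # greedy per-ray choice
```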
We propose a novel self-supervised reward function that evaluates the accuracy
of current velocity estimates using future light curtain placements. We use a
multi-armed bandit framework to intelligently switch between placement
policies in real time, outperforming fixed policies.
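The combination step can be illustrated with an Exp3-style bandit whose arms
are the placement policies; the algorithm choice, class name, and hit-fraction
reward below are assumptions for illustration, not necessarily the paper's
exact formulation. The self-supervised reward follows the idea above: forecast
object locations from the current velocity estimates, place a future curtain
there, and score the policy by how often the forecast is confirmed.

```python
import numpy as np

def self_supervised_reward(hits):
    """Fraction of forecasted object locations confirmed by a later curtain."""
    return float(np.mean(hits)) if len(hits) else 0.0

class Exp3Switcher:
    """Exp3 bandit over curtain-placement policies (illustrative sketch)."""

    def __init__(self, n_arms, gamma=0.1, seed=0):
        self.weights = np.ones(n_arms)
        self.gamma = gamma
        self.rng = np.random.default_rng(seed)

    def _probs(self):
        p = (1.0 - self.gamma) * self.weights / self.weights.sum()
        return p + self.gamma / len(self.weights)

    def select(self):
        """Sample which placement policy to run at this timestep."""
        self.arm = self.rng.choice(len(self.weights), p=self._probs())
        return self.arm

    def update(self, reward):
        """Importance-weighted exponential update for the chosen arm."""
        est = reward / self._probs()[self.arm]      # unbiased reward estimate
        self.weights[self.arm] *= np.exp(self.gamma * est / len(self.weights))
```

Because the verification curtain is placed at a later timestep, the reward for
each bandit update naturally arrives with a one-step delay.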
We develop a full-stack navigation system that uses position and velocity
estimates from light curtains for downstream tasks such as localization,
mapping, path-planning, and obstacle avoidance. This work paves the way for
controllable light curtains to accurately, efficiently, and purposefully
perceive and navigate complex and dynamic environments.

Project
website: https://siddancha.github.io/projects/active-velocity-estimation/