Dynamic bipedal walking on discrete terrain, like stepping stones, is a
challenging problem requiring feedback controllers to enforce safety-critical
constraints. To enforce such constraints in real-world experiments, fast and
accurate perception for foothold detection and estimation is needed. In this
work, a deep visual perception model is designed to accurately estimate the
length of the next step, which serves as input to the feedback controller to
enable vision-in-the-loop dynamic walking on discrete terrain. In particular, a
custom convolutional neural network architecture is designed and trained to
predict step length to the next foothold using a sampled image preview of the
upcoming terrain at foot impact. The visual input is provided only at the
beginning of each step and is shown to be sufficient for the task of dynamically
stepping onto discrete footholds. Through extensive numerical studies, we show
that the robot can walk autonomously for over 100 steps
without failure on a discrete terrain with footholds randomly positioned within
a step length range of 45-85 centimeters.

Comment: Presented at Humanoids 201
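The abstract does not specify the network's layers or input resolution. As a purely illustrative sketch of the idea, the following minimal NumPy example regresses a scalar step length from a single-channel terrain image with one convolution, a ReLU, and a linear head, clipping the output to the 45-85 cm foothold range reported above. The image size, kernel size, and (untrained, random) weights are all hypothetical, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive single-channel 'valid' 2-D cross-correlation."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_step_length(img, kernel, w, b):
    """Tiny conv + ReLU + linear head; output clipped to the
    45-85 cm step-length range (in metres)."""
    feat = np.maximum(conv2d_valid(img, kernel), 0.0)  # ReLU activation
    raw = float(feat.ravel() @ w + b)                  # linear regressor
    return float(np.clip(raw, 0.45, 0.85))

# Hypothetical 32x32 terrain preview and untrained random weights.
img = rng.random((32, 32))
kernel = 0.1 * rng.standard_normal((5, 5))
w = 0.01 * rng.standard_normal(28 * 28)   # 28x28 feature map flattened
b = 0.65                                  # bias near the range midpoint
print(predict_step_length(img, kernel, w, b))
```

In the actual system the predicted step length would be passed to the feedback controller once per step, at foot impact, rather than at every control tick.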