Fast and Continuous Foothold Adaptation for Dynamic Locomotion through CNNs
Legged robots can outperform wheeled machines for most navigation tasks
across unknown and rough terrains. For such tasks, visual feedback is a
fundamental asset to provide robots with terrain-awareness. However, robust
dynamic locomotion on difficult terrains with real-time performance guarantees
remains a challenge. We present here a real-time, dynamic foothold adaptation
strategy based on visual feedback. Our method adjusts the landing position of
the feet in a fully reactive manner, using only on-board computers and sensors.
The correction is computed and executed continuously along the swing phase
trajectory of each leg. To efficiently adapt the landing position, we implement
a self-supervised foothold classifier based on a Convolutional Neural Network
(CNN). Our method computes corrections up to 200 times faster than the
full-blown heuristic evaluation. Our goal is to react to visual stimuli from the
environment, bridging the gap between blind reactive locomotion and purely
vision-based planning strategies. We assess the performance of our method on
the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds
up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe
foothold adaptation is clearly demonstrated by the overall robot behavior.
Comment: 9 pages, 11 figures. Accepted to RA-L + ICRA 2019, January 2019.
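As a rough illustration of the self-supervised foothold classifier described above, the sketch below scores a set of candidate foothold corrections from a local heightmap patch with a small CNN; the patch size, layer widths, and number of candidates are illustrative assumptions, since the abstract does not specify the network.

```python
# A minimal sketch (not the authors' code) of a CNN foothold classifier:
# it scores candidate landing positions on a local heightmap patch so the
# swing-leg target can be corrected at high rate. All sizes are assumptions.
import torch
import torch.nn as nn

class FootholdCNN(nn.Module):
    def __init__(self, num_candidates: int = 9):
        super().__init__()
        # Input: 1-channel heightmap patch around the nominal foothold.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Output: one score per discrete foothold-correction candidate.
        self.head = nn.Linear(32 * 8 * 8, num_candidates)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        x = self.features(patch)        # (B, 32, 8, 8) for a 32x32 input
        return self.head(x.flatten(1))  # logits over candidate corrections

# Usage: pick the safest correction for a 32x32 patch.
net = FootholdCNN()
patch = torch.randn(1, 1, 32, 32)      # stand-in for a real heightmap patch
best = net(patch).argmax(dim=1)        # index of the chosen foothold
```

A classifier of this kind replaces the per-candidate heuristic evaluation with a single forward pass, which is where a large constant-factor speedup such as the reported 200x can come from.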
Aggressive Quadrotor Flight through Narrow Gaps with Onboard Sensing and Computing using Active Vision
We address one of the main challenges toward autonomous quadrotor flight in
complex environments: flight through narrow gaps. While previous works
relied on off-board localization systems or on accurate prior knowledge of the
gap position and orientation, we rely solely on onboard sensing and computing
and estimate the full state by fusing gap detection from a single onboard
camera with an IMU. This problem is challenging for two reasons: (i) the
quadrotor pose uncertainty with respect to the gap increases quadratically with
the distance from the gap; (ii) the quadrotor has to actively control its
orientation towards the gap to enable state estimation (i.e., active vision).
We solve this problem by generating a trajectory that considers geometric,
dynamic, and perception constraints: during the approach maneuver, the
quadrotor always faces the gap to allow state estimation, while respecting the
vehicle dynamics; during the traverse through the gap, the distance of the
quadrotor to the edges of the gap is maximized. Furthermore, we replan the
trajectory during its execution to cope with the varying uncertainty of the
state estimate. We successfully evaluate and demonstrate the proposed approach
in many real experiments. To the best of our knowledge, this is the first work
that addresses and achieves autonomous, aggressive flight through narrow gaps
using only onboard sensing and computing and without prior knowledge of the
pose of the gap.
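A minimal sketch of the active-vision idea above: keep the camera pointed at the estimated gap center while replanning as the estimate improves. The function name, gap estimate, and sample positions below are hypothetical; only the facing-the-gap constraint and in-flight replanning come from the abstract.

```python
# Sketch (assumptions, not the paper's implementation) of the perception
# constraint: during the approach, the commanded yaw keeps the onboard
# camera pointed at the estimated gap so detections keep feeding the filter.
import numpy as np

def yaw_facing_gap(p_quad: np.ndarray, p_gap: np.ndarray) -> float:
    """World-frame yaw that points the camera's optical axis at the gap."""
    d = p_gap - p_quad
    return float(np.arctan2(d[1], d[0]))

# Replanning loop: the trajectory is recomputed at each step because the
# pose uncertainty w.r.t. the gap shrinks as the quadrotor gets closer.
p_gap_est = np.array([5.0, 0.0, 1.5])  # fused gap estimate (illustrative)
for p_quad in (np.array([0.0, 1.0, 1.5]), np.array([2.5, 0.5, 1.5])):
    psi_cmd = yaw_facing_gap(p_quad, p_gap_est)  # fed to the controller
```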
PAMPC: Perception-Aware Model Predictive Control for Quadrotors
We present the first perception-aware model predictive control framework for
quadrotors that unifies control and planning with respect to action and
perception objectives. Our framework leverages numerical optimization to
compute trajectories that satisfy the system dynamics and require control
inputs within the limits of the platform. Simultaneously, it optimizes
perception objectives for robust and reliable sensing by maximizing the
visibility of a point of interest and minimizing its velocity in the image
plane. Considering both perception and action objectives for motion planning
and control is challenging due to the possible conflicts arising from their
respective requirements. For example, for a quadrotor to track a reference
trajectory, it needs to rotate to align its thrust with the direction of the
desired acceleration. However, the perception objective might require
minimizing such rotation to maximize the visibility of a point of interest. A
model-based optimization framework, able to consider both perception and action
objectives and couple them through the system dynamics, is therefore necessary.
Our perception-aware model predictive control framework works in a
receding-horizon fashion by iteratively solving a non-linear optimization
problem. It is capable of running in real-time, fully onboard our lightweight,
small-scale quadrotor using a low-power ARM computer, together with a
visual-inertial odometry pipeline. We validate our approach in experiments
demonstrating (I) the contradiction between perception and action objectives,
and (II) improved behavior in extremely challenging lighting conditions.
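To make the perception objective concrete, the sketch below shows one plausible form of such a cost: penalize the point of interest's offset from the image center (visibility) and its velocity in the image plane. The weights and the discrete velocity approximation are assumptions, not the exact PAMPC formulation.

```python
# Illustrative perception cost (not PAMPC verbatim): keep the point of
# interest near the image center and slow in the image plane. w_pos and
# w_vel are assumed weights; PAMPC couples such a term with dynamics and
# input costs inside a receding-horizon nonlinear program.
import numpy as np

def perception_cost(u_px, u_px_prev, dt, w_pos=1.0, w_vel=0.1):
    """u_px: pixel position of the point of interest; image center is (0, 0)."""
    u = np.asarray(u_px, dtype=float)
    u_dot = (u - np.asarray(u_px_prev, dtype=float)) / dt  # image-plane velocity
    return w_pos * (u @ u) + w_vel * (u_dot @ u_dot)

cost = perception_cost([12.0, -4.0], [10.0, -3.0], dt=0.02)
```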
Deep Drone Racing: From Simulation to Reality with Domain Randomization
Dynamically changing environments, unreliable state estimation, and operation
under severe resource constraints are fundamental challenges that limit the
deployment of small autonomous drones. We address these challenges in the
context of autonomous, vision-based drone racing in dynamic environments. A
racing drone must traverse a track with possibly moving gates at high speed. We
enable this functionality by combining the performance of a state-of-the-art
planning and control system with the perceptual awareness of a convolutional
neural network (CNN). The resulting modular system is both platform- and
domain-independent: it is trained in simulation and deployed on a physical
quadrotor without any fine-tuning. The abundance of simulated data, generated
via domain randomization, makes our system robust to changes of illumination
and gate appearance. To the best of our knowledge, our approach is the first to
demonstrate zero-shot sim-to-real transfer on the task of agile drone flight.
We extensively test the precision and robustness of our system, both in
simulation and on a physical platform, and show significant improvements over
the state of the art.
Comment: Accepted as a Regular Paper to the IEEE Transactions on Robotics
Journal. arXiv admin note: substantial text overlap with arXiv:1806.0854
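A minimal sketch of domain randomization as described above: each simulated training sample draws random illumination and gate appearance so the perception CNN does not overfit to a single rendering. The parameter names and ranges below are illustrative, not the paper's settings.

```python
# Sketch of domain randomization for simulated training data: every image
# for the gate-perception CNN is rendered with a fresh random draw of
# lighting and gate appearance. Ranges and choices are assumptions.
import random

def sample_randomized_scene():
    """Draw one randomized rendering configuration for a training image."""
    return {
        "light_intensity": random.uniform(0.2, 2.0),    # vary illumination
        "light_azimuth_deg": random.uniform(0.0, 360.0),
        "gate_hue": random.uniform(0.0, 1.0),           # vary gate appearance
        "gate_texture": random.choice(["plain", "striped", "noisy"]),
        "background": random.choice(["hall", "forest", "warehouse"]),
    }

# The abstract's zero-shot transfer claim rests on this breadth: the real
# scene looks like just another sample from the randomized distribution.
scene = sample_randomized_scene()
```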