5,527 research outputs found
UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether
This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle
(UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a
flying sensor but also as a tether attachment device. The two robots are connected
with a tether, allowing the UAV to anchor the tether to a structure located at
the top of steep terrain that is impossible for the UGV to reach. This enhances
the UGV's poor traversability not only by providing a wider range of scanning
and mapping from the air, but also by allowing the UGV to climb steep terrain
by winding the tether. In addition, we present an autonomous framework
for the collaborative navigation and tether attachment in an unknown
environment. The UAV employs visual inertial navigation with 3D voxel mapping
and obstacle avoidance planning. The UGV makes use of the voxel map and
generates an elevation map to execute path planning based on a traversability
analysis. Furthermore, we compared the pros and cons of possible tether-anchoring
methods from multiple points of view, and evaluated the anchoring strategy
experimentally to increase the probability of successful anchoring. Finally, the
feasibility and capability of our proposed system were
demonstrated by an autonomous mission experiment in the field with an obstacle
and a cliff.
Comment: 7 pages, 8 figures, accepted to 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
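As a rough illustration of the kind of elevation-map traversability analysis the abstract mentions (not the authors' code), the sketch below grades grid cells by local slope; the grid resolution and slope limit are illustrative assumptions.

```python
# Minimal sketch, assuming a simple slope-threshold traversability check on an
# elevation grid. Cell size and maximum slope are made-up illustrative values.
import numpy as np

def traversability_map(elevation, cell_size=0.1, max_slope_deg=25.0):
    """Return a boolean grid: True where the local slope is within the limit."""
    dz_dy, dz_dx = np.gradient(elevation, cell_size)   # per-cell height gradients
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg <= max_slope_deg

# Example: a synthetic map with a sharp step that a tether-assisted climb would target.
elev = np.zeros((20, 20))
elev[:, 10:] = 1.0                                      # 1 m step across the map
print(traversability_map(elev).sum(), "of", elev.size, "cells traversable")
```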
Towards an Autonomous Walking Robot for Planetary Surfaces
In this paper, recent progress in the development of
the DLR Crawler - a six-legged, actively compliant walking
robot prototype - is presented. The robot implements
a walking layer with a simple tripod and a more complex
biologically inspired gait. Using a variety of proprioceptive
sensors, different reflexes for reactively crossing obstacles
within the walking height are realised. On top of
the walking layer, a navigation layer provides the ability
to autonomously navigate to a predefined goal point in
unknown rough terrain using a stereo camera. A model
of the environment is created, the terrain traversability is
estimated and an optimal path is planned. The difficulty
of the path can be influenced by behavioral parameters.
Motion commands are sent to the walking layer and the
gait pattern is switched according to the estimated terrain
difficulty. The interaction between the walking layer and the navigation
layer was tested in different experimental setups.
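A minimal sketch of the gait-switching idea described above, not the DLR Crawler's actual interface: the navigation layer's terrain-difficulty estimate selects the gait and scales the motion command. The gait names and threshold are assumptions.

```python
# Illustrative sketch: pick a gait from a normalized terrain-difficulty score.
from dataclasses import dataclass

@dataclass
class MotionCommand:
    vx: float          # forward velocity [m/s]
    omega: float       # turn rate [rad/s]
    gait: str          # selected gait pattern

def select_gait(terrain_difficulty: float, hard_threshold: float = 0.6) -> str:
    """Assumed rule: use the slower, biologically inspired gait on difficult terrain."""
    return "biologically_inspired" if terrain_difficulty > hard_threshold else "tripod"

def command_for(segment_difficulty: float, vx: float, omega: float) -> MotionCommand:
    gait = select_gait(segment_difficulty)
    # Slow down on difficult terrain so reflexes have time to handle obstacles.
    scale = 0.5 if gait == "biologically_inspired" else 1.0
    return MotionCommand(vx * scale, omega, gait)

print(command_for(0.8, vx=0.05, omega=0.0))
```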
Efficient Autonomous Navigation for Planetary Rovers with Limited Resources
Rovers operating on Mars need ever more autonomous features to fulfill their
challenging mission requirements. However, the inherent constraints of space systems make
the implementation of complex algorithms an expensive and difficult task. In this paper
we propose a control architecture for autonomous navigation. Efficient implementations of
autonomous features are built on top of the current ExoMars navigation method, enhancing
the safety and traversing capabilities of the rover. These features allow the rover to detect
and avoid hazards and perform long traverses by following a roughly safe path planned by
operators on ground. The control architecture implementing the proposed navigation mode
has been tested during a field test campaign on a planetary analogue terrain. The experiments
validated the proposed approach: the rover autonomously completed two long traverses
while avoiding hazards. The approach relies only on the optical Localization Cameras
stereobench, a sensor found on all rovers launched so far, and potentially allows for
computationally inexpensive long-range autonomous navigation in terrain of medium
difficulty.
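A hedged sketch of the navigation mode described above, not the ExoMars implementation: the rover follows a rough path from ground operators and deviates locally around hazards detected in a stereo-derived hazard grid. The grid layout, clearance search, and offsets are illustrative assumptions.

```python
# Minimal sketch: shift operator-planned waypoints sideways to clear local hazards.
import numpy as np

def is_hazardous(hazard_grid, cell):
    r, c = cell
    return hazard_grid[r, c] > 0

def adjust_waypoint(hazard_grid, waypoint, max_offset=3):
    """Try lateral offsets (in grid cells) until the waypoint clears local hazards."""
    r, c = waypoint
    for off in range(max_offset + 1):
        for candidate in ((r, c - off), (r, c + off)):
            rr, cc = candidate
            if 0 <= cc < hazard_grid.shape[1] and not is_hazardous(hazard_grid, candidate):
                return candidate
    return None  # no safe adjustment found; stop and wait for ground operators

hazards = np.zeros((10, 10), dtype=int)
hazards[5, 4:7] = 1                          # a rock field crossing the planned path
rough_path = [(r, 5) for r in range(10)]     # operator-planned straight traverse
print([adjust_waypoint(hazards, wp) for wp in rough_path])
```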
Coverage Path Planning with Real-time Replanning and Surface Reconstruction for Inspection of Three-dimensional Underwater Structures using Autonomous Underwater Vehicles
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/113717/1/rob21554.pd
Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain
In this paper we tackle the problem of visually predicting surface friction
for environments with diverse surfaces, and integrating this knowledge into
biped robot locomotion planning. The problem is essential for autonomous robot
locomotion since diverse surfaces with varying friction abound in the real
world, from wood to ceramic tiles, grass or ice, which may cause difficulties
or huge energy costs for robot locomotion if not considered. We propose to
estimate friction and its uncertainty from visual estimation of material
classes using convolutional neural networks, together with probability
distribution functions of friction associated with each material. We then
robustly integrate the friction predictions into a hierarchical (footstep and
full-body) planning method using chance constraints, and optimize the same
trajectory costs at both levels of the planning method for consistency. Our
solution achieves fully autonomous perception and locomotion on slippery
terrain, which considers not only friction and its uncertainty, but also
collision, stability and trajectory cost. We show promising friction prediction
results on real images of outdoor scenarios, and present planning experiments on a
real robot facing surfaces with different friction.
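As a hedged sketch of the idea described above (not the paper's code), the snippet below combines a CNN's material-class probabilities with a per-material friction distribution to produce a friction estimate and its uncertainty; the material set and Gaussian parameters are made-up illustrative values.

```python
# Moment-match the mixture of per-material friction Gaussians weighted by CNN probabilities.
import math

# Assumed per-material friction coefficient distributions: (mean, std).
FRICTION_PRIORS = {
    "wood":    (0.45, 0.05),
    "ceramic": (0.35, 0.08),
    "grass":   (0.40, 0.10),
    "ice":     (0.10, 0.03),
}

def friction_estimate(class_probs):
    """Return mixture mean and standard deviation of the friction coefficient."""
    mean = sum(p * FRICTION_PRIORS[m][0] for m, p in class_probs.items())
    second_moment = sum(
        p * (FRICTION_PRIORS[m][1] ** 2 + FRICTION_PRIORS[m][0] ** 2)
        for m, p in class_probs.items()
    )
    std = math.sqrt(max(second_moment - mean ** 2, 0.0))
    return mean, std

# Example: the CNN is unsure whether a tile is ceramic or ice.
mu, sigma = friction_estimate({"ceramic": 0.7, "ice": 0.3})
print(f"friction ~ {mu:.2f} +/- {sigma:.2f}")  # would feed a chance constraint in the planner
```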
Flexible Supervised Autonomy for Exploration in Subterranean Environments
While the capabilities of autonomous systems have been steadily improving in
recent years, these systems still struggle to rapidly explore previously
unknown environments without the aid of GPS-assisted navigation. The DARPA
Subterranean (SubT) Challenge aimed to fast track the development of autonomous
exploration systems by evaluating their performance in real-world underground
search-and-rescue scenarios. Subterranean environments present a plethora of
challenges for robotic systems, such as limited communications, complex
topology, visually-degraded sensing, and harsh terrain. The presented solution
enables long-term autonomy with minimal human supervision by combining a
powerful and independent single-agent autonomy stack, with higher level mission
management operating over a flexible mesh network. The autonomy suite deployed
on quadruped and wheeled robots was fully independent, freeing the human
supervisor to loosely oversee the mission and make high-impact strategic
decisions. We also discuss lessons learned from fielding our system at the SubT
Final Event, relating to vehicle versatility, system adaptability, and
re-configurable communications.
Comment: Field Robotics special issue: DARPA Subterranean Challenge, Advancement and Lessons Learned from the Final
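A purely illustrative sketch of the supervision pattern the abstract describes, not the team's actual stack: each robot runs an independent autonomy loop while a human supervisor occasionally injects high-impact goals over the (possibly intermittent) mesh network. All class and method names here are assumptions.

```python
# Minimal sketch: independent per-robot autonomy with sparse supervisor goals.
import queue

class RobotAutonomy:
    def __init__(self, name):
        self.name = name
        self.goals = queue.Queue()   # high-level goals pushed by the supervisor

    def step(self):
        if not self.goals.empty():
            goal = self.goals.get()
            return f"{self.name}: heading to supervisor goal {goal}"
        return f"{self.name}: exploring frontier autonomously"

class Supervisor:
    def __init__(self, robots):
        self.robots = robots

    def send_goal(self, robot_name, goal):
        # In the field this message would traverse the mesh network and may be delayed or dropped.
        for robot in self.robots:
            if robot.name == robot_name:
                robot.goals.put(goal)

robots = [RobotAutonomy("quadruped_1"), RobotAutonomy("wheeled_1")]
sup = Supervisor(robots)
sup.send_goal("quadruped_1", (12.0, -3.5))   # strategic redirect by the human supervisor
print([r.step() for r in robots])
```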
- …