Towards an Autonomous Walking Robot for Planetary Surfaces
In this paper, recent progress in the development of
the DLR Crawler - a six-legged, actively compliant walking
robot prototype - is presented. The robot implements
a walking layer with a simple tripod and a more complex
biologically inspired gait. Using a variety of proprioceptive
sensors, different reflexes for reactively crossing obstacles
within the walking height are realised. On top of
the walking layer, a navigation layer provides the ability
to autonomously navigate to a predefined goal point in
unknown rough terrain using a stereo camera. A model
of the environment is created, the terrain traversability is
estimated and an optimal path is planned. The difficulty
of the path can be influenced by behavioral parameters.
Motion commands are sent to the walking layer and the
gait pattern is switched according to the estimated terrain
difficulty. The interaction between walking layer and navigation
layer was tested in different experimental setups.
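The gait switching described above can be sketched in a few lines. This is a hypothetical illustration of the navigation layer's decision, not the paper's implementation; the difficulty scale and threshold are assumptions.

```python
# Hypothetical sketch of gait switching driven by estimated terrain
# difficulty. The [0, 1] difficulty scale and the 0.5 threshold are
# illustrative assumptions, not values from the paper.

def select_gait(terrain_difficulty: float) -> str:
    """Map an estimated terrain difficulty in [0, 1] to a gait pattern.

    Easy terrain favours the fast, simple tripod gait; difficult
    terrain switches to the slower, biologically inspired gait.
    """
    if not 0.0 <= terrain_difficulty <= 1.0:
        raise ValueError("difficulty must lie in [0, 1]")
    return "tripod" if terrain_difficulty < 0.5 else "bio_inspired"

print(select_gait(0.2))  # easy terrain  -> tripod
print(select_gait(0.8))  # rough terrain -> bio_inspired
```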
UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether
This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle
(UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a
flying sensor but also as a tether attachment device. Two robots are connected
with a tether, allowing the UAV to anchor the tether to a structure located at
the top of a steep terrain that is impossible for UGVs to reach. This
enhances the poor traversability of the UGV not only by providing a wider
range of scanning and mapping from the air, but also by allowing the UGV to
climb steep terrain by winding the tether. In addition, we present an autonomous framework
for the collaborative navigation and tether attachment in an unknown
environment. The UAV employs visual inertial navigation with 3D voxel mapping
and obstacle avoidance planning. The UGV makes use of the voxel map and
generates an elevation map to execute path planning based on a traversability
analysis. Furthermore, we compared the pros and cons of possible
tether-anchoring methods from multiple points of view. To increase the probability
of successful anchoring, we evaluated the anchoring strategy with an
experiment. Finally, the feasibility and capability of our proposed system were
demonstrated by an autonomous mission experiment in the field with an obstacle
and a cliff.
Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on
Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
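The UGV's traversability analysis over an elevation map can be illustrated with a minimal slope check. This is a hedged sketch of the general technique, not the paper's pipeline; the grid resolution and slope limit are assumptions.

```python
# Minimal sketch of slope-based traversability analysis on an elevation
# map, in the spirit of the UGV pipeline described above. The cell
# resolution (metres) and maximum slope are illustrative assumptions.

import math

def traversable(elevation, resolution=0.1, max_slope_deg=30.0):
    """Mark each interior cell of a 2D elevation grid (heights in metres)
    as traversable if the steepest slope to any 4-neighbour stays below
    the limit."""
    rows, cols = len(elevation), len(elevation[0])
    limit = math.tan(math.radians(max_slope_deg))
    result = [[False] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            slopes = [abs(elevation[r][c] - elevation[r + dr][c + dc]) / resolution
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            result[r][c] = max(slopes) < limit
    return result

flat = [[0.0] * 3 for _ in range(3)]
cliff = [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 2.0]]
print(traversable(flat)[1][1])   # True  -- level ground
print(traversable(cliff)[1][1])  # False -- 1 m step over 0.1 m is too steep
```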
Fast and Continuous Foothold Adaptation for Dynamic Locomotion through CNNs
Legged robots can outperform wheeled machines for most navigation tasks
across unknown and rough terrains. For such tasks, visual feedback is a
fundamental asset to provide robots with terrain-awareness. However, robust
dynamic locomotion on difficult terrains with real-time performance guarantees
remains a challenge. We present here a real-time, dynamic foothold adaptation
strategy based on visual feedback. Our method adjusts the landing position of
the feet in a fully reactive manner, using only on-board computers and sensors.
The correction is computed and executed continuously along the swing phase
trajectory of each leg. To efficiently adapt the landing position, we implement
a self-supervised foothold classifier based on a Convolutional Neural Network
(CNN). Our method computes footholds up to 200 times faster than the full
heuristic evaluation. Our goal is to react to visual stimuli from the
environment, bridging the gap between blind reactive locomotion and purely
vision-based planning strategies. We assess the performance of our method on
the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds
up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe
foothold adaptation is clearly demonstrated by the overall robot behavior.
Comment: 9 pages, 11 figures. Accepted to RA-L + ICRA 2019, January 201
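The self-supervision idea in the abstract above — a slow, hand-crafted heuristic labels footholds so a CNN can be trained to approximate it quickly — can be sketched as follows. The roughness score, distance weight and patch size are illustrative assumptions, not the paper's heuristic.

```python
# Hedged sketch of the self-supervision idea behind the foothold
# classifier: a hand-crafted heuristic scores candidate footholds on a
# local heightmap, and its outputs would serve as training labels for a
# CNN that approximates the same decision far faster. The roughness
# score, distance weight and patch size are illustrative assumptions.

def roughness(patch, r, c):
    """Standard deviation of the 3x3 height neighbourhood around (r, c)."""
    vals = [patch[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

def best_foothold(patch, nominal, w_dist=0.01):
    """Pick the interior cell minimising terrain roughness plus a small
    penalty for deviating from the nominal landing position."""
    rows, cols = len(patch), len(patch[0])
    cells = [(r, c) for r in range(1, rows - 1) for c in range(1, cols - 1)]
    cost = lambda rc: roughness(patch, *rc) + w_dist * (
        abs(rc[0] - nominal[0]) + abs(rc[1] - nominal[1]))
    return min(cells, key=cost)

patch = [[0.0] * 5 for _ in range(5)]
patch[1][1] = 0.5                            # a bump at the nominal cell
print(best_foothold(patch, nominal=(1, 1)))  # landing shifts off the bump
```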
On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration
Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and Mars (NASA's Sojourner and the two MERs) has been pursued very successfully for many years, specifically concerning operations over long periods. However, despite this success, the explored surface area was very small: the total driving distance was about 8 km (Spirit) and 21 km (Opportunity) over 6 years of operation. Moreover, ESA will send its ExoMars rover to Mars in 2018, and NASA its MSL rover probably this year. However, all these rovers lack sufficient on-board intelligence to cover longer distances, drive much faster and decide autonomously on path planning for the best trajectory to follow. To increase the scientific output of a rover mission it seems very necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronics functionalities to develop an intelligent mobile wheeled rover with four or six wheels, with specific kinematics and locomotion suspension depending on the terrain the rover is to operate in. DLR's Robotics and Mechatronics Center has a long tradition in developing advanced components in the fields of light-weight motion actuation, intelligent and soft manipulation, skilled hands and tools, and perception and cognition, and in increasing the autonomy of any kind of mechatronic system. The whole design is supported by and based upon detailed modeling, optimization, and simulation tasks. We have developed efficient software tools to simulate rover driveability performance on various terrain types such as soft sandy and hard rocky terrain, as well as on inclined planes, where wheel and grouser geometry plays a dominant role.
Moreover, rover optimization is performed to support the best engineering intuition: it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and makes use of realistic cost functions such as mass and consumed-energy minimization, static stability, and more. For self-localization and safe navigation through unknown terrain we make use of fast 3D stereo algorithms that have been used successfully, e.g., in unmanned aerial vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable for lunar as well as Martian surface exploration purposes. A first mobility concept approach for a lunar vehicle will be presented.
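The kind of weighted-sum cost function mentioned for comparing suspension concepts can be sketched as follows. All fields, weights and candidate numbers are invented for illustration; a real design study would use validated models.

```python
# Illustrative weighted-sum design cost for comparing rover candidates.
# The weights and the two candidate parameter sets are assumptions made
# up for this sketch, not values from any DLR study.

def design_cost(mass_kg, energy_wh_per_km, stability_margin,
                w_mass=1.0, w_energy=2.0, w_stab=50.0):
    """Lower is better: penalise mass and energy use, reward stability."""
    return (w_mass * mass_kg + w_energy * energy_wh_per_km
            - w_stab * stability_margin)

four_wheel = design_cost(mass_kg=120.0, energy_wh_per_km=90.0,
                         stability_margin=0.3)
six_wheel = design_cost(mass_kg=150.0, energy_wh_per_km=80.0,
                        stability_margin=0.6)
best = min(("four-wheel", four_wheel), ("six-wheel", six_wheel),
           key=lambda t: t[1])
print(best[0])  # the heavier six-wheel concept wins on energy/stability here
```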
Robust Legged Robot State Estimation Using Factor Graph Optimization
Legged robots, specifically quadrupeds, are becoming increasingly attractive
for industrial applications such as inspection. However, to leave the
laboratory and to become useful to an end user requires reliability in harsh
conditions. From the perspective of state estimation, it is essential to be
able to accurately estimate the robot's state despite challenges such as uneven
or slippery terrain, textureless and reflective scenes, as well as dynamic
camera occlusions. We are motivated to reduce the dependency on foot contact
classifications, which fail when slipping, and to reduce position drift during
dynamic motions such as trotting. To this end, we present a factor graph
optimization method for state estimation which tightly fuses and smooths
inertial navigation, leg odometry and visual odometry. The effectiveness of the
approach is demonstrated using the ANYmal quadruped robot navigating in a
realistic outdoor industrial environment. This experiment included trotting,
walking, crossing obstacles and ascending a staircase. The proposed approach
decreased the relative position error by up to 55% and absolute position error
by 76% compared to kinematic-inertial odometry.
Comment: 8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 201
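Factor graph smoothing of the kind described above reduces, in the linear case, to a weighted least-squares problem over all poses. Below is a deliberately tiny 1-D sketch with invented measurements; a real system fuses IMU, leg and visual factors with a solver such as GTSAM.

```python
# Minimal sketch of factor-graph smoothing as least squares: a 1-D
# robot with three poses, a prior, two odometry factors, and one
# "visual" factor. All measurements are invented for illustration.

def solve(A, b):
    """Solve the square linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Residual rows (Jacobian A, measurement b) over poses x0, x1, x2:
A = [[1, 0, 0],    # prior:    x0      = 0.0
     [-1, 1, 0],   # odometry: x1 - x0 = 1.0
     [0, -1, 1],   # odometry: x2 - x1 = 1.0
     [0, 0, 1]]    # visual:   x2      = 2.2
b = [0.0, 1.0, 1.0, 2.2]

# Normal equations (A^T A) x = A^T b give the smoothed trajectory:
# the 0.2 m disagreement is spread over the whole path, not dumped on x2.
m = len(A)
AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(3)]
       for i in range(3)]
Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(3)]
x = solve(AtA, Atb)
print([round(v, 3) for v in x])  # [0.05, 1.1, 2.15]
```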
Featureless visual processing for SLAM in changing outdoor environments
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under-/over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been a growth of interest in lifelong autonomy for robots, which in outdoor environments brings with it the challenge of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
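The core of featureless view matching of the kind RatSLAM uses can be illustrated with a shifted sum-of-absolute-differences comparison of low-resolution intensity profiles. This is a generic sketch of the technique, not the paper's code; the profiles and shift range are assumptions.

```python
# Hedged sketch of featureless view matching: compare two low-resolution
# 1-D intensity profiles by their minimum mean absolute difference over
# a small range of horizontal shifts. Profiles are invented examples.

def match_score(template, scene, max_shift=2):
    """Best (lowest) mean absolute difference between two 1-D intensity
    profiles over shifts in [-max_shift, max_shift]; low means similar."""
    best = float("inf")
    n = len(template)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(template[i], scene[i + s])
                 for i in range(n) if 0 <= i + s < n]
        score = sum(abs(a - b) for a, b in pairs) / len(pairs)
        best = min(best, score)
    return best

a = [0.1, 0.5, 0.9, 0.5, 0.1]
b = [0.5, 0.9, 0.5, 0.1, 0.1]   # the same scene shifted by one pixel
print(match_score(a, b))        # 0.0 -> recognised as the same place
```

No keypoints or descriptors are involved, which is why the approach tolerates blur and exposure problems that defeat feature-based pipelines.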
Robots for Exploration, Digital Preservation and Visualization of Archeological Sites
Monitoring and conservation of archaeological sites
are important activities necessary to prevent damage or to
perform restoration on cultural heritage. Standard techniques,
like mapping and digitizing, are typically used to document the
status of such sites. While these tasks are normally accomplished
manually by humans, this is not possible when dealing with
hard-to-access areas. For example, due to the possibility of
structural collapses, underground tunnels like catacombs are
considered highly unstable environments. Moreover, they are full of the
radioactive gas radon, which limits human presence to only a few
minutes. The progress recently made in the artificial
intelligence and robotics field opened new possibilities for mobile
robots to be used in locations where humans are not allowed
to enter. The ROVINA project aims at developing autonomous
mobile robots to make the monitoring of archaeological sites
faster, cheaper and safer. ROVINA will be evaluated in the catacombs
of Priscilla (in Rome) and S. Gennaro (in Naples).
Efficient Humanoid Contact Planning using Learned Centroidal Dynamics Prediction
Humanoid robots dynamically navigate an environment by interacting with it
via contact wrenches exerted at intermittent contact poses. Therefore, it is
important to consider dynamics when planning a contact sequence. Traditional
contact planning approaches assume a quasi-static balance criterion to reduce
the computational challenges of selecting a contact sequence over a rough
terrain. This however limits the applicability of the approach when dynamic
motions are required, such as when walking down a steep slope or crossing a
wide gap. Recent methods overcome this limitation with the help of efficient
mixed-integer convex programming solvers capable of synthesizing dynamic
contact sequences. Nevertheless, their exponential-time complexity limits their
applicability to short-time-horizon contact sequences within small
environments. In this paper, we go beyond current approaches by learning a
prediction of the dynamic evolution of the robot centroidal momenta, which can
then be used for quickly generating dynamically robust contact sequences for
robots with arms and legs using a search-based contact planner. We demonstrate
the efficiency and quality of the results of the proposed approach in a set of
dynamically challenging scenarios.
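The search-based planner with a learned feasibility check can be sketched as a best-first search whose expansion is pruned by a dynamics predictor. Here the predictor is a hand-coded stand-in, and the 1-D terrain, step lengths and gap are invented for illustration.

```python
# Hedged sketch of a search-based contact planner whose expansions are
# pruned by a dynamic-feasibility predictor (here a hand-coded stand-in
# for the learned centroidal-dynamics prediction). Step lengths, the
# gap at 1.0 m and the step limit are illustrative assumptions.

import heapq

def dynamically_feasible(step):
    """Stand-in for the learned predictor: reject overly long steps."""
    return abs(step) <= 0.4

def plan_contacts(start=0.0, goal=2.0, holds=None,
                  step_options=(0.2, 0.3, 0.4)):
    """Greedy best-first search over discrete 1-D contact positions."""
    if holds is None:  # positions where contact is allowed; gap at 1.0
        holds = {round(0.1 * i, 1) for i in range(21)} - {1.0}
    frontier = [(goal - start, start, [start])]
    visited = set()
    while frontier:
        _, pos, path = heapq.heappop(frontier)
        if abs(pos - goal) < 1e-9:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        for step in step_options:
            nxt = round(pos + step, 1)
            if nxt in holds and nxt <= goal and dynamically_feasible(step):
                heapq.heappush(frontier, (goal - nxt, nxt, path + [nxt]))
    return None

path = plan_contacts()
print(path)  # reaches 2.0 while stepping over the forbidden hold at 1.0
```

Replacing the feasibility stub with a learned momentum predictor is exactly what lets the planner keep the cheap search structure while respecting dynamics.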