Ground Extraction from 3D Lidar Point Clouds
© 2018 IEEE.
Pomares, A., Martínez, J.L., Mandow, A., Martínez, M.A., Morán, M., Morales, J. Ground extraction from 3D lidar point clouds with the Classification Learner App (2018) 26th Mediterranean Conference on Control and Automation, Zadar, Croatia, June 2018, pp. 400-405. DOI: Pending
Ground extraction from three-dimensional (3D) range data is a relevant problem for outdoor navigation of unmanned ground vehicles. Although this problem has received attention through specific heuristics and segmentation approaches, the identification of ground and non-ground points can benefit from state-of-the-art classification methods, such as those included in the Matlab Classification Learner App. This paper presents a comparative study of the machine learning methods included in this tool in terms of both training times and predictive performance. To this end, we have combined three suitable features for ground detection and applied them to an urban dataset with several labeled 3D point clouds. Most of the analyzed techniques achieve good classification results, but only a few offer low training and prediction times. This work was partially supported by the Spanish project DPI 2015-65186-R. The publication has received support from Universidad de Málaga, Campus de Excelencia Andalucía Tech.
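The approach above, training off-the-shelf classifiers on a few hand-crafted per-point features, can be sketched as follows. The specific features (height above local minimum, normal verticality, local roughness) and the use of scikit-learn in place of the Matlab Classification Learner App are assumptions for illustration, not the paper's exact setup; the labeled points here are synthetic.

```python
# Sketch: ground vs. non-ground point classification from hand-crafted
# per-point features. Feature choice and scikit-learn are stand-ins
# (assumptions), not the paper's exact three features or Matlab tooling.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic labeled features: ground points tend to sit low, with near-vertical
# surface normals (|n_z| close to 1) and low local roughness.
ground = np.column_stack([
    rng.normal(0.05, 0.03, n),   # height above local minimum (m)
    rng.normal(0.95, 0.03, n),   # normal verticality |n_z|
    rng.normal(0.02, 0.01, n),   # local roughness
])
non_ground = np.column_stack([
    rng.normal(1.0, 0.5, n),
    rng.normal(0.4, 0.2, n),
    rng.normal(0.15, 0.05, n),
])
X = np.vstack([ground, non_ground])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = ground

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

The same feature matrix could be fed to any of the classifier families the paper compares (trees, SVMs, nearest neighbors, ensembles) to reproduce the accuracy/training-time trade-off study.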
Virtual-to-Real-World Transfer Learning for Robots on Wilderness Trails
Robots hold promise in many scenarios involving outdoor use, such as
search-and-rescue, wildlife management, and collecting data to improve
environment, climate, and weather forecasting. However, autonomous navigation
of outdoor trails remains a challenging problem. Recent work has sought to
address this issue using deep learning. Although this approach has achieved
state-of-the-art results, the deep learning paradigm may be limited due to a
reliance on large amounts of annotated training data. Collecting and curating
training datasets may not be feasible or practical in many situations,
especially as trail conditions may change due to seasonal weather variations,
storms, and natural erosion. In this paper, we explore an approach to address
this issue through virtual-to-real-world transfer learning using a variety of
deep learning models trained to classify the direction of a trail in an image.
Our approach utilizes synthetic data gathered from virtual environments for
model training, bypassing the need to collect a large amount of real images of
the outdoors. We validate our approach in three main ways. First, we
demonstrate that our models achieve classification accuracies upwards of 95% on
our synthetic data set. Next, we utilize our classification models in the
control system of a simulated robot to demonstrate feasibility. Finally, we
evaluate our models on real-world trail data and demonstrate the potential of
virtual-to-real-world transfer learning.
Comment: IROS 201
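The core idea, training a direction classifier purely on synthetic imagery and evaluating it on domain-shifted "real" imagery, can be illustrated with a toy experiment. The image generator, the three-way left/center/right labels, and the linear classifier below are all illustrative assumptions; the paper uses deep networks on rendered trail scenes.

```python
# Sketch of virtual-to-real transfer: fit on clean synthetic images, score on
# a noisier, domain-shifted "real" set. Generator and labels are toy
# assumptions standing in for rendered trail imagery and a deep model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
size, n = 16, 300

def trail_image(direction, noise):
    """Bright vertical band (the 'trail') at left/center/right of a dark image."""
    img = rng.normal(0.1, noise, (size, size))
    col = {0: size // 4, 1: size // 2, 2: 3 * size // 4}[direction]
    img[:, col - 1 : col + 2] += 0.8
    return img.ravel()

labels = np.repeat([0, 1, 2], n)                               # left/center/right
synthetic = np.array([trail_image(d, 0.05) for d in labels])   # clean "virtual"
real = np.array([trail_image(d, 0.25) for d in labels])        # shifted "real"

clf = LogisticRegression(max_iter=1000).fit(synthetic, labels)
acc = clf.score(real, labels)
print(f"accuracy on shifted 'real' set: {acc:.3f}")
```

The gap between synthetic and real accuracy is exactly the transfer penalty the paper measures; here the domain shift is only additive noise, so the penalty is small.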
Learning Ground Traversability from Simulations
Mobile ground robots operating on unstructured terrain must predict which
areas of the environment they are able to pass in order to plan feasible paths.
We address traversability estimation as a heightmap classification problem: we
build a convolutional neural network that, given an image representing the
heightmap of a terrain patch, predicts whether the robot will be able to
traverse such patch from left to right. The classifier is trained for a
specific robot model (wheeled, tracked, legged, snake-like) using simulation
data on procedurally generated training terrains; the trained classifier can be
applied to unseen large heightmaps to yield oriented traversability maps, and
then plan traversable paths. We extensively evaluate the approach in simulation
on six real-world elevation datasets, and run a real-robot validation in one
indoor and one outdoor environment.
Comment: Webpage: http://romarcg.xyz/traversability_estimation
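Framing traversability as binary classification of heightmap patches can be sketched as below. A logistic model on flattened patches stands in for the paper's convolutional network, and the slope-based labels and patch size are assumptions; the point is the input/output structure: patch in, left-to-right traversable/not-traversable out.

```python
# Sketch: traversability as heightmap-patch classification. A linear model on
# flattened patches substitutes (assumption) for the paper's CNN; labels come
# from a toy rule: steep left-to-right slope = not traversable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
patch, n = 8, 1500

def make_patch(steep):
    """Synthetic terrain patch: a left-to-right ramp plus small-scale noise."""
    slope = rng.uniform(0.5, 1.0) if steep else rng.uniform(0.0, 0.1)
    ramp = slope * np.tile(np.linspace(0, 1, patch), (patch, 1))
    return ramp + rng.normal(0, 0.02, (patch, patch))

X = np.array([make_patch(s).ravel() for s in ([False] * n + [True] * n)])
y = np.array([1] * n + [0] * n)                 # 1 = traversable left-to-right

clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
print(f"held-out accuracy: {acc:.3f}")
```

Sliding such a classifier over a large heightmap at several rotations yields the oriented traversability maps the paper uses for path planning.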
Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers
Autonomous robots that operate in the field can enhance their security and efficiency by
accurate terrain classification, which can be realized by means of robot-terrain interaction-generated
vibration signals. In this paper, we explore the vibration-based terrain classification (VTC),
in particular for a wheeled robot with shock absorbers. Because the vibration sensors are
usually mounted on the main body of the robot, the vibration signals are dampened significantly,
which results in the vibration signals collected on different terrains being more difficult to
discriminate. Hence, the existing VTC methods applied to a robot with shock absorbers may degrade.
The contributions are two-fold: (1) Several experiments are conducted to exhibit the performance of
the existing feature-engineering and feature-learning classification methods; and (2) According to
the long short-term memory (LSTM) network, we propose a one-dimensional convolutional LSTM
(1DCL)-based VTC method to learn both spatial and temporal characteristics of the dampened
vibration signals. The experimental results demonstrate that: (1) the feature-engineering methods, which are efficient for VTC on robots without shock absorbers, are less accurate in our setting, so the feature-learning methods are the better choice; and (2) the 1DCL-based VTC method outperforms the conventional methods with an accuracy of 80.18%, exceeding the second-best method (LSTM) by 8.23%.
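The 1DCL idea, a 1D convolution extracting local (spatial) patterns from the vibration window, followed by an LSTM capturing temporal structure, can be sketched as a forward pass. All layer sizes, the single-cell LSTM, and the six-class output below are assumptions for illustration, not the paper's exact 1DCL architecture, and the weights are random (no training shown).

```python
# Sketch of a 1D-conv + LSTM pipeline for one vibration window (forward pass
# only). Layer sizes and class count are assumptions, not the paper's 1DCL.
import numpy as np

rng = np.random.default_rng(2)

def conv1d(x, w, stride=2):
    """Valid 1D convolution: x (T,), w (k, f) -> (T', f) feature sequence."""
    k, f = w.shape
    steps = (len(x) - k) // stride + 1
    out = np.empty((steps, f))
    for t in range(steps):
        out[t] = x[t * stride : t * stride + k] @ w
    return np.tanh(out)

def lstm(seq, Wx, Wh, b):
    """Minimal LSTM cell unrolled over seq (T', f); returns last hidden state."""
    h_dim = Wh.shape[0]
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    for x_t in seq:
        z = x_t @ Wx + h @ Wh + b               # gates stacked: i, f, o, g
        i, fg, o, g = np.split(z, 4)
        i, fg, o = (1 / (1 + np.exp(-v)) for v in (i, fg, o))
        c = fg * c + i * np.tanh(g)             # cell state update
        h = o * np.tanh(c)                      # hidden state update
    return h

T, k, f, h_dim, n_classes = 256, 16, 8, 32, 6   # six terrain classes assumed
signal = rng.normal(size=T)                     # one accelerometer window
feats = conv1d(signal, rng.normal(size=(k, f)) * 0.1)
h = lstm(feats, rng.normal(size=(f, 4 * h_dim)) * 0.1,
         rng.normal(size=(h_dim, 4 * h_dim)) * 0.1, np.zeros(4 * h_dim))
logits = h @ (rng.normal(size=(h_dim, n_classes)) * 0.1)
print("class scores shape:", logits.shape)
```

The convolution compresses each raw window into a shorter feature sequence before the recurrence, which is what lets the model cope with dampened, low-SNR signals better than hand-engineered features.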
Learning Image-Conditioned Dynamics Models for Control of Under-actuated Legged Millirobots
Millirobots are a promising robotic platform for many applications due to
their small size and low manufacturing costs. Legged millirobots, in
particular, can provide increased mobility in complex environments and improved
scaling of obstacles. However, controlling these small, highly dynamic, and
underactuated legged systems is difficult. Hand-engineered controllers can
sometimes control these legged millirobots, but they have difficulties with
dynamic maneuvers and complex terrains. We present an approach for controlling
a real-world legged millirobot that is based on learned neural network models.
Using less than 17 minutes of data, our method can learn a predictive model of
the robot's dynamics that can enable effective gaits to be synthesized on the
fly for following user-specified waypoints on a given terrain. Furthermore, by
leveraging expressive, high-capacity neural network models, our approach allows
for these predictions to be directly conditioned on camera images, endowing the
robot with the ability to predict how different terrains might affect its
dynamics. This enables sample-efficient and effective learning for locomotion
of a dynamic legged millirobot on various terrains, including gravel, turf,
carpet, and styrofoam. Experiment videos can be found at
https://sites.google.com/view/imageconddy
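The control scheme, learning a forward dynamics model from a small amount of data and planning over it with sampled action sequences, can be sketched on a toy 2D system. The linear least-squares model, random-shooting planner, and toy dynamics below are assumptions for illustration; the paper learns a neural network model conditioned on camera images.

```python
# Sketch: model-based control with a learned dynamics model plus
# random-shooting MPC. Linear model and toy 2D dynamics are assumptions;
# the paper uses an image-conditioned neural network on a real millirobot.
import numpy as np

rng = np.random.default_rng(3)

def true_step(s, a):
    """Ground-truth dynamics, unknown to the planner."""
    return s + 0.1 * a + 0.01 * rng.normal(size=2)

# 1) Collect random-action rollouts and fit s' ~= [s, a] @ W by least squares.
S, A, S_next = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1, 2)
    s2 = true_step(s, a)
    S.append(s); A.append(a); S_next.append(s2)
    s = s2
X = np.hstack([S, A])
W, *_ = np.linalg.lstsq(X, np.array(S_next), rcond=None)

def predict(s, a):
    return np.hstack([s, a]) @ W

# 2) Random shooting: sample action sequences, roll out the learned model,
#    keep the sequence whose predicted end state is nearest the waypoint.
def plan(s, goal, horizon=5, n_samples=200):
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        acts = rng.uniform(-1, 1, (horizon, 2))
        sp = s
        for a in acts:
            sp = predict(sp, a)
        cost = np.linalg.norm(sp - goal)
        if cost < best_cost:
            best, best_cost = acts, cost
    return best[0]                              # MPC: apply first action only

s, goal = np.zeros(2), np.array([1.0, -0.5])
for _ in range(40):
    s = true_step(s, plan(s, goal))             # replan at every step
err = np.linalg.norm(s - goal)
print(f"final distance to waypoint: {err:.3f}")
```

Replanning at every step with only the first action applied is what makes the scheme robust to model error, which matters when, as in the paper, the model is fit from under 17 minutes of data.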