Virtual-to-Real-World Transfer Learning for Robots on Wilderness Trails
Robots hold promise in many scenarios involving outdoor use, such as
search-and-rescue, wildlife management, and collecting data to improve
environmental, climate, and weather forecasting. However, autonomous navigation
of outdoor trails remains a challenging problem. Recent work has sought to
address this issue using deep learning. Although this approach has achieved
state-of-the-art results, the deep learning paradigm may be limited due to a
reliance on large amounts of annotated training data. Collecting and curating
training datasets may not be feasible or practical in many situations,
especially as trail conditions may change due to seasonal weather variations,
storms, and natural erosion. In this paper, we explore an approach to address
this issue through virtual-to-real-world transfer learning using a variety of
deep learning models trained to classify the direction of a trail in an image.
Our approach utilizes synthetic data gathered from virtual environments for
model training, bypassing the need to collect a large amount of real images of
the outdoors. We validate our approach in three main ways. First, we
demonstrate that our models achieve classification accuracies upwards of 95% on
our synthetic data set. Next, we utilize our classification models in the
control system of a simulated robot to demonstrate feasibility. Finally, we
evaluate our models on real-world trail data and demonstrate the potential of
virtual-to-real-world transfer learning.
Comment: IROS 201
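The paper uses its trail-direction classifiers inside the control system of a simulated robot. As a hedged sketch of how such a classifier's output could drive steering, the class names, probabilities, and gain below are illustrative assumptions, not the authors' actual values:

```python
# Hypothetical sketch: mapping a trail-direction classifier's output to a
# steering command. The three classes and the gain are assumptions for
# illustration, not the paper's actual configuration.

CLASSES = ("left", "straight", "right")

def steering_command(class_probs, gain=0.5):
    """Map class probabilities to a steering value in [-gain, gain].

    class_probs: dict mapping each class name to its softmax probability.
    Negative values steer left, positive values steer right.
    """
    # Expected steering direction under the predicted class distribution.
    direction = {"left": -1.0, "straight": 0.0, "right": 1.0}
    return gain * sum(p * direction[c] for c, p in class_probs.items())

# If the model is confident the trail bends right, the robot steers right.
cmd = steering_command({"left": 0.05, "straight": 0.15, "right": 0.80})
```

Weighting by the full probability distribution, rather than taking only the argmax class, yields smoother steering when the classifier is uncertain.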
Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers
Autonomous robots that operate in the field can enhance their safety and efficiency through
accurate terrain classification, which can be realized by means of the vibration signals
generated by robot-terrain interaction. In this paper, we explore vibration-based terrain
classification (VTC), in particular for a wheeled robot with shock absorbers. Because the
vibration sensors are usually mounted on the main body of the robot, the vibration signals
are dampened significantly, making the signals collected on different terrains harder to
discriminate. Hence, existing VTC methods may degrade when applied to a robot with shock absorbers.
The contributions are two-fold: (1) several experiments are conducted to evaluate the
performance of existing feature-engineering and feature-learning classification methods; and
(2) building on the long short-term memory (LSTM) network, we propose a one-dimensional
convolutional LSTM (1DCL)-based VTC method to learn both the spatial and temporal
characteristics of the dampened vibration signals. The experimental results demonstrate
that: (1) the feature-engineering methods, which are efficient for VTC on robots without
shock absorbers, are less accurate in our setting, whereas the feature-learning methods are
better choices; and (2) the 1DCL-based VTC method outperforms the conventional methods with
an accuracy of 80.18%, exceeding the second-best method (LSTM) by 8.23%.
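The spatial half of a 1D-convolutional LSTM can be pictured as a 1-D convolution pass over the raw vibration signal, whose feature sequence would then feed the LSTM cells. The kernel and signal below are made-up values for illustration; a real 1DCL learns its kernels from data:

```python
# Minimal sketch of 1-D convolutional feature extraction over a vibration
# signal, the "spatial" stage of a 1DCL-style pipeline. Kernel values are
# illustrative; in the actual method they would be learned.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, the deep-learning convention)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A smoothing kernel suppresses high-frequency noise in a dampened signal;
# the resulting feature sequence would be passed to LSTM cells.
features = conv1d([0.0, 1.0, 0.0, -1.0, 0.0, 1.0], [0.25, 0.5, 0.25])
```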
Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection
Selective weeding is one of the key challenges in the field of agriculture
robotics. To accomplish this task, a farm robot should be able to accurately
detect plants and to distinguish them between crop and weeds. Most of the
promising state-of-the-art approaches make use of appearance-based models
trained on large annotated datasets. Unfortunately, creating large agricultural
datasets with pixel-level annotations is an extremely time-consuming task, which in
practice discourages the use of data-driven techniques. In this paper, we address this
problem by proposing a novel and effective approach that aims to
dramatically minimize the human intervention needed to train the detection and
classification algorithms. The idea is to procedurally generate large synthetic
training datasets randomizing the key features of the target environment (i.e.,
crop and weed species, type of soil, light conditions). More specifically, by
tuning these model parameters, and exploiting a few real-world textures, it is
possible to render a large amount of realistic views of an artificial
agricultural scenario with no effort. The generated data can be directly used
to train the model or to supplement real-world images. We validate the proposed
methodology by using as testbed a modern deep learning based image segmentation
architecture. We compare the classification results obtained using both real
and synthetic images as training data. The reported results confirm the
effectiveness and potential of our approach.
Comment: To appear in IEEE/RSJ IROS 201
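The procedural randomization described above amounts to sampling a scene configuration per rendered image. A hedged sketch, where the parameter names and value lists are illustrative assumptions rather than the authors' actual model:

```python
import random

# Hypothetical sketch of domain-randomized scene sampling for synthetic
# dataset generation. Parameter names and value lists are assumptions for
# illustration; the paper's renderer uses its own model parameters.

SOIL_TYPES = ["sandy", "clay", "loam"]
LIGHT_CONDITIONS = ["overcast", "noon", "dusk"]
WEED_SPECIES = ["chenopodium", "thistle"]

def sample_scene(rng):
    """Draw one randomized scene configuration to hand to the renderer."""
    return {
        "soil": rng.choice(SOIL_TYPES),
        "light": rng.choice(LIGHT_CONDITIONS),
        "weed": rng.choice(WEED_SPECIES),
        "n_crop_plants": rng.randint(5, 30),
    }

# A fixed seed per scene makes the generated dataset reproducible.
scenes = [sample_scene(random.Random(i)) for i in range(100)]
```

Each sampled configuration would drive one rendered view, so dataset size is limited only by rendering time, not by human annotation effort.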
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote sensing
scientists bring their expertise into deep learning and use it as an implicit general
model to tackle unprecedented, large-scale, influential challenges such as climate
change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
Feature discovery and visualization of robot mission data using convolutional autoencoders and Bayesian nonparametric topic models
The gap between our ability to collect interesting data and our ability to
analyze these data is growing at an unprecedented rate. Recent algorithmic
attempts to fill this gap have employed unsupervised tools to discover
structure in data. Some of the most successful approaches have used
probabilistic models to uncover latent thematic structure in discrete data.
Despite the success of these models on textual data, they have not generalized
as well to image data, in part because of the spatial and temporal structure
that may exist in an image stream.
We introduce a novel unsupervised machine learning framework that
incorporates the ability of convolutional autoencoders to discover features
from images that directly encode spatial information, within a Bayesian
nonparametric topic model that discovers meaningful latent patterns within
discrete data. By using this hybrid framework, we overcome the fundamental
dependency of traditional topic models on rigidly hand-coded data
representations, while simultaneously encoding spatial dependency in our topics
without adding model complexity. We apply this model to the motivating
application of high-level scene understanding and mission summarization for
exploratory marine robots. Our experiments on a seafloor dataset collected by a
marine robot show that the proposed hybrid framework outperforms current
state-of-the-art approaches on the task of unsupervised seafloor terrain
characterization.
Comment: 8 pages
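The hybrid framework above hinges on turning continuous autoencoder features into the discrete observations a topic model expects. One common bridge is vector quantization against a learned codebook; the sketch below uses made-up codewords purely for illustration:

```python
# Illustrative sketch of the bridge between a convolutional autoencoder
# and a discrete topic model: continuous feature vectors are quantized
# into a vocabulary of discrete "words". The codebook here stands in for
# a learned one; all values are invented for the example.

def quantize(feature, codebook):
    """Return the index of the nearest codebook entry (a discrete 'word')."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(feature, codebook[i]))

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # hypothetical codewords
# Each image becomes a bag of word indices the topic model can consume.
doc = [quantize(f, codebook) for f in [(0.1, 0.1), (0.9, 0.2), (0.2, 0.8)]]
```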
The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah
(ABRIDGED) In previous work, two platforms have been developed for testing
computer-vision algorithms for robotic planetary exploration (McGuire et al.
2004b,2005; Bartolo et al. 2007). The wearable-computer platform has been
tested at geological and astrobiological field sites in Spain (Rivas
Vaciamadrid and Riba de Santiuste), and the phone-camera has been tested at a
geological field site in Malta. In this work, we (i) apply a Hopfield
neural-network algorithm for novelty detection based upon color, (ii) integrate
a field-capable digital microscope on the wearable computer platform, (iii)
test this novelty detection with the digital microscope at Rivas Vaciamadrid,
(iv) develop a Bluetooth communication mode for the phone-camera platform, in
order to allow access to a mobile processing computer at the field sites, and
(v) test the novelty detection on the Bluetooth-enabled phone-camera connected
to a netbook computer at the Mars Desert Research Station in Utah. This systems
engineering and field testing have together allowed us to develop a real-time
computer-vision system that is capable, for example, of identifying lichens as
novel within a series of images acquired in semi-arid desert environments. We
acquired sequences of images of geologic outcrops in Utah and Spain consisting
of various rock types and colors to test this algorithm. The algorithm robustly
recognized previously-observed units by their color, while requiring only a
single image or a few images to learn colors as familiar, demonstrating its
fast learning capability.
Comment: 28 pages, 12 figures, accepted for publication in the International Journal of Astrobiology
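The fast-learning behavior described above, where a single image suffices to make a color familiar, can be sketched with a simple distance-threshold detector. The metric and threshold here are illustrative stand-ins; the actual system uses a Hopfield neural network:

```python
# Minimal sketch of color-based novelty detection in the spirit of the
# system above: a color far from every familiar color is flagged as novel
# and learned from that single observation. The squared-distance metric
# and threshold are illustrative assumptions, not the Hopfield network
# the paper actually uses.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ColorNoveltyDetector:
    def __init__(self, threshold=0.05):
        self.familiar = []          # list of (r, g, b) values in [0, 1]
        self.threshold = threshold  # squared-distance novelty threshold

    def observe(self, color):
        """Return True if the color is novel, then learn it immediately."""
        novel = all(dist2(color, f) > self.threshold for f in self.familiar)
        if novel:
            self.familiar.append(color)  # one-shot learning of the new color
        return novel

det = ColorNoveltyDetector()
first = det.observe((0.6, 0.5, 0.3))    # desert rock: novel on first sight
second = det.observe((0.62, 0.5, 0.3))  # similar rock: already familiar
lichen = det.observe((0.3, 0.8, 0.3))   # greenish lichen: novel again
```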
PanDA: Panoptic Data Augmentation
The recently proposed panoptic segmentation task presents a significant image-understanding challenge for computer vision by unifying the semantic segmentation and instance segmentation tasks. In this paper we present an efficient and novel panoptic data augmentation (PanDA) method which operates exclusively in pixel space, requires no additional data or training, and is computationally cheap to implement. By retraining original state-of-the-art models on PanDA-augmented datasets generated with a single frozen set of parameters, we show robust performance gains in panoptic segmentation, instance segmentation, and detection across models, backbones, dataset domains, and scales. Finally, the effectiveness of the unrealistic-looking training images synthesized by PanDA suggests that one should rethink the need for image realism for efficient data augmentation.
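A pixel-space augmentation in the spirit of PanDA can be pictured as copying a masked instance into an image at a new location, with no extra data or training required. The operation below is an illustrative sketch, not PanDA's exact recipe, and the toy label maps are invented:

```python
# Hedged sketch of a pixel-space augmentation: paste a masked instance
# into a label map at a chosen offset. This illustrates the general idea
# of pixel-space instance manipulation, not PanDA's specific algorithm.

def paste_instance(image, instance, top, left):
    """Overwrite pixels of `image` with the non-None pixels of `instance`."""
    out = [row[:] for row in image]  # leave the original image untouched
    for r, row in enumerate(instance):
        for c, px in enumerate(row):
            if px is not None and 0 <= top + r < len(out) and 0 <= left + c < len(out[0]):
                out[top + r][left + c] = px
    return out

canvas = [[0] * 4 for _ in range(4)]  # background class 0
sprite = [[None, 7], [7, 7]]          # a small instance of class 7 with a mask
augmented = paste_instance(canvas, sprite, 1, 1)
```

Even when such pasted composites look unrealistic, the paper's results suggest they can still improve segmentation and detection performance.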