RGB-D-based Stair Detection using Deep Learning for Autonomous Stair Climbing
Stairs are common building structures in urban environments, and stair
detection is an important part of environment perception for autonomous mobile
robots. Most existing algorithms have difficulty combining the visual
information from binocular sensors effectively and ensuring reliable detection
at night or when visual cues are extremely indistinct. To solve these
problems, we propose a neural network architecture with RGB and depth map
inputs. Specifically, we design a selective module that enables the network to
learn the complementary relationship between the RGB map and the depth map and
to combine their information effectively in different scenes. In addition, we
design a line clustering algorithm for the
postprocessing of detection results, which can make full use of the detection
results to obtain the geometric stair parameters. Experiments on our dataset
show that our method achieves accuracy and recall 5.64% and 7.97% higher,
respectively, than existing state-of-the-art deep learning methods, while also
detecting at extremely high speed. A lightweight version reaches 300+ frames
per second at the same resolution, which meets the needs of most real-time
detection scenarios.
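
The abstract leaves the selective module unspecified; as a minimal sketch (an assumed per-channel gating design, not the authors' exact architecture), a fusion gate that mixes RGB and depth features could look like this in PyTorch:

```python
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Sketch of a selective RGB/depth fusion gate (assumed design).

    A small gating branch predicts per-channel weights from the
    concatenated RGB and depth features; the two streams are then mixed
    so the network can lean on depth when RGB cues are weak (e.g. at night).
    """
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # global context per channel
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                                     # weights in [0, 1]
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        # Weight the RGB stream by w and the depth stream by (1 - w).
        return w * rgb_feat + (1.0 - w) * depth_feat


# Usage: fuse two feature maps of matching shape, e.g. from parallel backbones.
fusion = SelectiveFusion(channels=64)
fused = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```

The gate collapses spatial information with global pooling and outputs one weight per channel, so the fused features can favor depth in dark scenes and RGB in well-lit ones.
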
High-level environment representations for mobile robots
In most robotic applications we are faced with the problem of building
a digital representation of the environment that allows the robot to
autonomously complete its tasks. This internal representation can be
used by the robot to plan a motion trajectory for its mobile base
and/or end-effector. For most man-made environments we do not have
a digital representation or it is inaccurate. Thus, the robot must
have the capability of building it autonomously. This is done by
integrating incoming sensor measurements into an internal data
structure. A common solution for this purpose is to solve the
Simultaneous Localization and Mapping (SLAM) problem. The map
obtained by solving a SLAM problem is called ``metric'' and it
describes the geometric structure of the environment. A metric map is
typically made up of low-level primitives (like points or
voxels). This means that even though it represents the shape of the
objects in the robot workspace, it lacks information about which
object a given surface belongs to. Having an object-level representation of
the environment has the advantage of augmenting the set of possible
tasks that a robot may accomplish. To this end, in this thesis we
focus on two aspects. First, we propose a formalism to represent, in a
uniform manner, 3D scenes consisting of different geometric primitives,
including points, lines and planes. Building on this formalism, we derive a local
registration and a global optimization algorithm that can exploit this
representation for robust estimation. Furthermore, we present a
Semantic Mapping system capable of building an \textit{object-based}
map that can be used for complex task planning and execution. Our
system exploits effective reconstruction and recognition techniques
that require no a priori information about the environment and can be
used under general conditions.
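
The thesis' formalism is only described at a high level here; a minimal sketch of one way to give points, lines and planes a shared interface, assuming point-to-primitive distances as the common residual (the actual formalism may differ), is:

```python
import numpy as np

class Primitive:
    """Shared interface: every primitive reports its distance to a 3D point,
    so registration and optimization can treat all primitive types uniformly."""
    def distance(self, p: np.ndarray) -> float:
        raise NotImplementedError

class Point(Primitive):
    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)
    def distance(self, p):
        return float(np.linalg.norm(p - self.x))

class Line(Primitive):
    def __init__(self, origin, direction):
        self.o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        self.d = d / np.linalg.norm(d)
    def distance(self, p):
        v = p - self.o
        # Remove the component along the line; what remains is the offset from it.
        return float(np.linalg.norm(v - np.dot(v, self.d) * self.d))

class Plane(Primitive):
    def __init__(self, normal, offset):
        n = np.asarray(normal, dtype=float)
        self.n = n / np.linalg.norm(n)
        self.offset = float(offset)          # plane equation: n . x = offset
    def distance(self, p):
        return float(abs(np.dot(self.n, p) - self.offset))

def total_error(matches):
    """Sum of point-to-primitive distances over (primitive, point) pairs:
    the kind of residual a local registration step would minimize over a
    rigid-body transform."""
    return sum(prim.distance(np.asarray(p, dtype=float)) for prim, p in matches)
```
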
Bounding Box-Free Instance Segmentation Using Semi-Supervised Learning for Generating a City-Scale Vehicle Dataset
Vehicle classification is a hot computer vision topic, with studies ranging
from ground-view up to top-view imagery. In remote sensing, the usage of
top-view images allows for understanding city patterns, vehicle concentration,
traffic management, and others. However, there are some difficulties when
aiming for pixel-wise classification: (a) most vehicle classification studies
use object detection methods, and most publicly available datasets are designed
for this task, (b) creating instance segmentation datasets is laborious, and
(c) traditional instance segmentation methods underperform on this task since
the objects are small. Thus, the present research objectives are: (1) propose a
novel semi-supervised iterative learning approach using GIS software, (2)
propose a box-free instance segmentation approach, and (3) provide a city-scale
vehicle dataset. The iterative learning procedure considered: (1) label a small
number of vehicles, (2) train on those samples, (3) use the model to classify
the entire image, (4) convert the image prediction into a polygon shapefile,
(5) correct some areas with errors and include them in the training data, and
(6) repeat until results are satisfactory. To separate instances, we considered
vehicle interior and vehicle borders, and the DL model was the U-net with the
Efficient-net-B7 backbone. Once the borders are removed, the vehicle interiors
become isolated, allowing for unique object identification. To recover the
deleted 1-pixel borders, we proposed a simple method to expand each prediction.
The results show better pixel-wise metrics when compared to the Mask-RCNN (82%
against 67% in IoU). On per-object analysis, the overall accuracy, precision,
and recall were greater than 90%. This pipeline applies to any remote sensing
target, and it is very efficient both for segmentation and for generating datasets.
Comment: 38 pages, 10 figures, submitted to journal
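
The separate-then-expand post-processing is not spelled out in the abstract; a minimal sketch of the idea, using SciPy and hypothetical helper names rather than the authors' code, could be:

```python
import numpy as np
from scipy import ndimage

def separate_instances(interior_mask: np.ndarray) -> np.ndarray:
    """Label connected components of the predicted vehicle-interior mask.
    Because the network suppresses the 1-pixel borders between touching
    vehicles, each interior is an isolated blob with its own label."""
    labels, _ = ndimage.label(interior_mask)
    return labels

def expand_instances(labels: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Grow every labelled instance outward by roughly `iterations` pixels to
    recover the borders removed during separation. Only background pixels are
    filled; where two instances compete for the same pixel, this simple sketch
    assigns it to the larger label id."""
    out = labels.copy()
    for _ in range(iterations):
        dilated = ndimage.grey_dilation(out, size=(3, 3))
        out = np.where(out == 0, dilated, out)
    return out

# Usage: threshold the interior probabilities, then separate and expand.
# interior_mask = interior_prob > 0.5
# instances = expand_instances(separate_instances(interior_mask), iterations=1)
```
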