9,798 research outputs found
Navigation of mobile robots in unknown environments using distance measurements
The ability of a robot to navigate itself in its environment is a crucial step towards autonomy. Navigation, as a subtask of the development of autonomous robots, is the subject of this thesis, which focuses on a method for simultaneous localization and mapping (SLAM) of mobile robots in six degrees of freedom (6DOF). As part of this research, a platform for 3D range data acquisition based on a continuously inclined laser rangefinder was developed. This platform is presented, its measurements are evaluated, and the robotic equipment on which the platform can be mounted is described. The localization and mapping task is equivalent to the registration of multiple 3D images into a common frame of reference. For this purpose, a method based on the Iterative Closest Point (ICP) algorithm was developed. First, the originally implemented SLAM method is presented, focusing on its runtime performance and on the registration quality issues introduced by the implemented algorithms. To accelerate and improve the quality of the time-demanding 6DOF image registration, an extended method was developed. The major extension is the introduction of factorized registration: 2D representations of vertical objects, called leveled maps, are extracted from the 3D point sets in a way that makes these representations invariant to three degrees of freedom. The extracted representations are registered in 3DOF using the ICP algorithm, pre-aligning the 3D data for the subsequent robust 6DOF ICP-based registration. The extended method is presented with all important modifications to the original method. The developed registration method was evaluated on real 3D data acquired in different indoor environments, examining the benefits of the factorization and the other extensions as well as the performance of the original ICP-based method. In vertically structured environments, the factorization gives promising results compared to a single-phase 6DOF registration.
Also, the disadvantages of the method are discussed and possible solutions are proposed. Finally, the future prospects of the research are presented.
The ability to localize and navigate is a precondition for the autonomous operation of mobile robots. The subject of this dissertation is navigation methods, focusing on a method for simultaneous localization and mapping (SLAM) of mobile robots in six degrees of freedom (6DOF). An integral part of this research was the development of a platform for the acquisition of 3D range data using a continuously tilted laser line scanner. This platform was developed as a standalone module so that it can be mounted on various mobile robot chassis. The localization and mapping task is equivalent to the registration of multiple 3D images into a common coordinate system. For this purpose, a method based on the Iterative Closest Point (ICP) algorithm was developed. The originally implemented version of the navigation method uses ICP accelerated with kd-trees, and its quality and performance aspects are evaluated. Based on this analysis, extensions of the original ICP-based method were developed. One of the main modifications is the factorization of the registration process, based on a data reduction: the creation of 2D "leveled" maps (in the sense of single-level maps) from the 3D range images. For this reduction, the invariance of these maps with respect to three degrees of freedom is ensured both technologically and algorithmically. These reduced maps are registered by ICP in the remaining three degrees of freedom, and the obtained transformation is applied to the 3D data to pre-register the 3D images. A robust 6DOF registration then follows. The extended method is described in the dissertation together with all substantial modifications. The developed method was tested and evaluated using real 3D range data measured in various indoor environments.
The benefits of the factorization and of the other modifications are evaluated in comparison with the original single-phase 6DOF registration; the disadvantages of the implemented method are also mentioned and ways of addressing them are proposed. Finally, a proposal for future research and a discussion of possibilities for further development follow.
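The factorized registration described in the abstract can be illustrated in a few lines. The sketch below is not the thesis implementation: it assumes simple point-to-point ICP with brute-force nearest neighbours and a hypothetical height band for extracting the leveled map; the function names and parameters are invented for illustration.

```python
import numpy as np

def kabsch_2d(P, Q):
    """Least-squares rotation R and translation t with Q ~= P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cq - R @ cp

def icp_2d(P, Q, iters=30):
    """Brute-force nearest-neighbour ICP in the plane (x, y, yaw)."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    src = P.copy()
    for _ in range(iters):
        # match every source point to its nearest neighbour in Q
        d2 = ((src[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
        R, t = kabsch_2d(src, Q[d2.argmin(axis=1)])
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

def leveled_map(cloud, z_min=0.2, z_max=1.5):
    """Keep only points in a height band (vertical structures) and drop z.
    The band limits are illustrative, not taken from the thesis."""
    sel = (cloud[:, 2] > z_min) & (cloud[:, 2] < z_max)
    return cloud[sel, :2]
```

The 3DOF transform recovered by `icp_2d` on the leveled maps would then pre-align the full 3D clouds before the 6DOF registration.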
3D Reconstruction & Assessment Framework based on affordable 2D Lidar
Lidar is used extensively in industry and in the mass market. Due to its measurement accuracy and its insensitivity to illumination compared to cameras, it is applied to a broad range of applications, such as geodetic engineering, self-driving cars, and virtual reality. However, multi-beam 3D Lidars are very expensive, and their massive measurement data cannot be fully leveraged on some constrained platforms. The purpose of this paper is to explore the possibility of using a cheap off-the-shelf 2D Lidar to perform complex 3D reconstruction; the quality of the generated 3D map is then evaluated by our proposed metrics. The 3D map is constructed in two ways. In the first, the scan is performed at known positions using an external rotary axis in another plane. In the second, one 2D Lidar for mapping and another 2D Lidar for localization are mounted on a trolley, which is pushed along the ground arbitrarily. Before evaluation, the maps generated by the different approaches are uniformly converted to octomaps, and the similarity and difference between two maps are evaluated thoroughly by the proposed metrics. The whole mapping system is composed of several modular components: a 3D bracket was made for assembling the long-range Lidar, its driver, and the motor, and a cover platform was made for the IMU and a shorter-range but high-accuracy 2D Lidar. The software is organized into separate ROS packages.
Comment: 7 pages, 9 Postscript figures. Accepted by the 2018 IEEE International Conference on Advanced Intelligent Mechatronics
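The two ingredients described above, lifting a 2D scan into 3D via a known rotary axis and comparing voxelized maps, can be sketched as follows. This is a toy geometry (scanner tilted about the x-axis, scan plane equal to the x-y plane at zero tilt) and an IoU metric used as a simple stand-in for the paper's octomap-based evaluation; none of these names come from the paper.

```python
import numpy as np

def scan_to_3d(ranges, bearings, tilt):
    """Lift one 2D scan line into 3D for a scanner tilted by `tilt` about
    the x-axis (tilt 0 = scan plane equals the x-y plane)."""
    pts = np.stack([ranges * np.cos(bearings),
                    ranges * np.sin(bearings),
                    np.zeros_like(ranges)], axis=1)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(tilt), -np.sin(tilt)],
                   [0.0, np.sin(tilt), np.cos(tilt)]])
    return pts @ Rx.T

def voxelize(points, res=0.1):
    """Occupied voxel indices at resolution `res` (stand-in for an octomap)."""
    return set(map(tuple, np.floor(points / res).astype(int)))

def map_similarity(A, B, res=0.1):
    """Toy map-quality metric: IoU of the occupied voxel sets of two maps."""
    va, vb = voxelize(A, res), voxelize(B, res)
    return len(va & vb) / len(va | vb)
```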
Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping
Modern 3D laser range scanners have a high data rate, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state-estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in a local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory allows measurements between scan poses to be corrected. We evaluate our approach in multiple experiments, showing qualitative results; furthermore, we quantify the map quality with an entropy-based measure.
Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
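The idea of correcting measurements taken between scan poses can be illustrated with a minimal 2D sketch. The paper's continuous-time representation is more sophisticated; here a simple linear pose interpolation between two timestamped scan poses stands in for it, and all names are invented for illustration.

```python
import numpy as np

def interp_pose(t, t0, p0, t1, p1):
    """Linear interpolation of an (x, y, yaw) pose between two scan poses -
    a toy stand-in for a continuous-time trajectory representation."""
    a = (t - t0) / (t1 - t0)
    return (1.0 - a) * np.asarray(p0) + a * np.asarray(p1)

def deskew(points, stamps, t0, p0, t1, p1):
    """Map each 2D point, taken at its own timestamp, into the map frame
    using the pose interpolated at that instant."""
    out = []
    for pt, t in zip(points, stamps):
        x, y, yaw = interp_pose(t, t0, p0, t1, p1)
        R = np.array([[np.cos(yaw), -np.sin(yaw)],
                      [np.sin(yaw), np.cos(yaw)]])
        out.append(R @ pt + np.array([x, y]))
    return np.array(out)
```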
A minimalistic approach to appearance-based visual SLAM
This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects, and describes the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using, for example, multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments without modification of the parameters, that it can cope with violations of the "flat floor assumption" to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation
Autonomous harvesting and transportation is a long-term goal of the forest industry. One of the main challenges is the accurate localization of both vehicles and trees in a forest. Forests are unstructured environments where it is difficult to find a group of significant landmarks for current fast feature-based place recognition algorithms. This paper proposes a novel approach in which local observations are matched to a general tree map using the Delaunay triangulation as the representation format. Instead of point-cloud-based matching methods, we utilize a topology-based method. First, tree trunk positions are registered in a prior run done by a forest harvester. Second, the resulting map is Delaunay-triangulated. Third, a local submap of the autonomous robot is registered, triangulated, and matched using triangular similarity maximization to estimate the position of the robot. We test our method on a dataset collected at a forestry site in Lieksa, Finland. A total of 2100 m of harvester path was recorded by an industrial harvester with a 3D laser scanner and a geolocation unit fixed to the frame. Our experiments show a 12 cm standard deviation in the location accuracy, with real-time data processing for speeds not exceeding 0.5 m/s. This accuracy and speed limit are realistic during forest operations.
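The core of triangular similarity matching can be sketched as follows: each triangle of tree trunks gets a descriptor that is invariant to rotation and translation (here, its sorted side lengths, a simplification invented for this sketch), and local triangles are associated with the most similar map triangles. The actual paper maximizes a similarity over the triangulations; this is only the association step under those assumptions.

```python
import numpy as np

def tri_descriptor(a, b, c):
    """Rotation- and translation-invariant triangle signature: sorted sides."""
    return np.sort([np.linalg.norm(a - b),
                    np.linalg.norm(b - c),
                    np.linalg.norm(c - a)])

def match_triangles(local_tris, map_tris, tol=0.05):
    """Associate each local triangle with the most similar map triangle;
    matches with descriptor distance worse than `tol` are rejected."""
    matches = []
    for li, tri in enumerate(local_tris):
        dl = tri_descriptor(*tri)
        best, best_err = None, tol
        for mi, mtri in enumerate(map_tris):
            err = np.linalg.norm(dl - tri_descriptor(*mtri))
            if err < best_err:
                best, best_err = mi, err
        if best is not None:
            matches.append((li, best))
    return matches
```

The matched triangle pairs would then feed a rigid-transform estimate of the robot position within the tree map.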
Lifting GIS Maps into Strong Geometric Context for Scene Understanding
Contextual information can have a substantial impact on the performance of visual tasks such as semantic segmentation, object detection, and geometric estimation. Data stored in Geographic Information Systems (GIS) offers a rich source of contextual information that has been largely untapped by computer vision. We propose to leverage such information for scene understanding by combining GIS resources with large sets of unorganized photographs using Structure from Motion (SfM) techniques. We present a pipeline to quickly generate strong 3D geometric priors from 2D GIS data using SfM models aligned with minimal user input. Given an image resectioned against this model, we generate robust predictions of depth, surface normals, and semantic labels. We show that the predicted geometry is substantially more accurate than that of other single-image depth estimation methods. We then demonstrate the utility of these contextual constraints for re-scoring pedestrian detections, and use these GIS contextual features alongside object detection score maps to improve a CRF-based semantic segmentation framework, boosting accuracy over baseline models.
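The re-scoring idea, a geometric prior telling us how large a pedestrian should appear at a given image location, can be sketched with a simple agreement model. The log-normal penalty, the parameter names, and the value of `sigma` below are all illustrative assumptions, not taken from the paper.

```python
import math

def rescore(score, bbox_height_px, expected_height_px, sigma=0.25):
    """Downweight a detection whose image height disagrees with the height
    predicted by the 3D geometric prior (log-normal agreement model)."""
    ratio = bbox_height_px / expected_height_px
    return score * math.exp(-(math.log(ratio) ** 2) / (2.0 * sigma ** 2))
```

A detection matching the expected height keeps its score; one half or twice the expected height is strongly suppressed.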
Probabilistic Surfel Fusion for Dense LiDAR Mapping
With the recent development of high-end LiDARs, more and more systems are able to continuously map the environment while moving, producing spatially redundant information. However, none of the previous approaches have been able to effectively exploit this redundancy in a dense LiDAR mapping problem. In this paper, we present a new approach to dense LiDAR mapping using probabilistic surfel fusion. The proposed system is capable of reconstructing a high-quality dense surface element (surfel) map from spatially redundant multiple views. This is achieved by the proposed probabilistic surfel fusion together with a geometry-aware data association. The proposed surfel data association method considers the surface resolution as well as the high measurement uncertainty along the beam direction, which enables the mapping system to control surface resolution without introducing spatial digitization. The proposed fusion method successfully suppresses the map noise level by modeling, in a Bayesian filtering framework, the measurement noise caused by the laser beam incidence angle and the depth distance. Experimental results with simulated and real data for dense surfel mapping prove the ability of the proposed method to accurately find the canonical form of the environment without further post-processing.
Comment: Accepted in Multiview Relationships in 3D Data 2017 (IEEE International Conference on Computer Vision Workshops)
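The Bayesian suppression of measurement noise described above can be sketched as a scalar Kalman update per surfel, with a heuristic per-beam variance that grows with depth and with a grazing incidence angle. The noise model and its constants are illustrative assumptions, not the paper's calibrated model.

```python
import math

def beam_variance(depth, incidence, sigma0=0.01):
    """Heuristic per-beam noise model: uncertainty grows with depth and
    with a grazing incidence angle (constants are illustrative)."""
    return (sigma0 * depth / max(math.cos(incidence), 0.1)) ** 2

def fuse(mu, var, z, z_var):
    """One Bayesian (Kalman) update of a surfel depth estimate with a new
    measurement z of variance z_var; returns the posterior mean/variance."""
    k = var / (var + z_var)
    return mu + k * (z - mu), (1.0 - k) * var
```

Redundant views therefore shrink each surfel's variance over time, while grazing, far-away beams contribute little, which is the intuition behind the paper's noise suppression.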