5 research outputs found

    Modular approach for odometry localization method for vehicles with increased maneuverability

    Localization and navigation not only provide positioning and route guidance information for users, but are also important inputs for vehicle control. This paper investigates the possibility of using odometry to estimate the position and orientation of a vehicle with a wheel-individual steering system during omnidirectional parking maneuvers. Vehicle models and sensors have been identified for this application. Several odometry versions are designed using a modular approach, developed in this paper to help users design state estimators. The different odometry versions have been implemented and validated both in a simulation environment and in real driving tests. The evaluated results show that the versions using more models, and using state variables within those models, provide both more accurate and more robust estimation.
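The core of any wheel-odometry version is dead reckoning: integrating body-frame velocities into a world-frame pose. The sketch below is a minimal, generic illustration of that step (simple Euler integration of `vx`, `vy`, and yaw rate), not the paper's modular estimator; all names and the sample inputs are assumptions.

```python
import math

def integrate_odometry(pose, vx, vy, omega, dt):
    """Dead-reckon one step: body-frame velocities -> world-frame pose.
    pose = (x, y, theta); returns the updated pose."""
    x, y, theta = pose
    # Rotate the body-frame velocity into the world frame, then Euler-integrate.
    x += (vx * math.cos(theta) - vy * math.sin(theta)) * dt
    y += (vx * math.sin(theta) + vy * math.cos(theta)) * dt
    theta += omega * dt
    return x, y, theta

pose = (0.0, 0.0, 0.0)
for _ in range(100):                     # 1 s of straight driving at 1 m/s
    pose = integrate_odometry(pose, 1.0, 0.0, 0.0, 0.01)
print(pose)  # ≈ (1.0, 0.0, 0.0)
```

For an omnidirectional vehicle the lateral component `vy` is nonzero in general, which is what distinguishes this model from a plain differential-drive one.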

    Velocity Estimation via Wheel Circumference Identification

    The article presents a velocity estimation algorithm based on wheel encoder odometry and wheel circumference identification. The motivation is that a proper model can improve motion estimation when sensor performance is poor: for example, when GNSS signals are unavailable, when vision-based methods fail due to an insufficient number of features, or when IMU-based methods fail due to a lack of frequent accelerations. In these situations, wheel encoders can be an appropriate choice for state estimation. However, this type of estimation suffers from parameter uncertainty. In the paper, a wheel circumference identification is proposed to improve the velocity estimation. The algorithm listens to the incoming sensor measurements and estimates the wheel circumferences recursively with a nonlinear least squares method. The experimental results demonstrate that, with the identified parameters applied in the wheel odometry model, accurate velocity estimation can be obtained at high frequency. Thus, the presented algorithm can improve motion estimation in the driver assistance functions of autonomous vehicles.
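The recursive identification idea can be illustrated with a deliberately simplified model: if a reference speed is available, `v_ref ≈ rev_rate * C` is linear in the circumference `C`, so a scalar recursive least-squares update suffices. This is a hypothetical sketch under that assumption, not the paper's nonlinear method; class and variable names are invented.

```python
class WheelCircumferenceRLS:
    """Scalar recursive least squares for v_ref ≈ rev_rate * C.
    A simplified, hypothetical sketch of recursive identification."""
    def __init__(self, c0=2.0, p0=1.0):
        self.c = c0   # circumference estimate [m]
        self.p = p0   # estimate covariance

    def update(self, rev_rate, v_ref):
        # Kalman-style gain for the scalar regression y = phi * c + noise
        phi = rev_rate
        k = self.p * phi / (1.0 + phi * self.p * phi)
        self.c += k * (v_ref - phi * self.c)   # correct toward the residual
        self.p *= (1.0 - k * phi)              # shrink uncertainty
        return self.c

est = WheelCircumferenceRLS()
true_c = 1.95
for rate in [1.0, 2.0, 1.5, 2.5] * 10:    # wheel revolutions per second
    est.update(rate, rate * true_c)       # noise-free reference speed
print(round(est.c, 3))  # → 1.95
```

Each measurement tightens the covariance `p`, so later (possibly noisier) samples move the estimate less, which matches the "listens to incoming measurements" behavior described above.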

    Approach for reducing the computational cost of environment classification systems for mobile robots

    This dissertation deals with the problem of environment changes in mobile robotics tasks. In particular, it focuses on using one-dimensional non-visual sensors to reduce computational cost. The work presents a new system for detecting and classifying the robot's environment based on data from a camera and non-visual sensors. The non-visual sensors serve as detectors of an ongoing change of environment, which then triggers classification of the environment using camera data. This can significantly reduce computational demands compared to a situation where every frame, or every n-th frame, of the image stream is processed. The system is evaluated on the case of a change between indoor and outdoor environments. The contributions of this work are the following: (1) A proposed system for detection and classification of the environment of a mobile robot; (2) A state-of-the-art analysis in the field of Simultaneous Localization and Mapping (SLAM) to identify open issues that need to be addressed; (3) An analysis of non-visual sensors with respect to their suitability for the change detection problem; (4) An analysis of existing methods for detecting changes in a 2D signal, and the introduction of two simple approaches to this problem; (5) A state-of-the-art analysis in the field of environment classification, with a focus on the classification of indoor vs. outdoor environments; (6) An experiment comparing the methods studied in the previous point. To the best of my knowledge, this is the most extensive comparison of these methods on a single dataset; in addition, classifiers based on neural networks, which achieve better results than the classical approaches, are included in the experiment; (7) The creation of a dataset for testing the designed system on an assembled six-wheeled mobile robot. To the best of my knowledge, there has been no dataset that, in addition to the data needed to solve the SLAM task, adds data allowing the environment to be detected and classified using non-visual sensors; (8) An implementation of the proposed system as an open-source package for the Robot Operating System, published on GitHub; (9) An implementation of a library for computing the Centrist global descriptor in C++ and Python, also available as open source on GitHub.
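The trigger idea, using a cheap non-visual signal to decide when to run the expensive camera classifier, can be sketched with a simple two-window mean-shift test on a one-dimensional signal. This is a generic illustration, not one of the thesis's two proposed detectors; the function name, window size, and lux values are assumptions.

```python
def first_change(signal, window=5, threshold=3.0):
    """Return the index of the newest sample when a jump between the
    means of two adjacent sliding windows is first seen, else None.
    A deliberately simple 2-window change detector."""
    for i in range(2 * window, len(signal) + 1):
        prev = signal[i - 2 * window:i - window]   # older window
        cur = signal[i - window:i]                 # newer window
        if abs(sum(cur) - sum(prev)) / window > threshold:
            return i - 1
    return None

# Indoor light level (~100 lux) jumping to outdoor (~10000 lux):
# the camera-based classifier would be invoked only at this index.
lux = [100.0] * 10 + [10000.0] * 10
print(first_change(lux, window=5, threshold=1000.0))  # → 10
```

Because the detector only compares running sums, it is far cheaper per sample than classifying a camera frame, which is the computational saving the system exploits.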

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required to monitor the environment and act upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets representing a wide range of realistic agricultural environments, including both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced into the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied to the mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. 
    The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, provided accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated by the release of the multi-modal obstacle dataset, FieldSAFE.
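The global fusion step described above is commonly realized as a log-odds occupancy grid, where repeated detections of the same cell accumulate evidence. The snippet below is a textbook-style sketch of that Bayes update for a single cell, not the thesis implementation; the sensor-model probabilities (0.7/0.3) are assumptions.

```python
import math

# Log-odds increments for an assumed inverse sensor model:
# p(occupied | hit) = 0.7, p(occupied | miss) = 0.3.
L_OCC = math.log(0.7 / 0.3)
L_FREE = math.log(0.3 / 0.7)

def update_cell(logodds, hit):
    """Bayes update of one cell's log-odds given a hit/miss detection."""
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

l = 0.0                      # prior: p = 0.5 (unknown)
for _ in range(3):           # three consistent obstacle detections
    l = update_cell(l, True)
print(round(probability(l), 3))  # → 0.927
```

Working in log-odds keeps the per-detection update to a single addition, which is why the representation scales to fusing many detections from lidar, camera, and radar.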

    Industrial Robotics

    This book covers a wide range of topics relating to advanced industrial robotics, sensors, and automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest cutting-edge technologies and advancements in industrial robotics technology. The book covers topics such as networking, properties of manipulators, forward and inverse robot arm kinematics, motion path-planning, machine vision, and many other practical topics too numerous to list here. The authors and editor of this book wish to inspire people, especially young people, to get involved with robotic and mechatronic engineering technology and to develop new and exciting practical applications, perhaps using the ideas and concepts presented herein.