27 research outputs found

    On the Enhancement of the Localization of Autonomous Mobile Platforms

    The focus of many industrial and research entities on achieving full robotic autonomy has increased in the past few years. A fundamental problem on the way to full robotic autonomy is localization: the ability of a mobile platform to determine its position and orientation in the environment. In this thesis, several problems related to the localization of autonomous platforms are addressed, namely visual odometry accuracy and robustness, uncertainty estimation in odometries, and accurate multi-sensor fusion-based localization. Besides localization, the control of mobile manipulators is also tackled. First, a generic image processing pipeline is proposed which, when integrated with a feature-based Visual Odometry (VO), can enhance robustness and accuracy and reduce the accumulation of errors (drift) in the pose estimation. Since odometries (e.g. wheel odometry, LiDAR odometry, or VO) suffer from drift errors due to integration, and since such errors need to be quantified in order to achieve accurate localization through multi-sensor fusion schemes (e.g. extended or unscented Kalman filters), a covariance estimation algorithm is then proposed, which estimates the uncertainty of odometry measurements using another sensor that does not rely on integration. Furthermore, optimization-based multi-sensor fusion techniques are known to achieve better localization results than filtering techniques, but at higher computational cost. Consequently, an efficient and generic multi-sensor fusion scheme based on Moving Horizon Estimation (MHE) is developed. The proposed fusion scheme can operate with any number of sensors and accounts for different sensor measurement rates, missing measurements, and outliers. Moreover, it is built on a multi-threading architecture in order to reduce its computational cost, making it more feasible for practical applications. Finally, since the main purpose of accurate localization is navigation, the last part of this thesis focuses on developing a stabilization controller for a 10-DOF mobile manipulator based on Model Predictive Control (MPC). All of the aforementioned contributions are validated using numerical simulations, real data from the EU Long-term, KITTI, and TUM datasets, and/or experimental sequences using an omni-directional mobile robot. The results show the efficacy and importance of each part of the proposed work.
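
    Since MHE solves a sliding-window optimization at every step, a compact illustration may help. The following is a minimal sketch, assuming a 2D position state, a relative-motion (odometry) model, and two absolute-position sensors running at different rates; all names and noise values are illustrative, not the thesis implementation. A robust (Huber) loss stands in for the outlier handling mentioned above.

        import numpy as np
        from scipy.optimize import least_squares

        HORIZON = 10          # number of time steps kept in the sliding window
        DT = 0.1              # sample period [s]

        def residuals(x, odom, fixes):
            """x: flattened (HORIZON, 2) positions; odom: (HORIZON-1, 2) relative
            displacements; fixes: list of (step, measurement, sigma) tuples."""
            p = x.reshape(HORIZON, 2)
            res = []
            for k in range(HORIZON - 1):                # motion-model residuals
                res.append((p[k + 1] - p[k] - odom[k]) / 0.05)
            for k, z, sigma in fixes:                   # measurement residuals
                res.append((p[k] - z) / sigma)
            return np.concatenate(res)

        # Synthetic window: straight-line motion observed by two sensors with
        # different rates and accuracies (hypothetical values).
        truth = np.cumsum(np.tile([0.5 * DT, 0.0], (HORIZON, 1)), axis=0)
        odom = np.diff(truth, axis=0) + 0.01 * np.random.randn(HORIZON - 1, 2)
        fixes = [(k, truth[k] + 0.05 * np.random.randn(2), 0.05)
                 for k in range(0, HORIZON, 2)]         # sensor 1: every 2nd step
        fixes += [(k, truth[k] + 0.02 * np.random.randn(2), 0.02)
                  for k in range(0, HORIZON, 5)]        # sensor 2: every 5th step

        # The Huber loss makes the window estimate robust to measurement outliers.
        sol = least_squares(residuals, truth.ravel() + 0.1, loss="huber",
                            args=(odom, fixes))
        print(sol.x.reshape(HORIZON, 2)[-1])            # latest pose estimate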

    Advanced Integration of GNSS and External Sensors for Autonomous Mobility Applications

    The abstract is in the attachment.

    Visual odometry system for monocular localization

    This document addresses the problem of positioning a mobile robot by means of monocular visual odometry, a technique which, as its name indicates, uses a single camera as its sensor. Cameras are relatively low-cost sensors that also have the advantage of providing a large amount of information about the environment surrounding the mobile robot. In this project, a monocular visual odometry algorithm is designed for real-time operation with a relatively high image capture rate (30 fps). The state of the art describes the existing techniques for tackling monocular visual odometry. We discuss the mathematical model used to project three-dimensional points in the real world onto the two-dimensional image plane, the various means of extracting relevant information from an image, and how that information is used to obtain the trajectory travelled by the mobile robot. The third section details the method proposed to solve the monocular visual odometry problem. A fourth section then analyses the results produced by the proposed algorithm. Finally, a series of improvements is proposed for future implementation, since this work belongs to an area of study currently under active research. Universidad de Sevilla. Degree in Industrial Technologies Engineering.
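
    For illustration, the following is a minimal sketch of the pinhole projection model mentioned above, which maps a 3D point in the camera frame to 2D pixel coordinates; the intrinsic values (focal lengths, principal point) are placeholder assumptions.

        import numpy as np

        # Assumed intrinsics: focal lengths fx, fy and principal point (cx, cy).
        K = np.array([[700.0,   0.0, 320.0],
                      [  0.0, 700.0, 240.0],
                      [  0.0,   0.0,   1.0]])

        def project(point_cam):
            """Project a 3D point (camera frame, Z > 0) to pixel coordinates."""
            uvw = K @ point_cam
            return uvw[:2] / uvw[2]      # perspective division

        print(project(np.array([0.1, -0.2, 2.0])))   # -> pixel (u, v)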

    Robotic 3D Reconstruction Utilising Structure from Motion

    Sensing the real world is a well-established and continual problem in the field of robotics. Investigations into autonomous aerial and underwater vehicles have extended this challenge into sensing, mapping and localising in three dimensions. This thesis seeks to understand and tackle the challenges of recovering 3D information from an environment using vision alone. There is a well-established literature on the principles of doing this, and some impressive demonstrations, but this thesis explores the practicality of doing vision-based 3D reconstruction using multiple, mobile robotic platforms, the emphasis being on producing accurate 3D models. Typically, robotic platforms such as UAVs have a single on-board camera, restricting which method of visual 3D recovery can be employed. This thesis specifically explores Structure from Motion, a monocular 3D reconstruction technique which produces detailed and accurate, although slow to calculate, 3D reconstructions. It examines how well proof-of-concept demonstrations translate onto the kinds of robotic systems that are commonly deployed in the real world, where local processing is limited and network links have restricted capacity. In order to produce accurate 3D models, it is necessary to use high-resolution imagery, and the difficulties of working with this on remote robotic platforms are explored in some detail.
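
    As a rough sketch of the two-view core of Structure from Motion, the snippet below matches features between two images, recovers the relative camera pose from the essential matrix, and triangulates a sparse point cloud; it assumes a calibrated camera with hypothetical intrinsics K and placeholder image paths, and illustrates the technique rather than the system built in the thesis.

        import cv2
        import numpy as np

        # Placeholder inputs: two overlapping views and assumed intrinsics K.
        img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
        K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])

        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])

        # Relative pose from the essential matrix (RANSAC rejects mismatches),
        # then triangulation recovers sparse 3D structure up to global scale.
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        points3d = (pts4[:3] / pts4[3]).T         # Euclidean 3D points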

    Bioinspired symmetry detection on resource limited embedded platforms

    This work is inspired by the vision of flying insects, which enables them to detect and locate a set of relevant objects with remarkable effectiveness despite very limited brainpower. The bioinspired approach worked out here focuses on detection of symmetric objects to be performed by resource-limited embedded platforms such as micro air vehicles. Symmetry detection is posed as a pattern matching problem that is solved by an approach based on the use of composite correlation filters. Two variants of the approach are proposed, analysed and tested, in which symmetry detection is cast as (1) a static and (2) a dynamic pattern matching problem. In the static variant, images of objects are input to two-dimensional spatial composite correlation filters. In the dynamic variant, a video (resulting from platform motion) is input to a composite correlation filter whose peak response is used to define symmetry. In both cases, a novel method is used for designing the composite filter templates for symmetry detection. This method significantly reduces the level of detail which needs to be matched to achieve good detection performance. The resulting performance is systematically quantified using ROC analysis; it is demonstrated that the bioinspired approach outperforms the best state-of-the-art solution hitherto available, at a lower computational cost.
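
    To make the correlation-filter idea concrete, the sketch below builds a plain matched-filter composite (the average of the conjugate spectra of a few training patches) and locates the pattern at the correlation peak; this is a simplification of the filter design method proposed in the work, shown here on synthetic data.

        import numpy as np

        def correlate(image, patches):
            """FFT-based correlation with a composite (averaged) matched filter."""
            H = np.mean([np.conj(np.fft.fft2(p, s=image.shape)) for p in patches],
                        axis=0)                          # composite filter spectrum
            plane = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
            peak = np.unravel_index(np.argmax(plane), plane.shape)
            return peak, plane[peak]                     # location and peak response

        # Toy usage: locate a bright square from two slightly shifted examples.
        img = np.zeros((64, 64))
        img[20:28, 30:38] = 1.0
        train = [img[18:30, 28:40], img[19:31, 29:41]]
        print(correlate(img, train))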

    Development of GNSS/INS/SLAM Algorithms for Navigation in Constrained Environments

    For land vehicles, the requirements on the navigation solution in terms of accuracy, integrity, continuity and availability are more and more stringent, especially with the development of autonomous vehicles. This type of application requires a navigation system that is not only capable of continuously providing an accurate and reliable position, velocity and attitude solution but that also has a reasonable cost. In the last decades, GNSS has been the most widely used navigation system, especially with the decreasing cost of receivers over the years. However, despite its capability to provide absolute navigation information with long-term accuracy, this system suffers from problems related to signal propagation, especially in urban environments where buildings, trees and other structures hinder the reception of GNSS signals and degrade their quality. This can result in significant positioning errors, in some cases exceeding a kilometer. Many techniques are proposed in the literature to mitigate these problems and improve GNSS accuracy; unfortunately, all of them have limitations. A possible way to overcome these problems is to fuse "good" GNSS measurements with other sensors having complementary characteristics. In fact, by exploiting the complementarity of sensors, hybridization algorithms can improve the navigation solution compared to the solutions provided by each stand-alone sensor. Generally, the most widely implemented hybridization algorithms for land vehicles fuse GNSS measurements with inertial and/or odometric data. These Dead-Reckoning (DR) sensors ensure the system's continuity when GNSS information is unavailable and improve its performance when GNSS signals are degraded; in return, GNSS, when available, limits the drift of the DR solution. However, the performance achieved by this hybridization depends strongly on the quality of the DR sensor used, especially when GNSS signals are degraded or unavailable. Therefore, this Ph.D. thesis, which is part of a common French research project involving two laboratories and three companies, aims at extending the classical hybridization architecture by including other sensors capable of improving the navigation performance while having a low cost and being easily embeddable. For this reason, the use of vision-based navigation techniques to provide additional information is proposed in this thesis. In fact, cameras have recently become an attractive positioning sensor with the development of Visual Odometry and Simultaneous Localization and Mapping (SLAM) techniques, which are capable of providing an accurate navigation solution at reasonable cost. In addition, visual navigation solutions perform well in textured environments, where GNSS is likely to perform poorly. Therefore, this work focuses on developing a multi-sensor fusion architecture integrating visual information with the previously mentioned sensors. In particular, the contribution of this information to improving the performance of the vision-free navigation system is highlighted. The proposed architecture respects the project constraints of developing a versatile, modular, low-cost system capable of continuously providing a good navigation solution, in which each sensor may be easily discarded when its information should not be used in the navigation solution.
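
    As a toy illustration of the loosely coupled GNSS/DR hybridization described above (not the architecture developed in the thesis), the sketch below propagates a 2D position with dead-reckoning increments and corrects it with GNSS fixes whenever they are available; all noise levels and rates are assumptions.

        import numpy as np

        x = np.zeros(2)              # position estimate [m]
        P = np.eye(2) * 10.0         # estimate covariance
        Q = np.eye(2) * 0.02         # DR process noise per step (assumed)
        R = np.eye(2) * 4.0          # GNSS measurement noise (assumed)

        def predict(dr_increment):
            """DR step: integrate the odometric displacement (drift grows)."""
            global x, P
            x = x + dr_increment
            P = P + Q

        def update(gnss_fix):
            """GNSS step: an absolute fix bounds the accumulated DR drift."""
            global x, P
            K = P @ np.linalg.inv(P + R)       # Kalman gain
            x = x + K @ (gnss_fix - x)
            P = (np.eye(2) - K) @ P

        for k in range(100):
            predict(np.array([0.1, 0.0]))      # DR available at every step
            if k % 10 == 0:                    # GNSS only when signals usable
                update(np.array([0.1 * (k + 1), 0.0]))
        print(x, np.trace(P))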

    Combined visual odometry and visual compass for off-road mobile robots localization

    In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which estimates the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed: one camera pointing at the ground under the robot, and the other looking at the surrounding environment. Comparisons with popular localization approaches, through physical experiments in off-road conditions, have shown the satisfactory behavior of the proposed strategy.
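
    A minimal sketch of the two-camera scheme may be useful, under simplifying assumptions: displacement is estimated by template matching between consecutive downward-looking ground images, and the heading change by matching a strip of the environment-facing image; the scale factors are hypothetical calibration constants, and this is an illustration rather than the authors' implementation.

        import cv2
        import numpy as np

        M_PER_PIXEL = 0.002     # ground-camera scale (hypothetical calibration)
        DEG_PER_PIXEL = 0.1     # compass scale from the camera FOV (hypothetical)

        def ground_displacement(prev, curr):
            """Robot translation from two consecutive ground images."""
            h, w = prev.shape
            tmpl = prev[h // 4: 3 * h // 4, w // 4: 3 * w // 4]  # central patch
            res = cv2.matchTemplate(curr, tmpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, loc = cv2.minMaxLoc(res)      # best-match position
            dx = (loc[0] - w // 4) * M_PER_PIXEL
            dy = (loc[1] - h // 4) * M_PER_PIXEL
            return dx, dy

        def heading_change(prev, curr):
            """Yaw change from the horizontal shift of the environment image."""
            w = prev.shape[1]
            strip = prev[:, w // 4: 3 * w // 4]
            res = cv2.matchTemplate(curr, strip, cv2.TM_CCOEFF_NORMED)
            _, _, _, loc = cv2.minMaxLoc(res)
            return (loc[0] - w // 4) * DEG_PER_PIXEL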

    Application of computer vision for roller operation management

    Compaction is the last and possibly the most important phase in construction of asphalt concrete (AC) pavements. Compaction densifies the loose AC mat, producing a stable surface with low permeability, and the process strongly affects the AC performance properties. Too much compaction may cause aggregate degradation and low air void content, facilitating bleeding and rutting. On the other hand, too little compaction may result in higher air void content, facilitating oxidation and water permeability issues, rutting due to further densification by traffic, and reduced fatigue life. Therefore, compaction is a critical issue in AC pavement construction.

    The common practice for compacting a mat is to establish a roller pattern that determines the number of passes and coverages needed to achieve the desired density. Once the pattern is established, the roller's operator must maintain the roller pattern uniformly over the entire mat. Despite the importance of uniform compaction to achieve the expected durability and performance of AC pavements, having the roller operator as the only means to manage the operation can involve human errors.

    With the advancement of technology in recent years, the concept of intelligent compaction (IC) was developed to assist roller operators and improve construction quality. Commercial IC packages for construction rollers are available from different manufacturers. They can provide precise mapping of a roller's location and provide the roller operator with feedback during the compaction process. Although the IC packages are able to track roller passes with impressive results, there are also major hindrances: the high cost of acquisition and the potential negative impact on productivity have inhibited implementation of IC.

    This study applied computer vision technology to build a versatile and affordable system to count and map roller passes. An infrared camera is mounted on top of the roller to capture the operator's view. Then, in a near real-time process, image features are extracted and tracked to estimate the incremental rotation and translation of the roller. Image features are categorized into near and distant features based on a user-defined horizon. The optical flow is estimated for near features located in the region below the horizon. The change in the roller's heading is constantly estimated from the distant features located in the sky region. Using the roller's rotation angle, the incremental translation between two frames is calculated from the optical flow. The roller's incremental rotation and translation are combined to develop a tracking map.

    During system development, it was noted that in environments with thermal uniformity, the background of the IR images exhibits fewer features than images captured with optical cameras, which are insensitive to temperature. This issue is more significant overnight, since natural elements are not able to reflect heat energy from the sun. Therefore, to improve the roller's heading estimation when fewer features are available in the sky region, a unique methodology allowing heading detection based on the asphalt mat edges was developed for this research. The heading measurements based on the slope of the asphalt's hot edges are added to the pool of headings measured from the sky region. The median of all heading measurements is used as the incremental roller rotation for the tracking analysis.

    The record of tracking data is used for QC/QA purposes and for verifying proper implementation of the roller pattern throughout a job constructed under roller pass specifications. The system developed during this research was successful in mapping roller location for the few projects tested; however, the system should be independently validated.
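
    The following is a minimal sketch of one tracking step as described above, assuming grayscale IR frames and a user-defined horizon row: sparse Lucas-Kanade optical flow is computed, features are split into distant (sky region, used for heading) and near (mat region, used for translation), and the median of the heading measurements serves as the robust incremental rotation. Variable names and the pixel-based units are illustrative assumptions.

        import cv2
        import numpy as np

        HORIZON_ROW = 120    # user-defined horizon (pixel row), assumed

        def track_step(prev, curr):
            """One incremental rotation/translation estimate from two frames."""
            pts = cv2.goodFeaturesToTrack(prev, maxCorners=300,
                                          qualityLevel=0.01, minDistance=7)
            nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
            ok = status.ravel() == 1
            p0 = pts.reshape(-1, 2)[ok]
            p1 = nxt.reshape(-1, 2)[ok]
            flow = p1 - p0
            distant = p0[:, 1] < HORIZON_ROW   # sky region -> heading only
            near = ~distant                    # mat region -> translation
            # Median of the distant-feature shifts: robust incremental heading
            # (in pixels here; a calibration scale would convert to an angle).
            dtheta = np.median(flow[distant, 0]) if distant.any() else 0.0
            # Crude rotation compensation before averaging the near-feature flow.
            trans = ((flow[near] - [dtheta, 0.0]).mean(axis=0)
                     if near.any() else np.zeros(2))
            return dtheta, trans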

    Optical flow templates for mobile robot environment understanding

    In this work we develop optical flow templates, introducing a practical tool for inferring robot egomotion and semantic superpixel labeling using optical flow in imaging systems with arbitrary optics. In doing so, we develop geometric relationships and mathematical methods useful to the robotics and computer vision communities for interpreting optical flow. This work is motivated by what we perceive as directions for advancing the current state of the art in obstacle detection and scene understanding for mobile robots. Specifically, many existing methods build 3D point clouds, which are not directly useful for autonomous navigation and require further processing; both the step of building the point clouds and the later processing steps are challenging and computationally intensive. Additionally, many current methods require a calibrated camera, which introduces calibration challenges and places limitations on the types of camera optics that may be used. Wide-angle lenses, systems with mirrors, and multiple cameras all require different calibration models and can be difficult or impossible to calibrate. Finally, current pixel and superpixel obstacle labeling algorithms typically rely on image appearance. While image appearance is informative, image motion is a direct effect of the scene structure that determines whether a region of the environment is an obstacle. The egomotion estimation and obstacle labeling methods we develop here based on optical flow templates require very little computation per frame and do not require building point clouds. Additionally, they do not require any specific type of camera optics, nor a calibrated camera. Finally, they label obstacles using optical flow alone, without image appearance. In this thesis we start with optical flow subspaces for egomotion estimation and detection of "motion anomalies". We then extend this to multiple subspaces and develop mathematical reasoning to select between them, comprising optical flow templates. Using these we classify environment shapes and label superpixels. Finally, we show how performing all learning and inference directly from image spatio-temporal gradients greatly improves computation time and accuracy.
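
    As an illustration of the subspace idea (not the thesis' learning method), the sketch below fits a measured flow field to a low-dimensional linear flow basis by least squares; the egomotion is read off the fitted coefficients, and large residuals flag motion anomalies such as obstacles. The basis here is random purely for demonstration; in practice it would be learned or derived from the optics.

        import numpy as np

        n_pix, n_basis = 1000, 6
        B = np.random.randn(2 * n_pix, n_basis)   # stacked (u, v) flow basis

        def egomotion_and_anomalies(flow, thresh=3.0):
            """flow: (2*n_pix,) measured flow, u components then v components."""
            coeffs, *_ = np.linalg.lstsq(B, flow, rcond=None)  # projection
            r = (flow - B @ coeffs).reshape(2, n_pix)          # residual flow
            anomaly = np.hypot(r[0], r[1]) > thresh            # per-pixel mask
            return coeffs, anomaly

        flow = B @ np.random.randn(n_basis)       # synthetic egomotion flow
        flow[::50] += 10.0                        # inject anomalous pixels
        coeffs, mask = egomotion_and_anomalies(flow)
        print(mask.sum(), "anomalous pixels flagged")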