120 research outputs found

    Construction and Calibration of a Low-Cost 3D Laser Scanner with 360° Field of View for Mobile Robots

    Get PDF
    Navigation of many mobile robots relies on environmental information obtained from three-dimensional (3D) laser scanners. This paper presents a new 360° field-of-view 3D laser scanner for mobile robots that avoids the high cost of commercial devices. The 3D scanner is based on spinning a Hokuyo UTM-30LX-EX two-dimensional (2D) rangefinder around its optical center. The proposed design profits from lessons learned during the development of a previous 3D scanner with pitching motion. Intrinsic calibration of the new device has been performed to obtain both temporal and geometric parameters. The paper also shows the integration of the 3D device in the outdoor mobile robot Andabata.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tec
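As a rough illustration of the geometry involved, the sketch below converts one 2D scan into 3D points given the current spin angle about the vertical axis. The frame conventions and function name are assumptions for illustration; the paper's actual temporal and geometric calibration parameters are not reproduced here.

```python
import numpy as np

def scan_to_cloud(ranges, bearings, spin_angle):
    """Convert one 2D range scan into 3D points.

    `bearings` sweep inside the (vertical) scan plane; `spin_angle` is
    the rotation of that plane about the robot's vertical z axis. A real
    device also needs the temporal calibration mentioned in the paper
    (interpolating the spin angle per beam), omitted here for brevity.
    """
    # Beam endpoints in the scan plane (x forward, z up, y == 0).
    x = ranges * np.cos(bearings)
    z = ranges * np.sin(bearings)
    # Spin the scan plane about the vertical axis.
    return np.stack([x * np.cos(spin_angle),
                     x * np.sin(spin_angle),
                     z], axis=1)
```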

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Get PDF
    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, they fail to fully satisfy all the evaluation criteria, including accuracy, automation, and robustness. Thus, this review aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the various characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration are two types of feature-based calibration, which are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research directions. Future research should focus primarily on the capability of online targetless calibration and systematic multi-modal sensor calibration.
    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0

    Dataset of Panoramic Images for People Tracking in Service Robotics

    Get PDF
    We provide a framework for constructing a guide robot for use in hospitals in this thesis. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when directing the individual to their preferred position in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate our robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video together with their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we want to contribute to the continuing effort to enhance the precision and dependability of these tracking systems, which is essential for the creation of effective guide robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing staff time spent guiding patients through the facility.
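As a sketch of the kind of geometry such an auto-labeling framework needs, the snippet below projects a person's known 3D position in the camera frame onto an equirectangular panorama. The full-sphere panorama model and the function name are assumptions for illustration, not details taken from the thesis.

```python
import math

def project_equirectangular(x, y, z, width, height):
    """Project a 3D point in the camera frame onto an equirectangular
    panorama (u to the right, v downward, in pixels).

    Assumes the panorama covers the full 360° x 180° sphere and the
    omnidirectional camera sits at the origin of the frame.
    """
    azimuth = math.atan2(y, x)                   # in [-pi, pi]
    elevation = math.atan2(z, math.hypot(x, y))  # in [-pi/2, pi/2]
    u = (azimuth / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - elevation / math.pi) * height
    return u, v
```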

    Contributions to metric-topological localization and mapping in mobile robotics

    Get PDF
    This thesis addresses the problem of localization and mapping in mobile robotics. The ability of a robot to build a map of an unknown environment from sensory information is required to perform self-localization and autonomous navigation, as a necessary condition to carry out more complex tasks. This problem has been widely investigated in the last decades, but the solutions presented still have important limitations, mainly in coping with large-scale and dynamic environments and in working in a wider range of conditions and scenarios. In this context, this thesis takes a step forward towards highly efficient localization and mapping.
    A first contribution of this work is a new mapping strategy that presents two key features: the lightweight representation of world metric information, and the organization of this metric map into a topological structure that allows efficient localization and map optimization. Regarding the first issue, a map is proposed based on planar patches which are extracted from range or RGB-D images. This plane-based map (PbMap) is particularly well suited for indoor scenarios, and has the advantage of being a very compact and yet descriptive representation which is useful to perform real-time place recognition and loop closure. These operations are based on matching planar features taking into account their geometric relationships. On the other hand, the abstraction of metric information is necessary to deal with large-scale SLAM and with navigation in complex environments. For that, we propose to structure the map in a metric-topological structure which is dynamically organized upon the sensor observations.
    Also, a simultaneous localization and mapping (SLAM) system employing an omnidirectional RGB-D device which combines several structured-light sensors (Asus Xtion Pro Live) is presented. This device allows the quick construction of rich models of the environment at a relatively low cost in comparison with previous alternatives. Our SLAM approach is based on a hierarchical structure of keyframes with a low-level layer of metric information and several topological layers intended for large-scale SLAM and navigation. This SLAM solution, which makes use of the metric-topological representation mentioned above, works at video frame rate, obtaining highly consistent maps. Future research is expected on metric-topological-semantic mapping from the new sensor and the SLAM system presented here.
    Finally, an extrinsic calibration technique is proposed to obtain the relative poses of a combination of 3D range sensors, like those employed in the omnidirectional RGB-D device mentioned above. The calibration is computed from the observation of planar surfaces of a structured environment in a fast, easy and robust way, presenting qualitative and quantitative advantages with respect to previous approaches. This technique is extended to calibrate any combination of range sensors, including 2D and 3D range sensors, in any configuration. The calibration of such sets of sensors is interesting not only for mobile robots, but also for autonomous cars.
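A minimal sketch of the plane-patch idea behind such a PbMap, assuming patches are stored as a unit normal and offset (n·p = d). The fitting method, feature set and matching thresholds here are illustrative assumptions, not the thesis's actual descriptors.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point patch: returns a unit
    normal n and offset d with n . p ~= d for points p on the plane."""
    centroid = points.mean(axis=0)
    # Smallest right singular vector of the centered points = normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, float(n @ centroid)

def planes_match(n1, d1, n2, d2, max_angle_deg=5.0, max_dist=0.05):
    """Coarse correspondence test between two patches in one frame.
    abs() handles the sign ambiguity of the fitted normal direction."""
    ang = np.degrees(np.arccos(np.clip(abs(n1 @ n2), -1.0, 1.0)))
    return ang <= max_angle_deg and abs(abs(d1) - abs(d2)) <= max_dist
```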

    Global Optimality via Tight Convex Relaxations for Pose Estimation in Geometric 3D Computer Vision

    Get PDF
    In this thesis, we address a set of fundamental problems whose core difficulty boils down to optimizing over 3D poses. This includes many geometric 3D registration problems, covering well-known problems with a long research history such as the Perspective-n-Point (PnP) problem and its generalizations, extrinsic sensor calibration, or even the gold standard for Structure from Motion (SfM) pipelines: the relative pose problem from corresponding features. Likewise, this is also the case for a close relative of SLAM, Pose Graph Optimization (also commonly known as Motion Averaging in SfM). The crux of this thesis's contribution revolves around the successful characterization and development of empirically tight (convex) semidefinite relaxations for many of the aforementioned core problems of 3D Computer Vision. Building upon these empirically tight relaxations, we are able to find and certify the globally optimal solution to these problems with algorithms whose performance ranges, as of today, from efficient, scalable approaches comparable to fast second-order local search techniques to polynomial-time (worst case) methods. To conclude, our research reveals that an important subset of core problems that has historically been regarded as hard, and thus dealt with mostly in empirical ways, is indeed tractable with optimality guarantees.
    Artificial Intelligence (AI) drives a lot of the services and products we use every day. But for AI to bring its full potential into daily tasks, with technologies such as autonomous driving, augmented reality or mobile robots, AI needs to be not only intelligent but also perceptive. In particular, the ability to see and to construct an accurate model of the environment is an essential capability for building intelligent perceptive systems. The ideas developed in Computer Vision over the last decades in areas such as Multiple View Geometry and Optimization, put together into 3D reconstruction algorithms, seem to be mature enough to nurture a range of emerging applications that already employ 3D Computer Vision in the background. However, while there is a positive trend in the use of 3D reconstruction tools in real applications, there are also some fundamental limitations regarding reliability and performance guarantees that may hinder wider adoption, e.g. in more critical applications involving people's safety such as autonomous navigation. State-of-the-art 3D reconstruction algorithms typically formulate the reconstruction problem as a Maximum Likelihood Estimation (MLE) instance, which entails solving a high-dimensional, non-convex, non-linear optimization problem. In practice, this is done via fast local optimization methods that have enabled fast and scalable reconstruction pipelines, yet lack guarantees on most of the building blocks, leaving us with fundamentally brittle pipelines where no guarantees exist.
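Schematically, the semidefinite relaxations referred to above follow the standard QCQP-to-SDP (Shor) lifting; the notation below is a generic sketch of that construction, not the thesis's exact formulation.

```latex
% Pose estimation as a QCQP: x stacks the unknown pose parameters,
% e.g. x = \operatorname{vec}(R) for R \in SO(3).
\min_{x}\; x^{\top} Q x
\quad \text{s.t.}\quad x^{\top} A_i x = b_i,\; i = 1,\dots,m
% (the A_i encode, e.g., the orthogonality constraints R^{\top} R = I).
% Lifting X = x x^{\top} and dropping the rank(X) = 1 constraint
% yields the convex semidefinite relaxation:
\min_{X \succeq 0}\; \operatorname{tr}(Q X)
\quad \text{s.t.}\quad \operatorname{tr}(A_i X) = b_i,\; i = 1,\dots,m
% The relaxation is tight when the optimal X^\star has rank one; then
% x^\star is recovered by factorization and certified globally optimal.
```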

    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    Full text link
    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.
    Comment: Pre-peer-reviewed version of the article accepted in Journal of Field Robotic

    Obstacle avoidance for an autonomous Rover

    Get PDF
    This project presents the improvement of an autonomous rover to make it capable of performing 3D obstacle detection and avoidance. To do so, the previous architecture has been renovated with new hardware and software. A calibration of the new sensor and validation tests have been performed and presented.

    Using a Deep Learning Model on Images to Obtain a 2D Laser People Detector for a Mobile Robot

    Get PDF
    Recent improvements in deep learning techniques applied to images allow the detection of people with a high success rate. However, other types of sensors, such as laser rangefinders, are still useful due to their wide field of view and their ability to operate in different environments and lighting conditions. In this work we use a deep learning method to detect people in images taken by a mobile robot. The masks of the people in the images are used to automatically label a set of samples formed by 2D laser range data that will allow us to detect the legs of people present in the scene. The samples are geometric characteristics of the clusters built from the laser data. Machine learning algorithms are used to learn a classifier that is capable of detecting people from 2D laser range data alone. Our people detector is compared to a state-of-the-art classifier. Our proposal achieves a higher F1 value on the test set using an unbalanced dataset. To improve accuracy, the final classifier has been generated from a balanced training set. This final classifier has also been evaluated using a test set on which we obtained very high accuracy values in each class. The contribution of this work is twofold. On the one hand, our proposal performs automatic labeling of the samples so that the dataset can be collected under real operating conditions. On the other hand, the robot can detect people in a wider field of view than if we only used a camera, and in this way can help build more robust behaviors.
    This work has been supported by the Spanish Government TIN2016-76515-R Grant, supported with FEDER funds.
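A minimal sketch of the classic pipeline the abstract builds on (cluster the ordered scan at range discontinuities, compute per-cluster geometric features, feed them to a classifier). The jump-distance threshold and the particular features below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def segment_scan(points, jump=0.15):
    """Split an ordered 2D scan (Nx2 array, in meters) into clusters
    wherever consecutive points are farther apart than `jump`."""
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1) > jump
    return np.split(points, np.flatnonzero(gaps) + 1)

def leg_features(cluster):
    """A few classic geometric descriptors computed per cluster:
    number of points, endpoint width, mean spread about the centroid.
    These vectors would feed the learned leg/non-leg classifier."""
    width = float(np.linalg.norm(cluster[-1] - cluster[0]))
    centroid = cluster.mean(axis=0)
    spread = float(np.mean(np.linalg.norm(cluster - centroid, axis=1)))
    return np.array([len(cluster), width, spread])
```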