6 research outputs found

    3D-Odometry for rough terrain - Towards real 3D navigation

    Until recently, autonomous mobile robots were mostly designed to run in indoor environments that are at least partly structured and flat. In rough terrain, many new problems arise and position tracking becomes more difficult: the robot has to deal with wheel slippage and large orientation changes. In this paper we first present recent developments on the off-road rover Shrimp. We then develop a new method, called 3D-Odometry, which extends standard 2D odometry to 3D space. Because it accounts for terrain transitions, 3D-Odometry provides better position estimates and is a step towards real 3D navigation for outdoor robots.
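    The core idea can be illustrated with a minimal Python sketch: a planar wheel-odometry increment is rotated into the world frame using the rover's current 3D attitude, so that travel on slopes updates all three coordinates. The differential-drive kinematics and the external attitude source (e.g. an IMU or inclinometers) are simplifying assumptions for illustration; the actual 3D-Odometry method also exploits the rover's bogie kinematics to handle slope transitions.

        import numpy as np

        def rotation_matrix(roll, pitch, yaw):
            # Z-Y-X Euler rotation from body frame to world frame.
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            return Rz @ Ry @ Rx

        def odometry_step_3d(position, attitude, d_left, d_right, track_width):
            # One differential-drive increment: forward travel along the body
            # x-axis, rotated into 3D by the current attitude (roll, pitch, yaw).
            ds = 0.5 * (d_left + d_right)               # forward travel
            dyaw = (d_right - d_left) / track_width     # heading change
            roll, pitch, yaw = attitude
            R = rotation_matrix(roll, pitch, yaw)
            new_position = position + R @ np.array([ds, 0.0, 0.0])
            return new_position, (roll, pitch, yaw + dyaw)

    In plain 2D odometry the same increment would only change x and y; here a non-zero pitch sends part of the travel into z, which is what allows driving over slopes to be tracked in 3D.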

    Sensor Network Based Collision-Free Navigation and Map Building for Mobile Robots

    Safe robot navigation is a fundamental research field for autonomous robots, including ground mobile robots and flying robots. The primary objective of a safe navigation algorithm is to guide an autonomous robot from its initial position to a target, or along a desired path, while avoiding obstacles. With the development of information and sensor technology, implementations combining robotics with sensor networks have become a focus of recent research; one relevant implementation is sensor-network-based robot navigation. Another important navigation problem in robotics is safe area search and map building. In this report, a global collision-free path planning algorithm for ground mobile robots in dynamic environments is presented first. Exploiting the advantages of a sensor network, this path planning algorithm is then developed into a sensor-network-based navigation algorithm for ground mobile robots. A network of 2D range finders detects static and dynamic obstacles, and the sensor network guides each ground mobile robot through the detected safe area to the target. The navigation algorithm is then extended to 3D environments: with the measurements of the sensor network, any flying robot in the workspace is navigated from its initial position to the target. Finally, the report studies another navigation problem, safe area search and map building for ground mobile robots, and presents two algorithms. In the first, a ground mobile robot equipped with a 2D range finder searches a bounded 2D area without any collision and builds a complete 2D map of the area; this map building algorithm is then extended to 3D map building.
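    The report's own planner is not detailed here, but a generic sketch conveys the flavour of collision-free planning over the safe area observed by the sensor network: range-finder detections are rasterised into an occupancy grid and a shortest obstacle-free path is searched with A*. The grid representation and the A* formulation are illustrative assumptions, not the algorithm from the report.

        import heapq
        import numpy as np

        def plan_path(occupancy, start, goal):
            # A* over a 2D occupancy grid (True = obstacle reported by the
            # sensor network). Returns a list of grid cells or None.
            rows, cols = occupancy.shape
            def h(c):
                return abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
            frontier = [(h(start), 0, start, None)]
            parents, best_g = {}, {start: 0}
            while frontier:
                _, g, cell, parent = heapq.heappop(frontier)
                if cell in parents:
                    continue                      # already expanded
                parents[cell] = parent
                if cell == goal:                  # reconstruct the path
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = parents[cell]
                    return path[::-1]
                r, c = cell
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                            and not occupancy[nxt]):
                        if g + 1 < best_g.get(nxt, float("inf")):
                            best_g[nxt] = g + 1
                            heapq.heappush(frontier,
                                           (g + 1 + h(nxt), g + 1, nxt, cell))
            return None

        grid = np.zeros((20, 20), dtype=bool)
        grid[5:15, 10] = True                     # a wall seen by the network
        print(plan_path(grid, start=(0, 0), goal=(19, 19)))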

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data, and the time spent collecting it, are thus reduced because there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking, and a by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation is to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance.
    Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and achieved 2 cm positional accuracies in large indoor environments, where ViSP produced no usable result. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%, while the number of incorrect matches was reduced by 80%.
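    As a hedged illustration of the pose-recovery step shared by these map-based approaches: once image features have been matched to the known 3D model, the camera pose follows from the correspondences. The sketch below feeds a handful of point correspondences to OpenCV's solvePnP. The dissertation matches linear features rather than points, so this only shows the general principle; the correspondences and camera intrinsics are made-up values.

        import numpy as np
        import cv2

        # Hypothetical corners of a building facade in the model frame (metres)
        # and their measured projections in the UAV image (pixels).
        object_points = np.array([[0, 0, 0], [12, 0, 0], [12, 0, 8], [0, 0, 8]],
                                 dtype=np.float64)
        image_points = np.array([[210, 400], [560, 395], [565, 140], [215, 150]],
                                dtype=np.float64)

        K = np.array([[800.0, 0.0, 384.0],     # assumed intrinsics: focal length
                      [0.0, 800.0, 288.0],     # 800 px, principal point (384, 288)
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)                     # assume an undistorted image

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
        if ok:
            R, _ = cv2.Rodrigues(rvec)          # model-to-camera rotation
            cam_pos = (-R.T @ tvec).ravel()     # camera position in model frame
            print("camera position in the model frame:", cam_pos)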

    3D position tracking for all-terrain robots

    Rough-terrain robotics is a fast-evolving field of research, and much effort is being devoted to enabling a greater level of autonomy for outdoor vehicles. Such robots find application in the scientific exploration of hostile environments like deserts, volcanoes, the Antarctic, or other planets, and they are also of high interest for search and rescue operations after natural or man-made disasters. The challenges in bringing autonomy to all-terrain rovers are broad: in particular, they require systems capable of navigating reliably with only partial information about the environment and with limited perception and locomotion capabilities. Among the required functionalities, locomotion and position tracking are the most critical: the robot cannot fulfill its task if an inappropriate locomotion concept and control is used, and global path planning fails if the rover loses track of its position. This thesis addresses both aspects, a) efficient locomotion and b) position tracking in rough terrain. The Autonomous Systems Lab developed an off-road rover (Shrimp) showing excellent climbing capabilities and surpassing most existing similar designs. This exceptional climbing performance extends the range of areas a robot can explore. To further improve the climbing capabilities and the locomotion efficiency, a control method minimizing wheel slip has been developed in this thesis. Unlike other control strategies, the proposed method does not require soil models. Independence from such models is significant because the ability to operate on different types of soil is the main requirement for exploration missions. Moreover, the approach can be adapted to any kind of wheeled rover, and the required processing power remains relatively low, making online computation feasible. In rough terrain, tracking the robot's position is difficult because of the large variations of the ground; furthermore, the field of view can change significantly between two data acquisition cycles. This thesis presents a method for probabilistically combining different types of sensors to produce a robust motion estimate for an all-terrain rover. The proposed sensor fusion scheme is flexible in that it can easily accommodate any number of sensors of any kind. To test the algorithm, the following sensory inputs were used in the experiments: 3D-Odometry, an inertial measurement unit (accelerometers, gyros), and visual odometry. The 3D-Odometry technique was developed specifically in the framework of this research; because it accounts for ground slope discontinuities and the rover kinematics, it yields a reasonably precise 3D motion estimate in rough terrain. The experiments provided excellent results and proved that the use of complementary sensors increases the robustness and accuracy of the pose estimate. In particular, this work distinguishes itself from similar research projects in the following ways: the sensor fusion is performed with more than two sensor types, and it is applied a) in rough terrain and b) to track the full 3D pose of the rover. Another result of this work is the design of a high-performance platform for conducting further research.
    In particular, the rover is equipped with two computers, a stereovision module, an omnidirectional vision system, an inertial measurement unit, and numerous sensors, actuators, and power-management electronics. Further, a set of powerful tools has been developed to speed up the process of debugging algorithms and analyzing data recorded during the experiments. Finally, the modularity and portability of the system enable easy integration of new actuators and sensors. All these characteristics accelerate research in this field.
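    A toy stand-in for the probabilistic fusion described above: a minimal Kalman-style filter that dead-reckons with 3D-Odometry increments and corrects with absolute 3D position measurements such as visual odometry. The thesis fuses more sensor types over the full 3D pose; the position-only state and all noise values below are illustrative assumptions.

        import numpy as np

        class PositionFuser:
            # Fuses 3D position estimates from several sensors, each with its
            # own covariance; a simplified sketch, not the thesis's filter.
            def __init__(self, x0, P0):
                self.x = np.asarray(x0, dtype=float)   # fused 3D position
                self.P = np.asarray(P0, dtype=float)   # 3x3 covariance

            def predict(self, delta, Q):
                # Dead-reckoning step, e.g. one 3D-Odometry increment
                # with process noise Q.
                self.x = self.x + np.asarray(delta, dtype=float)
                self.P = self.P + Q

            def update(self, z, R):
                # Fuse a position measurement z with covariance R; the
                # Kalman gain weights each sensor by its uncertainty.
                K = self.P @ np.linalg.inv(self.P + R)
                self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
                self.P = (np.eye(3) - K) @ self.P

        fuser = PositionFuser([0, 0, 0], np.eye(3) * 0.01)
        fuser.predict([0.10, 0.02, 0.01], Q=np.eye(3) * 1e-4)   # wheel odometry
        fuser.update([0.09, 0.03, 0.00], R=np.eye(3) * 1e-3)    # visual odometry
        print(fuser.x)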

    The "Surface Model" – An Uncertain Continuous Representation of the Generic Camera Model and Its Calibration

    Using digital cameras for measurement purposes requires knowledge of the mapping between 3D world points and 2D positions on the image plane. Many different mathematical models provide this mapping for specific imaging systems, but their validity depends on constraints such as the highly accurate alignment of image sensor, lenses, and mirrors; otherwise, erroneous measurements may result. To avoid this problem, Grossberg and Nayar proposed a discrete generic camera model, which makes no assumptions about the structure of the imaging system: it describes a digital camera by assigning an arbitrary viewing ray to each pixel of the camera image. This makes the model applicable to any kind of camera, in particular also to non-central ones such as omnidirectional catadioptric systems, which often have no single optical centre. However, the model is difficult to use in practice, as there is no direct method for mapping a 3D point to the image or for determining rays at subpixel image positions. In this work, the Surface Model, an uncertain continuous representation of the generic camera model, is introduced. It uses a spline surface in 6D Plücker space to describe the camera. The interpolation abilities of the spline surface allow the viewing ray and its uncertainty to be determined easily for any (subpixel) position, and the representation also facilitates the mapping from 3D world points to the image. Calibration of the discrete generic model has to be performed pixel-wise and is technically involved and time-consuming; in this work, hand-held sparse planar chessboard patterns are used instead. The uncertainties of the corresponding image point measurements are taken into account and propagated through the complete calibration procedure, where they are used explicitly to stabilize the estimation and improve its accuracy. The result is an uncertain camera model that provides result uncertainties, in the form of covariance matrices, for both viewing-ray determination and point projection. Simulations validate each step of the procedure, and its practical applicability is demonstrated by calibrating several real cameras of different types.
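    A rough sketch of the interpolation idea, under strong simplifying assumptions: each of the six Plücker coordinates of the calibrated per-pixel viewing rays is interpolated by an independent bivariate spline over the image plane, so a ray can be queried at any subpixel position. The real Surface Model fits a spline surface in 6D Plücker space and propagates calibration uncertainties into covariance matrices, both of which this sketch omits; the ray grid below is a random stand-in for calibration output.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        # Stand-in calibration result: a 6D Pluecker vector (direction d,
        # moment m) for every sampled pixel of a 640 x 480 camera.
        us = np.arange(0, 640, 40)            # sampled pixel columns
        vs = np.arange(0, 480, 40)            # sampled pixel rows
        rays = np.random.rand(len(vs), len(us), 6)

        # One spline per Pluecker coordinate over the image plane.
        splines = [RectBivariateSpline(vs, us, rays[:, :, k]) for k in range(6)]

        def ray_at(u, v):
            # Viewing ray for an arbitrary subpixel position (u, v).
            L = np.array([s(v, u)[0, 0] for s in splines])
            d, m = L[:3], L[3:]
            d /= np.linalg.norm(d)            # renormalize the direction part
            return d, m

        print(ray_at(123.4, 56.7))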