
    Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System

    Autonomously operating UAVs demand fast localization for navigation, for actively exploring unknown areas, and for creating maps. For pose estimation, many UAV systems use a combination of GPS receivers and inertial measurement units (IMUs). However, GPS signal coverage may drop out occasionally, especially in the close vicinity of objects, and precise IMUs are too heavy to be carried by lightweight UAVs. This, together with the high cost of high-quality IMUs, motivates the use of inexpensive vision-based sensors for localization using visual odometry or visual SLAM (simultaneous localization and mapping) techniques. The first contribution of this thesis is a more general approach to bundle adjustment with an extended version of the projective collinearity equation, which enables us to use omnidirectional multi-camera systems that may consist of fisheye cameras capturing a large field of view in one shot. We use ray directions as observations instead of image points, so our approach does not depend on a specific projection model as long as the projection is central. In addition, our approach allows the integration and estimation of points at infinity, which classical bundle adjustments cannot handle. We show that the integration of far or infinitely far points stabilizes the estimation of the rotation angles of the camera poses. In the second contribution, we employ this approach to bundle adjustment in a highly integrated system for incremental pose estimation and mapping on lightweight UAVs. Based on the image sequences of a multi-camera system, our system uses tracked feature points to incrementally build a sparse map and refines this map incrementally using the iSAM2 algorithm. For georeferenced pose estimation, our system can optionally integrate GPS information at the level of carrier-phase observations, even in underconstrained situations, e.g. when only two satellites are visible. This way, we are able to use all available information in underconstrained GPS situations to keep the mapped 3D model accurate and georeferenced. In the third contribution, we present an approach for re-using existing methods for dense stereo matching with fisheye cameras, which has the advantage that highly optimized existing methods can be applied as a black box, without modifications, even to cameras with a field of view of more than 180°. We provide a detailed accuracy analysis of the obtained dense stereo results. The analysis shows the growing uncertainty of observed image points of fisheye cameras due to increasing blur towards the image border. The core of this contribution is a rigorous variance component estimation, which allows us to estimate the variance of the observed disparities at an image point as a function of that point's distance to the principal point. We show that this improved stochastic model provides a more realistic prediction of the uncertainty of the triangulated 3D points.
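    The ray-based formulation can be pictured with a short sketch. The following minimal Python example is illustrative only (not the thesis's implementation; all names are ours): it shows a residual between an observed ray direction and a homogeneous scene point, and why a point at infinity (homogeneous weight zero) constrains only the rotation.

        import numpy as np

        def ray_residual(R, t, X_h, ray_obs):
            # R (3x3): camera-to-world rotation; t (3,): camera center in world coords
            # X_h (4,): homogeneous scene point; X_h[3] == 0 encodes a point at infinity
            # ray_obs (3,): observed unit ray direction in the camera frame
            #
            # Direction from the camera to the scene point in the camera frame.
            # For a finite point this is proportional to R^T (X/w - t); for
            # w == 0 the translation drops out entirely, which is why far
            # points stabilize the rotation estimate.
            d = R.T @ (X_h[:3] - X_h[3] * t)
            d = d / np.linalg.norm(d)
            # The cross product vanishes when predicted and observed rays agree.
            return np.cross(d, ray_obs)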

    Inertial and 3D-odometry fusion in rough terrain: Towards real 3D navigation

    Many algorithms related to localization need a good pose prediction in order to produce accurate results. This is especially the case for data association algorithms, where false feature matches can make the localization system fail. In rough terrain, the field of view can vary significantly between two feature extraction steps, so a good position prediction is necessary to track features robustly. This paper presents a method for combining dead-reckoning sensor information in order to provide an initial estimate of the six degrees of freedom of a rough-terrain rover. An inertial navigation system (INS) and the wheel encoders are used as sensory inputs. The sensor fusion scheme is based on an extended information filter (EIF) and is extensible to any kind and number of sensors. In order to test the system, the rover has been driven over different kinds of obstacles while computing both pure 3D-odometric and fused INS/3D-odometry trajectories. The results show that the use of the INS significantly improves the pose prediction.
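    As a rough illustration of why an information-form filter extends easily to additional sensors, here is a minimal Python sketch of the additive measurement update of an extended information filter (EIF). It is not the paper's implementation; all names are illustrative.

        import numpy as np

        def eif_measurement_update(Y, y, H, R, z, h_x, x_lin):
            # Y (n x n), y (n,): information matrix and information vector
            # H: Jacobian of the measurement model h at linearization point x_lin
            # R: measurement noise covariance; z: measurement; h_x = h(x_lin)
            Rinv = np.linalg.inv(R)
            z_eff = z - h_x + H @ x_lin      # linearized effective measurement
            # Each sensor simply adds its information term; more sensors mean
            # more additive terms, nothing else in the filter changes.
            return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z_eff

        # The state estimate is recovered on demand: x = np.linalg.solve(Y, y)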

    Improving the Angular Velocity Measured with a Low-Cost Magnetic Rotary Encoder Attached to a Brushed DC Motor by Compensating Magnet and Hall-Effect Sensor Misalignments

    This paper proposes a method to improve the angular velocity measured by a low-cost magnetic rotary encoder attached to a brushed direct current (DC) motor. Low-cost magnetic rotary encoders used in brushed DC motors typically have a small magnetic ring attached to the rotational axis and one or more fixed Hall-effect sensors next to the magnet. The Hall-effect sensors then provide digital pulses whose duration and frequency track the angular rotational velocity of the encoder shaft. The drawback of this mass-produced rotary encoder is that any structural misalignment between the rotating magnetic field and the Hall-effect sensors produces asymmetric pulses that reduce the precision of the angular velocity estimate. The hypothesis of this paper is that the information provided by this low-cost magnetic rotary encoder can be processed in order to obtain an accurate and precise estimate of the angular rotational velocity. The proposed methodology has been validated on four compact motorizations, reducing the ripple of the angular velocity estimate by 4.93%, 59.43%, 76.49%, and 86.75%, respectively. The improvement has the advantage that it adds no time delay and does not increase the overall cost of the rotary encoder. These results show the true extent of the structural misalignment problem and the large improvement in precision that can be achieved. This research was funded by the Spanish Ministry of Science and Innovation, grant number PID2020-118874RB-I00.
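    One plausible way to realize such a compensation (a sketch under our own assumptions, not necessarily the paper's method) is to calibrate a per-sector gain from pulse durations logged at roughly constant speed, then divide each measured duration by its sector's gain.

        import numpy as np

        def calibrate_sector_gains(durations, n_sectors):
            # durations: consecutive pulse durations recorded at constant speed;
            # sample i belongs to sector i % n_sectors of the magnet ring.
            d = np.asarray(durations, dtype=float)
            gains = np.array([d[s::n_sectors].mean() for s in range(n_sectors)])
            return gains / gains.mean()   # normalized so the mean gain is 1

        def angular_velocity(duration, sector, gains, sector_angle_rad):
            # Remove the systematic per-sector bias before differentiating.
            return sector_angle_rad / (duration / gains[sector])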

    Application of computer vision for roller operation management

    Compaction is the last and possibly the most important phase in the construction of asphalt concrete (AC) pavements. Compaction densifies the loose AC mat, producing a stable surface with low permeability. The process strongly affects the AC performance properties. Too much compaction may cause aggregate degradation and a low air void content, facilitating bleeding and rutting; too little compaction may result in a higher air void content, facilitating oxidation and water permeability issues, rutting due to further densification by traffic, and reduced fatigue life. Compaction is therefore a critical issue in AC pavement construction. The common practice for compacting a mat is to establish a roller pattern that determines the number of passes and coverages needed to achieve the desired density. Once the pattern is established, the roller's operator must maintain it uniformly over the entire mat. Despite the importance of uniform compaction for achieving the expected durability and performance of AC pavements, having the roller operator as the only means to manage the operation invites human error. With the advancement of technology in recent years, the concept of intelligent compaction (IC) was developed to assist roller operators and improve construction quality. Commercial IC packages for construction rollers are available from different manufacturers. They can provide precise mapping of a roller's location and give the roller operator feedback during the compaction process. Although the IC packages are able to track roller passes with impressive results, there are also major hindrances: the high cost of acquisition and the potential negative impact on productivity have inhibited the implementation of IC. This study applied computer vision technology to build a versatile and affordable system to count and map roller passes. An infrared camera is mounted on top of the roller to capture the operator's view. Then, in a near real-time process, image features are extracted and tracked to estimate the incremental rotation and translation of the roller. Image features are categorized into near and distant features based on a user-defined horizon. The optical flow is estimated for near features located in the region below the horizon. The change in the roller's heading is constantly estimated from the distant features located in the sky region. Using the roller's rotation angle, the incremental translation between two frames is calculated from the optical flow. The roller's incremental rotation and translation are put together to develop a tracking map. During system development, it was noted that in environments with thermal uniformity, the background of the IR images exhibits fewer features than images captured with optical cameras, which are insensitive to temperature. This issue is more significant overnight, since natural elements no longer reflect heat energy from the sun. Therefore, to improve the roller's heading estimation when few features are available in the sky region, a unique methodology was developed for this research that allows heading detection based on the edges of the asphalt mat. The heading measurements based on the slope of the hot asphalt edges are added to the pool of headings measured from the sky region. The median of all heading measurements is used as the incremental roller rotation for the tracking analysis. The record of tracking data is used for QC/QA purposes and for verifying the proper implementation of the roller pattern throughout a job constructed under roller pass specifications. The system developed during this research was successful in mapping roller location for the few projects tested; however, the system should be independently validated.
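    A minimal sketch of the heading step, assuming OpenCV feature tracking (our illustration, not the authors' code):

        import cv2
        import numpy as np

        def heading_change(prev_gray, gray, pts, horizon_row, focal_px):
            # pts: (N, 1, 2) float32 feature locations in the previous frame.
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            ok = status.ravel() == 1
            p0, p1 = pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)
            distant = p0[:, 1] < horizon_row          # sky region above the horizon
            dx = p1[distant, 0] - p0[distant, 0]      # horizontal flow ~ pure rotation
            headings = np.arctan2(dx, focal_px)
            # The median suppresses outliers, e.g. from moving objects.
            return float(np.median(headings)) if headings.size else 0.0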

    Localization and Mapping from Shore Contours and Depth

    This work examines the problem of solving SLAM in aquatic environments using an unmanned surface vessel under conditions that restrict global knowledge of the robot's pose. These conditions refer specifically to the absence of a global positioning system for estimating position, a poor vehicle motion model, and the absence of a reliable magnetic field for estimating absolute heading. Such conditions occur in terrestrial environments where GPS satellite reception is occluded by surrounding structures and magnetic interference affects compass measurements. Similar conditions are anticipated in extraterrestrial environments such as Titan, which lacks the infrastructure necessary for traditional positioning sensors and whose unstable magnetic core renders compasses useless. This work develops a solution to the SLAM problem that utilizes shore features coupled with information about the depth of the water column. The approach is validated experimentally using an autonomous surface vehicle equipped with omnidirectional video and sonar; results are compared to GPS ground truth.
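    To make the role of the depth information concrete, here is a hedged sketch of how a water-column depth measurement could weight pose hypotheses in a particle filter against the current map estimate; depth_map and all other names are our illustrative assumptions, not the paper's estimator.

        import numpy as np

        def depth_weight_update(particles, weights, z_depth, depth_map, sigma):
            # particles: (N, 3) pose hypotheses (x, y, heading)
            # depth_map: callable (x, y) -> expected water depth at that position
            expected = np.array([depth_map(x, y) for x, y, _ in particles])
            likelihood = np.exp(-0.5 * ((z_depth - expected) / sigma) ** 2)
            w = weights * likelihood
            return w / w.sum()   # hypotheses consistent with sonar depth gain weight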

    Agent and object aware tracking and mapping methods for mobile manipulators

    The age of the intelligent machine is upon us. They exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of these things we call robots. When placed in a controlled or known environment such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a wholehearted deployment into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception. Perception, as we mean it here, refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable; our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments change over time, with objects frequently moving within and between rooms. This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) which move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems. For the first challenge, we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused both by the robot and by the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation. For the second challenge, we build a model of the world on the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their position has moved between disparate observations.
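    The first idea can be caricatured in a few lines: gate the visual tracker with a proprioceptive jerk estimate and fall back to kinematic odometry when vision is likely to fail. This is our simplified sketch, not the thesis's method.

        import numpy as np

        def select_pose(visual_pose, kinematic_pose, accels, dt, jerk_limit):
            # accels: recent acceleration samples from proprioception, shape (k, 3),
            # with k >= 2 so a finite difference is defined.
            jerk = np.linalg.norm(np.diff(accels, axis=0), axis=1).max() / dt
            # Trust proprioceptive (kinematic) odometry during violent motion.
            return kinematic_pose if jerk > jerk_limit else visual_pose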

    3D Position Tracking in Challenging Terrain

    The intent of this paper is to show how the accuracy of 3D position tracking can be improved by considering rover locomotion in rough terrain as a holistic problem. An appropriate locomotion concept endowed with a controller minimizing slip improves the climbing performance, the accuracy of odometry, and the signal/noise ratio of the onboard sensors. Sensor fusion involving an inertial measurement unit, 3D-Odometry, and visual motion estimation is presented. The experimental results show clearly how each sensor contributes to increasing the accuracy of the 3D pose estimation in rough terrain.

    3D position tracking for all-terrain robots

    Rough-terrain robotics is a fast-evolving field of research, and much effort is being devoted to enabling a greater level of autonomy for outdoor vehicles. Such robots find application in the scientific exploration of hostile environments such as deserts, volcanoes, the Antarctic, or other planets. They are also of high interest for search and rescue operations after natural or man-made disasters. The challenges in bringing autonomy to all-terrain rovers are broad. In particular, they require the development of systems capable of navigating reliably with only partial information about the environment and with limited perception and locomotion capabilities. Among all the required functionalities, locomotion and position tracking are among the most critical. Indeed, the robot cannot fulfill its task if an inappropriate locomotion concept and control is used, and global path planning fails if the rover loses track of its position. This thesis addresses both aspects: a) efficient locomotion and b) position tracking in rough terrain. The Autonomous System Lab developed an off-road rover (Shrimp) showing excellent climbing capabilities and surpassing most existing similar designs. Such exceptional climbing performance extends the range of areas a robot could explore. In order to further improve the climbing capabilities and the locomotion efficiency, a control method minimizing wheel slip has been developed in this thesis. Unlike other control strategies, the proposed method does not require soil models. Independence from these models is very significant, because the ability to operate on different types of soil is the main requirement for exploration missions. Moreover, our approach can be adapted to any kind of wheeled rover, and the required processing power remains relatively low, which makes online computation feasible. In rough terrain, the problem of tracking the robot's position is difficult because of the large variation of the ground. Further, the field of view can vary significantly between two data acquisition cycles. In this thesis, a method for probabilistically combining different types of sensors to produce a robust motion estimate for an all-terrain rover is presented. The proposed sensor fusion scheme is flexible in that it can easily accommodate any number of sensors of any kind. In order to test the algorithm, we have chosen the following sensory inputs for the experiments: 3D-Odometry, an inertial measurement unit (accelerometers, gyros), and visual odometry. The 3D-Odometry was developed specially in the framework of this research. Because it accounts for ground slope discontinuities and the rover kinematics, this technique yields a reasonably precise 3D motion estimate in rough terrain. The experiments provided excellent results and proved that the use of complementary sensors increases the robustness and accuracy of the pose estimate. In particular, this work distinguishes itself from other similar research projects in the following ways: the sensor fusion is performed with more than two sensor types, and sensor fusion is applied a) in rough terrain and b) to track the real 3D pose of the rover. Another result of this work is the design of a high-performance platform for conducting further research. In particular, the rover is equipped with two computers, a stereovision module, an omnidirectional vision system, an inertial measurement unit, numerous sensors and actuators, and electronics for power management. Further, a set of powerful tools has been developed to speed up the process of debugging algorithms and analyzing data stored during the experiments. Finally, the modularity and portability of the system enable easy adaptation of new actuators and sensors. All these characteristics speed up research in this field.
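    A soil-model-free slip-minimizing controller can be sketched in the following simplified way (our illustration; the thesis's control law is more elaborate): shift torque away from wheels that spin faster than the estimated body speed, then rescale to preserve the total traction demand.

        import numpy as np

        def redistribute_torque(torques, wheel_speeds, body_speed, wheel_radius, gain):
            # Slip proxy: how much faster each wheel rim moves than the body.
            slip = np.maximum(wheel_speeds * wheel_radius - body_speed, 0.0)
            t = np.asarray(torques, float) * (1.0 - gain * slip / (slip.max() + 1e-9))
            return t * (np.sum(torques) / np.sum(t))   # keep total torque constant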