52 research outputs found

    1-Point-RANSAC Structure from Motion for Vehicle-Mounted Cameras by Exploiting Non-holonomic Constraints

    This paper presents a new method to estimate the relative motion of a vehicle from images of a single camera. The computational cost of the algorithm is limited only by the feature extraction and matching process, as the outlier removal and motion estimation steps take only a fraction of a millisecond on a normal laptop computer. The biggest problem in visual motion estimation is data association; matched points contain many outliers that must be detected and removed for the motion to be estimated accurately. In the last few years, the established method for removing outliers has been the "5-point RANSAC" algorithm, which needs a minimum of 5 point correspondences to estimate the model hypotheses. Because of this, however, it can require up to several hundred iterations to find a set of points free of outliers. In this paper, we show that by exploiting the non-holonomic constraints of wheeled vehicles it is possible to use a restrictive motion model which allows us to parameterize the motion with only 1 point correspondence. Using a single feature correspondence for motion estimation is the lowest model parameterization possible and results in the two most efficient algorithms for removing outliers: 1-point RANSAC and histogram voting. To support our method, we run many experiments on both synthetic and real data and compare the performance with a state-of-the-art approach. Finally, we show an application of our method to visual odometry by recovering a 3 km trajectory in a cluttered urban environment, in real-time.
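
    The abstract does not reproduce the one-point parameterization itself. As a hedged illustration of the idea, the sketch below assumes a planar circular-motion (Ackermann-like) model in which each normalized correspondence yields a single yaw hypothesis and the histogram mode over all matches gives the estimate; the closed form, function names and thresholds here are our assumptions, not code from the paper.

        import numpy as np

        def theta_from_match(p1, p2):
            # One yaw hypothesis from ONE normalized correspondence,
            # assuming planar circular motion (form assumed, not quoted).
            return -2.0 * np.arctan2(p2[1] - p1[1], p2[0] + p1[0])

        def histogram_voting(pts1, pts2, bins=720, thresh=np.radians(0.5)):
            # Every match votes for its theta; the histogram mode is the
            # motion estimate, and matches near the mode are the inliers.
            thetas = np.array([theta_from_match(a, b)
                               for a, b in zip(pts1, pts2)])
            hist, edges = np.histogram(thetas, bins, range=(-np.pi, np.pi))
            k = np.argmax(hist)
            mode = 0.5 * (edges[k] + edges[k + 1])
            return mode, np.abs(thetas - mode) < thresh

    The 1-point RANSAC variant is analogous: sample one match, form its hypothesis, count inliers; because the model has a single degree of freedom, only a handful of iterations is needed.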

    GPGM-SLAM: a Robust SLAM System for Unstructured Planetary Environments with Gaussian Process Gradient Maps

    Simultaneous Localization and Mapping (SLAM) techniques play a key role towards long-term autonomy of mobile robots due to their ability to correct localization errors and produce consistent maps of an environment over time. Contrary to urban or man-made environments, where unique objects and structures offer distinctive cues for localization, the appearance of unstructured natural environments is often ambiguous and self-similar, hindering the performance of loop closure detection. In this paper, we present an approach to improve the robustness of place recognition in the context of submap-based stereo SLAM, based on Gaussian Process Gradient Maps (GPGMaps). GPGMaps embed a continuous representation of the gradients of the local terrain elevation by means of Gaussian Process regression and Structured Kernel Interpolation, given solely noisy elevation measurements. We leverage the image-like structure of GPGMaps to detect loop closures using traditional visual features and Bag of Words. GPGMap matching is performed as an SE(2) alignment to establish loop closure constraints within a pose graph. We evaluate the proposed pipeline on a variety of datasets recorded on Mt. Etna, Sicily, and in the Moroccan desert, Moon-like and Mars-like environments respectively, and we compare the localization performance with state-of-the-art approaches for visual SLAM and visual loop closure detection.
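
    As a rough sketch of how an image-like gradient map can be obtained from sparse, noisy elevation samples, the code below uses exact Gaussian Process regression with a squared-exponential kernel and finite differences of the posterior mean. The paper's pipeline uses Structured Kernel Interpolation for scalability; all parameter values here are illustrative assumptions.

        import numpy as np

        def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
            # Squared-exponential kernel between point sets (N,2), (M,2).
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

        def gradient_map(xy, z, grid_x, grid_y, noise_std=0.05):
            # GP posterior mean elevation on a regular grid, given noisy
            # elevation samples z measured at 2D locations xy.
            gx, gy = np.meshgrid(grid_x, grid_y)
            grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
            K = rbf_kernel(xy, xy) + noise_std ** 2 * np.eye(len(xy))
            mean = rbf_kernel(grid, xy) @ np.linalg.solve(K, z)
            elev = mean.reshape(gx.shape)
            # Image-like gradient magnitude map, normalized to uint8 so
            # standard visual features (e.g. ORB + Bag of Words) apply.
            dzy, dzx = np.gradient(elev, grid_y, grid_x)
            mag = np.hypot(dzx, dzy)
            mag = 255 * (mag - mag.min()) / (np.ptp(mag) + 1e-9)
            return mag.astype(np.uint8)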

    AI Applications on Planetary Rovers

    The rise in the number of robotic missions to space is paving the way for the use of artificial intelligence and machine learning in the autonomy and augmentation of rover operations. For one, more rovers mean more images, and more images mean more data bandwidth required for downlinking as well as more mental bandwidth for analyzing the images. On the other hand, lightweight, low-powered microrover platforms are being developed to accommodate the drive for planetary exploration. As a result of the mass and power constraints, these microrover platforms will not carry typical navigational instruments like a stereo camera or a laser rangefinder, relying instead on a single, monocular camera. The first project in this thesis explores the realm of novelty detection, where the goal is to find 'new' and 'interesting' features such that instead of sending a whole set of images, the algorithm could simply flag any image that contains novel features to prioritize its downlink. This form of data triage allows the science team to redirect its attention to objects that could be of high science value. For this project, a combination of a Convolutional Neural Network (CNN) with a K-means algorithm is introduced as a tool for novelty detection. By leveraging the powerful feature extraction capabilities of a CNN, typical images can be tightly clustered into the number of expected entities within the rover's environment. The distance between the extracted feature vector and the closest cluster centroid is then defined to be its novelty score. As such, a novel image will have a significantly higher distance to the cluster centroids compared to the typical images. This algorithm was trained on images obtained from the Canadian Space Agency's Analogue Terrain Facility and was shown to be effective in capturing the majority of the novel images within the dataset. The second project in this thesis aims to augment microrover platforms that lack instruments for distance measurements. In particular, this project explores the application of monocular depth estimation, where the goal is to estimate a depth map from a monocular image. This problem is inherently difficult to solve given that recovering depth from a 2D image is a mathematically ill-posed problem, compounded by the fact that the lunar environment is a dull, colourless landscape. To solve this problem, a dataset of images and their corresponding ground truth depth maps was collected at Mission Control Space Service's Indoor Analogue Terrain. An autoencoder was then trained to take in the image and output an estimated depth map. The results of this project show that the model is not reliable at gauging the distances of slopes and objects near the horizon. However, the generated depth maps are reliable in the short to mid range, where the distances are most relevant for remote rover operations.
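
    A minimal sketch of the novelty-scoring step described above, assuming feature vectors already extracted by a pretrained CNN backbone (the thesis's exact architecture, cluster count and threshold are not given in the abstract; the names here are illustrative):

        import numpy as np
        from sklearn.cluster import KMeans

        def fit_typical_clusters(train_features, n_clusters=8):
            # Cluster CNN features of typical images; n_clusters stands in
            # for the number of expected entities in the rover's environment.
            return KMeans(n_clusters=n_clusters, n_init=10).fit(train_features)

        def novelty_score(model, feature):
            # Distance to the nearest cluster centroid is the novelty score;
            # novel images sit far from every centroid of the typical set.
            return np.linalg.norm(model.cluster_centers_ - feature, axis=1).min()

        # Flag an image for priority downlink when its score clears a
        # threshold, e.g. a high percentile of the training-set scores.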

    Planetary rovers and data fusion

    This research investigates the problem of position estimation for planetary rovers. Diverse algorithmic filters are available for collecting input data and transforming that data into information useful for position estimation. The terrain has sandy soil, which might cause slipping of the robot, and small stones and pebbles, which can affect the trajectory. The Kalman filter, a state estimation algorithm, was used to fuse the sensor data and improve the position measurement of the rover; the errors accumulated by the rover's locomotion are compensated by the filter. The movement of a rover in rough terrain is challenging, especially with limited sensors. Thus, an initiative was taken to test drive the rover during a field trial and expose the mobile platform to hard ground and soft ground (sand). It was found that the LSV system produced speckle images and values that proved invaluable for further research and for the implementation of data fusion. During the field trial, it was also discovered that on flat, hard surfaces the steering problems of the rover are minimal; however, on soft sand the rover tended to drift away and struggled to navigate. This research introduces laser speckle velocimetry (LSV) as an alternative for odometric measurement. LSV data was gathered during the field trial for further simulation in MATLAB, a computational/mathematical programming environment used to simulate the rover trajectory. The wheel encoders came with associated errors during the position measurement process, as was also observed during the earlier field trials. It was likewise discovered that the laser speckle velocimetry was able to measure position accurately, but at the same time the sensitivity of the optics produced noise that needed to be addressed as an error source. Though such rough terrain is found on Mars, this work is also applicable to terrestrial robots on Earth: there are regions with rough terrain where encoder-based measurement is difficult, especially icy places like Antarctica and Greenland. The proposed implementation for the development of the locomotion system is to model a position estimation system through simulation and data collected using the LSV. Two simulations are performed: one is the differential drive of a two-wheeled robot, and the second involves the fusion of the differential-drive robot data and the LSV data collected from the rover testbed. The results have been positive. The expected contributions of the research work include the design of an LSV system to aid the locomotion measurement system. Simulation results show the effect of different sensors and the velocity of the robot; the Kalman filter improves the position estimation process.
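
    The abstract does not give the filter design; below is a minimal sketch of one plausible formulation, a linear Kalman filter per axis with state [position, velocity], where wheel-odometry dead reckoning drives the prediction and the LSV supplies velocity measurements (all noise values are illustrative assumptions, not the thesis's tuning):

        import numpy as np

        F = lambda dt: np.array([[1.0, dt], [0.0, 1.0]])  # constant velocity
        H = np.array([[0.0, 1.0]])          # LSV measures velocity
        Q = np.diag([1e-4, 1e-2])           # process noise (slip, terrain)
        R = np.array([[1e-3]])              # LSV measurement noise (optics)

        def kf_step(x, P, dt, v_lsv):
            x = F(dt) @ x                    # predict
            P = F(dt) @ P @ F(dt).T + Q
            y = np.array([v_lsv]) - H @ x    # innovation from LSV velocity
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ y                    # update
            P = (np.eye(2) - K @ H) @ P
            return x, P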

    Visual Odometry and Traversability Analysis for Wheeled Robots in Complex Environments

    The application of wheeled mobile robots (WMRs) is currently expanding from rather controlled industrial or domestic scenarios into more complex urban or outdoor environments, allowing a variety of new use cases. One of these new use cases is described in this thesis: an intelligent personal mobility assistant, based on an electrical rollator. Such a system comes with several requirements: it must be safe and robust, lightweight, inexpensive, and should be able to navigate in real-time in order to allow direct physical interaction with the user. As these properties are desirable for most WMRs, all methods proposed in this thesis can also be used with other WMR platforms. First, a visual odometry method is presented, which is tailored to work with a downward-facing RGB-D camera. It projects the environment onto a ground plane image and uses an efficient image alignment method to estimate the vehicle motion from consecutive images. As the method is designed for use on a WMR, further constraints can be employed to improve the accuracy of the visual odometry. For a non-holonomic WMR with a known vehicle model, either differential drive, skid steering or Ackermann, the motion parameters of the corresponding kinematic model, instead of the generic motion parameters, can be estimated directly from the image data. This significantly improves the accuracy and robustness of the method. Additionally, an outlier rejection scheme is presented that operates in model space, i.e. the motion parameters of the kinematic model, instead of data space, i.e. image pixels. Furthermore, the projection of the environment onto the ground plane can also be used to create an elevation map of the environment. It is investigated whether this map, in conjunction with a detailed vehicle model, can be used to estimate future vehicle poses. By using a common image-based representation of the environment and the vehicle, a very efficient and still highly accurate pose estimation method is proposed. Since the traversability of an area can be determined by the vehicle poses and potential collisions, the pose estimation method is employed to create a novel real-time path planning method. The detailed vehicle model is extended to also represent the vehicle's chassis for collision detection. Guided by an A*-like planner, a search graph is constructed by propagating the vehicle using its kinematic model to possible future poses and calculating a traversability score for each of these poses. The final system performs safe and robust real-time navigation even in challenging indoor and outdoor environments.
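
    The thesis estimates the kinematic-model parameters directly within the image alignment; as a simplified, hedged sketch of the same idea, the code below first solves the generic SE(2) alignment of matched ground-plane points in closed form and then projects it onto a differential-drive arc parameterized by (v, omega):

        import numpy as np

        def rigid2d(P, Q):
            # Closed-form 2D rigid alignment (Kabsch): Q ~ R(theta) P + t,
            # with P, Q the (N,2) matched ground-plane points.
            Pc, Qc = P - P.mean(0), Q - Q.mean(0)
            H = Pc.T @ Qc
            theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
            R = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
            return theta, Q.mean(0) - R @ P.mean(0)

        def diffdrive_params(theta, t, dt):
            # Project the generic SE(2) estimate onto the differential-
            # drive model: a circular arc parameterized by (v, omega).
            omega = theta / dt
            if abs(theta) < 1e-6:
                return np.linalg.norm(t) / dt, omega  # straight-line limit
            # chord length of the arc: |t| = 2 (v/omega) sin(theta/2)
            v = omega * np.linalg.norm(t) / (2.0 * np.sin(theta / 2.0))
            return v, omega

    Model-space outlier rejection then compares hypotheses directly in (v, omega) space rather than on image pixels.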

    Single and multiple stereo view navigation for planetary rovers

    © Cranfield University. This thesis deals with the challenge of autonomous navigation for the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques - as done in the literature - an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detecting features that are robust to illumination changes, and matching and associating them uniquely, is a sought-after capability. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with a moment image representation. Whereas the first provides a technique for efficient feature detection, the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. Then, the addition of local feature descriptors guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars robot is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed solutions for motion estimation. The developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.
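
    A small sketch of the bucketing strategy mentioned above: detect Harris-style corners per grid cell and keep only the strongest few in each, so features are homogeneously distributed. Grid sizes and detector parameters are our assumptions, and the moment-image brightness normalization is not shown.

        import numpy as np
        import cv2

        def bucketed_corners(gray, grid=(8, 8), per_cell=5):
            # gray: uint8 grayscale image. Keep at most per_cell corners in
            # each grid cell, shifting them back to image coordinates.
            h, w = gray.shape
            ch, cw = h // grid[0], w // grid[1]
            kps = []
            for r in range(grid[0]):
                for c in range(grid[1]):
                    cell = gray[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
                    pts = cv2.goodFeaturesToTrack(
                        cell, maxCorners=per_cell, qualityLevel=0.01,
                        minDistance=5, useHarrisDetector=True, k=0.04)
                    if pts is not None:
                        for x, y in pts.reshape(-1, 2):
                            kps.append((x + c * cw, y + r * ch))
            return np.array(kps)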

    A unified vision and inertial navigation system for planetary hoppers

    Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 139-146). In recent years, considerable attention has been paid to hopping as a novel mode of planetary exploration. Hopping vehicles provide advantages over traditional surface exploration vehicles, such as wheeled rovers, by enabling in-situ measurements in otherwise inaccessible terrain. However, significant development beyond previously demonstrated vehicle navigation technologies is required to overcome the inherent challenges involved in navigating a hopping vehicle, especially in adverse terrain. While hoppers are in many ways similar to traditional landers and surface explorers, they incorporate additional, unique motions that must be accounted for beyond those of conventional planetary landing and surface navigation systems. This thesis describes a unified vision and inertial navigation system for propulsive planetary hoppers and provides a demonstration of this technology. An architecture for a navigation system specific to the motions and mission profiles of hoppers is presented, incorporating unified inertial and terrain-relative navigation solutions. A modular sensor testbed, including a stereo vision package and inertial measurement unit, was developed to act as a proof of concept for this navigation system architecture. The system is shown to be capable of real-time output of an accurate navigation state estimate for motions and trajectories similar to those of planetary hoppers. By Theodore J. Steiner, III, S.M.
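
    As a hedged, loosely-coupled illustration of combining inertial and terrain-relative navigation (our sketch, not the thesis's actual filter), the fragment below dead-reckons with IMU accelerations between vision fixes and blends in a stereo position fix when one is available:

        import numpy as np

        GRAVITY = np.array([0.0, 0.0, -9.81])

        def imu_propagate(p, v, a_body, R_wb, dt):
            # Dead-reckon position/velocity between vision fixes; R_wb
            # rotates body to world (attitude assumed tracked separately,
            # e.g. by gyro integration).
            a_world = R_wb @ a_body + GRAVITY
            p = p + v * dt + 0.5 * a_world * dt ** 2
            v = v + a_world * dt
            return p, v

        def vision_update(p, p_vision, gain=0.5):
            # Blend in a stereo terrain-relative position fix; a full
            # system would weight this by covariances in an EKF rather
            # than using a fixed gain.
            return p + gain * (p_vision - p)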

    Monocular Visual Odometry for Fixed-Wing Small Unmanned Aircraft Systems

    The popularity of small unmanned aircraft systems (SUAS) has exploded in recent years, with increasing use in both commercial and military sectors. A key interest area for the military is to develop autonomous capabilities for these systems, of which navigation is a fundamental problem. Current navigation solutions suffer from a heavy reliance on the Global Positioning System (GPS). This dependency presents a significant limitation for military applications, since many operations are conducted in environments where GPS signals are degraded or actively denied. Therefore, alternative navigation solutions without GPS must be developed, and visual methods are one of the most promising approaches. A current limitation of visual navigation is that much of the research has focused on developing and applying these algorithms on ground-based vehicles, small hand-held devices or multi-rotor SUAS. However, the Air Force has a need for fixed-wing SUAS to conduct extended operations. This research evaluates current state-of-the-art, open-source monocular visual odometry (VO) algorithms applied on fixed-wing SUAS flying at high altitudes under fast translation and rotation speeds. The algorithms tested are Semi-Direct VO (SVO), Direct Sparse Odometry (DSO), and ORB-SLAM2 (with loop closures disabled). Each algorithm is evaluated on a fixed-wing SUAS in simulation and in real-world flight tests over Camp Atterbury, Indiana. Through these tests, ORB-SLAM2 is found to be the most robust and flexible algorithm under a variety of test conditions. However, all algorithms experience great difficulty maintaining localization in the collected real-world datasets, showing the limitations of using visual methods as the sole solution. Further study and development are required to fuse VO products with additional measurements to form a complete autonomous navigation solution.
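
    The abstract does not name its error metric; absolute trajectory error after a similarity (Umeyama) alignment is the common choice for evaluating monocular VO, since the alignment absorbs the global scale ambiguity. A self-contained sketch, assuming time-synchronized (N,3) position tracks:

        import numpy as np

        def absolute_trajectory_error(est, gt):
            # RMSE of positions after similarity (Umeyama) alignment of
            # the estimate to truth; est, gt are (N,3) arrays.
            mu_e, mu_g = est.mean(0), gt.mean(0)
            E, G = est - mu_e, gt - mu_g
            U, S, Vt = np.linalg.svd(G.T @ E / len(est))
            D = np.eye(3)
            if np.linalg.det(U) * np.linalg.det(Vt) < 0:
                D[2, 2] = -1.0                 # avoid a reflection
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / E.var(0).sum()  # scale
            t = mu_g - s * R @ mu_e
            aligned = (s * (R @ est.T)).T + t
            return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))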

    Use of Unmanned Aerial Systems in Civil Applications

    Interest in drones has grown exponentially over the last ten years, and these machines are often presented as the optimal solution for a huge number of civil applications (monitoring, agriculture, emergency management, etc.). However, the promises still do not match the data coming from the consumer market, suggesting that the only big field in which the use of small unmanned aerial vehicles is actually profitable is video making. This may be explained partly by the strong limits imposed by existing (and often "obsolete") national regulations, but also - and perhaps mainly - by the lack of real autonomy. The vast majority of vehicles on the market nowadays are in fact autonomous only in the sense that they are able to follow a pre-determined list of latitude-longitude-altitude coordinates. The aim of this thesis is to demonstrate that complete autonomy for UAVs can be achieved only with high-performance control, reliable and flexible planning platforms, and strong perception capabilities. These topics are introduced and discussed by presenting the results of the main research activities performed by the candidate in the last three years, which have resulted in: 1) the design, integration and control of a test bed for validating and benchmarking vision-based algorithms for space applications; 2) the implementation of a cloud-based platform for multi-agent mission planning; and 3) the on-board use of a multi-sensor fusion framework based on an Extended Kalman Filter architecture.