
    Two-Stage Road Terrain Identification Approach for Land Vehicles Using Feature-Based and Markov Random Field Algorithm

    Road terrain identification is one of the important tasks for driver assistance systems and autonomous land vehicles; it plays a key role in improving driving strategy and enhancing fuel efficiency. In this paper, a two-stage approach using multiple sensors is presented. In the first stage, feature-based identification is performed using an accelerometer, a camera, and downward-looking and forward-looking laser range finders (LRFs), producing four classification label sequences. In the second stage, a majority vote is applied to each label sequence to match the sequences onto synchronized road patches. A Markov Random Field (MRF) model is then designed to generate the final optimized identification results and to improve the forward-looking LRF classification. This approach enables the vehicle to observe the upcoming road terrain before driving onto it by fusing all the classification results with the MRF algorithm. The experiments show that this approach significantly improves terrain identification accuracy and robustness for several common road terrains.
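
    The vote-then-smooth fusion lends itself to a compact illustration. Below is a minimal Python sketch, assuming the label sequences are already aligned to road patches and that per-patch class costs are available: a per-patch majority vote followed by chain-MRF smoothing with a Potts pairwise term, solved exactly by dynamic programming. The paper's actual MRF structure and parameters are not specified here, so the cost matrix and smoothing weight are illustrative assumptions.

        # Minimal sketch: majority vote over sensor label sequences, then
        # chain-MRF (Potts) smoothing solved with Viterbi-style DP.
        import numpy as np

        def majority_vote(label_seqs):
            """Fuse per-sensor label sequences (equal length) patch by patch."""
            seqs = np.asarray(label_seqs)              # (n_sensors, n_patches)
            fused = []
            for patch_labels in seqs.T:
                vals, counts = np.unique(patch_labels, return_counts=True)
                fused.append(vals[np.argmax(counts)])
            return np.array(fused)

        def mrf_smooth(unary_cost, pairwise_weight=1.0):
            """unary_cost[t, k]: cost of class k at road patch t (illustrative)."""
            T, K = unary_cost.shape
            trans = pairwise_weight * (1.0 - np.eye(K))  # 0 if labels agree
            cost = unary_cost[0].copy()
            back = np.zeros((T, K), dtype=int)
            for t in range(1, T):
                total = cost[:, None] + trans            # prev class x curr class
                back[t] = np.argmin(total, axis=0)
                cost = total[back[t], np.arange(K)] + unary_cost[t]
            labels = np.zeros(T, dtype=int)
            labels[-1] = int(np.argmin(cost))
            for t in range(T - 1, 0, -1):                # backtrack the optimum
                labels[t - 1] = back[t, labels[t]]
            return labels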

    Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments

    Perception of the surrounding environment is an essential tool for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to be able to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains. We also demonstrate its usefulness in an integrated system onboard a Mars prototype rover in the JPL Mars Yard. Another desirable capability for an autonomous robot is to be able to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types. This algorithm can work with noisy and ambiguous signals provided by onboard sensors. To cope with rich, high-dimensional visual representations we propose a novel, nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous. Finally, we consider the problem of learning to recognize different terrains, which addresses the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task. The proposed approach achieves a good trade-off between decrease in computational time and recognition performance.
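
    As a rough illustration of the self-supervision idea, the Python sketch below derives noisy labels for visual terrain patches from a proprioceptive vibration statistic and trains a standard classifier on them. The feature inputs, the vibration statistic, and the threshold are all hypothetical; the thesis' actual machinery, including the probabilistic dimensionality reduction, is considerably more elaborate.

        # Sketch: proprioception supervises vision, no human labels needed.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def train_from_proprioception(patch_features, vibration_rms,
                                      rough_threshold=0.5):
            """patch_features: (N, D) visual features of patches the robot drove
            over; vibration_rms: (N,) signal measured while traversing each
            patch. Names and threshold are illustrative assumptions."""
            # Noisy automatic labels: 1 = rough/slippery, 0 = benign terrain.
            labels = (np.asarray(vibration_rms) > rough_threshold).astype(int)
            clf = LogisticRegression(max_iter=1000)
            clf.fit(patch_features, labels)
            return clf   # later applied to patches observed at a distance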

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments have been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
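
    The global fusion and mapping step can be illustrated with a minimal occupancy-grid sketch in Python. The grid size, resolution, and hit probability below are assumptions for illustration, not the thesis' settings.

        # Sketch: fuse obstacle detections from several modalities into a 2D
        # occupancy grid using standard log-odds updates.
        import numpy as np

        class OccupancyGrid:
            def __init__(self, size_m=100.0, res_m=0.2):
                n = int(size_m / res_m)
                self.log_odds = np.zeros((n, n))   # 0 = unknown (p = 0.5)
                self.res = res_m
                self.origin = size_m / 2.0         # grid centred on the field

            def update(self, xy_points, p_hit=0.7):
                """Fuse detections given as world (x, y) coordinates in metres."""
                lo_hit = np.log(p_hit / (1.0 - p_hit))
                idx = ((np.asarray(xy_points) + self.origin) / self.res).astype(int)
                n = self.log_odds.shape[0]
                for i, j in idx:
                    if 0 <= i < n and 0 <= j < n:
                        self.log_odds[i, j] += lo_hit

            def obstacle_mask(self, p_thresh=0.8):
                """Cells whose occupancy probability exceeds p_thresh."""
                return self.log_odds > np.log(p_thresh / (1.0 - p_thresh))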

    Path Planning for incline terrain using Embodied Artificial Intelligence

    Embodied Artificial Intelligence aims to address the need to represent a search problem, as well as what constitutes a "good" solution to that problem, in a smart machine. In this thesis, the smart machine is a robot. By combining Artificial Intelligence and Robotics we can define experiments whose search space is the physical world and where the outcome of each action constitutes the evaluation of a solution. In this thesis I had the opportunity to experiment with the development of artificial intelligence algorithms that guide an unmanned ground vehicle to discover a solution to a difficult outdoor navigation problem, such as traversing a terrain region of steep incline. I approached the problem in three different ways: a Hill Climbing algorithm, an N-best search, and an Evolutionary Algorithm, each with its own strengths and weaknesses. Finally, I created and evaluated demonstrations, both in simulated scenarios and in a real-world scenario. The results of these demonstrations show clear progress on the aforementioned problem by the robotic platform.
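
    Of the three approaches, hill climbing is the simplest to sketch. The Python below perturbs one heading of a candidate path at a time and keeps a perturbation only if it lowers an incline-penalizing cost. The cost function and the incline_at callback are hypothetical stand-ins for the physical-world evaluation performed by the robot in the thesis.

        # Sketch: greedy hill climbing over a sequence of headings.
        import random

        def path_cost(headings, incline_at):
            """incline_at(headings) -> inclines along the resulting path."""
            return sum(abs(i) for i in incline_at(headings))

        def hill_climb(initial_headings, incline_at, iters=500, step=0.1):
            best = list(initial_headings)
            best_cost = path_cost(best, incline_at)
            for _ in range(iters):
                cand = list(best)
                k = random.randrange(len(cand))
                cand[k] += random.uniform(-step, step)  # local perturbation
                c = path_cost(cand, incline_at)
                if c < best_cost:                       # keep improvements only
                    best, best_cost = cand, c
            return best, best_cost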

    Learning to Predict Slip for Ground Robots

    In this paper we predict the amount of slip an exploration rover would experience, using stereo imagery and learning from previous examples of traversing similar terrain. To do that, terrain appearance and geometry information about a location is correlated with the slip measured by the rover while traversing that location. This relationship is learned from previous experience, so slip can later be predicted at a distance from visual information alone. The advantages of the approach are: 1) learning from examples allows the system to adapt to unknown terrains rather than using fixed heuristics or predefined rules; 2) the feedback about the observed slip is received from the vehicle's own sensors, which can fully automate the process; 3) learning slip from previous experience can replace complex mechanical modeling of vehicle or terrain, which is time consuming and not necessarily feasible. Predicting slip is motivated by the need to assess the risk of getting trapped before entering a particular terrain. For example, a planning algorithm can utilize slip information by taking into consideration that a slippery terrain is costly or hazardous to traverse. A generic nonlinear regression framework is proposed in which the terrain type is determined from appearance and a nonlinear model of slip is then learned for that particular terrain type. In this paper we focus only on the latter problem and provide slip learning and prediction results for terrain types such as soil, sand, gravel, and asphalt. The slip prediction error achieved is about 15%, which is comparable to the measurement error for slip itself.
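
    A minimal sketch of the per-terrain-type regression step, assuming terrain slope from stereo geometry as the input feature. Kernel ridge regression stands in for the paper's nonlinear regression model, and the hyperparameters are illustrative.

        # Sketch: learn slip as a nonlinear function of terrain geometry.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def fit_slip_model(slopes_deg, measured_slip):
            """slopes_deg: (N,) slope angles from stereo; measured_slip: (N,)
            slip measured by the rover on this terrain type."""
            X = np.asarray(slopes_deg, dtype=float).reshape(-1, 1)
            model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5)
            model.fit(X, measured_slip)
            return model

        # Prediction at a distance, from visual information only:
        # slip = fit_slip_model(train_slopes, train_slip).predict([[12.0]])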

    Vision based environment perception system for next generation off-road ADAS : innovation report

    Advanced Driver Assistance Systems (ADAS) aid the driver by providing information or by automating driving-related tasks to improve driver comfort, reduce workload and improve safety. The vehicle senses its external environment using sensors, building a representation of the world used by the control systems. In on-road applications, perception focuses on establishing the location of other road participants, such as vehicles and pedestrians, and on identifying the road trajectory. Perception in the off-road environment is more complex, as the structure found in urban environments is absent. Off-road perception deals with the estimation of surface topography and surface type, which are the factors that affect vehicle behaviour in unstructured environments. Off-road perception has seldom been explored in an automotive context. For autonomous off-road driving, the existing perception solutions are primarily related to robotics and are not directly applicable in the ADAS domain due to the different goals of unmanned autonomous systems, their complexity, and the cost of the employed sensors. Such applications consider only the impact of the terrain on vehicle safety and progress, but do not account for driver comfort and assistance. This work addresses the problem of processing vision sensor data to extract the required information about the terrain. The main focus is on the perception task under the constraints of automotive sensors and the requirements of ADAS systems. By providing a semantic representation of the off-road environment, including terrain attributes such as terrain type, a description of the terrain topography, and surface roughness, the perception system can cater for the requirements of the next generation of off-road ADAS proposed by Land Rover. Firstly, a novel and computationally efficient terrain recognition method was developed. The method facilitates real-time recognition of low-friction grass surfaces with high accuracy by applying a machine-learning Support Vector Machine to illumination-invariant normalised RGB colour descriptors (a code sketch follows this abstract). The proposed method was analysed and its performance was evaluated experimentally in off-road environments. Terrain recognition performance was evaluated on a variety of surface types including grass, gravel and tarmac, showing high grass detection performance with an accuracy of 97%. Secondly, a terrain geometry identification method was proposed which facilitates semantic representation of the terrain in terms of macro terrain features such as slopes, crests and ditches. The method processes 3D information reconstructed from stereo imagery and constructs a compact grid representation of the surface topography, which is further processed to extract object representations of slopes, ditches and crests. Thirdly, a novel method for surface roughness identification was proposed. Surface roughness is described by the power spectral density of the surface profile, which correlates with the acceleration experienced by the vehicle. The surface roughness descriptor is then mapped onto a vehicle speed recommendation, so that the speed of the vehicle can be adapted in anticipation of the surface roughness while maintaining passenger comfort (also sketched below).
    Terrain geometry and surface roughness identification performance were evaluated on a range of off-road courses with varying topology, showing the capability of the system to correctly identify terrain features up to 20 m ahead of the vehicle and to analyse surface roughness up to 15 m ahead of the vehicle. The speed was recommended correctly within +/- 5 kph. Further, the impact of the perception system on speed adaptation was evaluated, showing improvements in speed adaptation allowing for greater passenger comfort. The developed perception components facilitated the development of new off-road ADAS systems and were successfully applied in prototype vehicles. The proposed off-road ADAS are planned to be introduced in future generations of Land Rover products. The benefits of this research also include new Intellectual Property generated for Jaguar Land Rover. In the wider context, the enhanced off-road perception capability may facilitate further development of off-road automated driving and off-road autonomy within the constraints of the automotive platform.
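
    A minimal sketch of the grass-recognition step referenced above: per-patch illumination-invariant normalised RGB (chromaticity) descriptors fed to an SVM. Patch handling and SVM settings are assumptions, not the report's tuned pipeline.

        # Sketch: normalised RGB descriptor + SVM terrain classifier.
        import numpy as np
        from sklearn.svm import SVC

        def normalised_rgb(patch):
            """patch: (H, W, 3) uint8 RGB -> mean (r, g, b) chromaticity."""
            rgb = patch.reshape(-1, 3).astype(float)
            s = rgb.sum(axis=1, keepdims=True)
            s[s == 0] = 1.0                    # guard against black pixels
            return (rgb / s).mean(axis=0)      # invariant to illumination level

        def train_grass_detector(patches, is_grass):
            X = np.array([normalised_rgb(p) for p in patches])
            return SVC(kernel="rbf").fit(X, np.asarray(is_grass, dtype=int))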
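
    And a sketch of the roughness-to-speed idea: a Welch power spectral density of the reconstructed surface profile is integrated into a scalar index and mapped monotonically to a recommended speed. The frequency band, reference roughness, and speed map below are invented for illustration; the report calibrates this mapping against the acceleration experienced by the vehicle.

        # Sketch: PSD-based surface roughness -> comfort speed recommendation.
        import numpy as np
        from scipy.signal import welch

        def roughness_index(height_profile_m, sample_spacing_m=0.05):
            f, psd = welch(height_profile_m, fs=1.0 / sample_spacing_m)
            band = (f > 0.1) & (f < 5.0)       # assumed band of interest
            return np.trapz(psd[band], f[band])

        def recommend_speed_kph(rough, smooth_ref=1e-4, v_max=60.0, v_min=10.0):
            # Rougher surface -> lower recommended speed (simple monotone map).
            scale = max(rough / smooth_ref, 1.0)
            return max(v_min, v_max / np.sqrt(scale))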

    An Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data

    In this thesis, we introduce a novel architecture called Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data (iARTEC). The proposed architecture integrates different terrain characterization and classification components with other robotic system components. Within iARTEC, we consider the problem of having a legged robot autonomously learn to identify different terrains. Robust terrain identification can be used to enhance the capabilities of legged robot systems, both in terms of locomotion and navigation. For example, a robot that has learned to differentiate sand from gravel can autonomously modify (or even select a different) path in favor of traversing better terrain. The same knowledge of the terrain type can also be used to guide a robot in order to avoid specific terrains. To tackle this problem, we developed four approaches for terrain characterization, classification, path planning, and control for a mobile legged robot. We developed a particle-system-inspired approach to estimate the robot foot-ground contact interaction forces. The approach is derived from the well-known Bekker theory and estimates the contact forces based on its point contact model concepts. It realistically models real-time 3-dimensional contact behavior between rigid-body objects and the soil. For a real-time capable implementation, the approach is reformulated to use a lookup table generated from simple contact experiments of the robot foot with the terrain. We also introduced a short-range terrain classifier using the robot's embodied data. The classifier is based on a supervised machine learning approach that optimizes the classifier parameters and classifies terrain using proprioceptive sensor measurements. The learning framework preprocesses sensor data through channel reduction and filtering, such that the classifier is trained on feature vectors that are closely associated with the terrain class. For long-range terrain type prediction using the robot's exteroceptive data, we present an online visual terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm that is robust to changes in illumination and viewpoint. For this algorithm, we extract local terrain features using Speeded-Up Robust Features (SURF), encode them using the Bag of Words (BoW) technique, and classify the resulting descriptors using Support Vector Machines (SVMs); a code sketch of this pipeline follows the abstract. In addition, we describe a terrain-dependent navigation and path planning approach that is based on the E* planner and employs a proposed metric specifying the navigation costs associated with terrain types. The generated path naturally avoids obstacles and favors terrains with lower values of the metric. At the low level, a proportional input-scaling controller is designed and implemented to autonomously steer the robot along the desired path in a stable manner. iARTEC's performance was tested and validated experimentally using several different sensing modalities (proprioceptive and exteroceptive) on the six-legged robotic platform CREX. The results show that the proposed architecture, integrating the aforementioned approaches with the robotic system, allowed the robot to learn both robot-terrain interaction and remote terrain perception models, as well as the relations linking those models. This learning mechanism is driven by the robot's own embodied data.
    Based on the knowledge available, the approach makes use of the detected remote terrain classes to predict the most probable navigation behavior. With the assigned metric, the performance of the robot on a given terrain is predicted, which allows the navigation of the robot to be influenced by the learned models. Finally, we believe that iARTEC and the methods proposed in this thesis can likely also be implemented on other robot types (such as wheeled robots), although we did not test this option in our work.
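
    The long-range visual pipeline (local features, Bag of Words, SVM) can be sketched as below. SURF requires OpenCV's non-free build, so ORB is substituted here as a freely available detector/descriptor; the vocabulary size and classifier settings are assumptions.

        # Sketch: local features -> visual-word histogram (BoW) -> SVM.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def bow_histogram(img_gray, orb, kmeans):
            _, desc = orb.detectAndCompute(img_gray, None)
            words = kmeans.predict(desc.astype(np.float32))
            hist, _ = np.histogram(words, bins=kmeans.n_clusters,
                                   range=(0, kmeans.n_clusters))
            return hist / max(hist.sum(), 1)    # normalised word histogram

        def train_terrain_classifier(images_gray, labels, vocab_size=50):
            """images_gray: list of 8-bit grayscale terrain images."""
            orb = cv2.ORB_create()
            all_desc = np.vstack([orb.detectAndCompute(im, None)[1]
                                  for im in images_gray])
            kmeans = KMeans(n_clusters=vocab_size, n_init=10)
            kmeans.fit(all_desc.astype(np.float32))
            X = np.array([bow_histogram(im, orb, kmeans) for im in images_gray])
            return orb, kmeans, SVC(kernel="rbf").fit(X, labels)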

    3D position tracking for all-terrain robots

    Rough-terrain robotics is a fast evolving field of research, and much effort is devoted to enabling a greater level of autonomy for outdoor vehicles. Such robots find application in the scientific exploration of hostile environments such as deserts, volcanoes, the Antarctic, or other planets. They are also of high interest for search and rescue operations after natural or man-made disasters. The challenges in bringing autonomy to all-terrain rovers are wide-ranging. In particular, it requires systems capable of navigating reliably with only partial information about the environment and with limited perception and locomotion capabilities. Among the required functionalities, locomotion and position tracking are the most critical: the robot cannot fulfill its task if an inappropriate locomotion concept and control is used, and global path planning fails if the rover loses track of its position. This thesis addresses both aspects: a) efficient locomotion and b) position tracking in rough terrain. The Autonomous System Lab developed an off-road rover (Shrimp) showing excellent climbing capabilities, surpassing most existing similar designs. Such exceptional climbing performance extends the range of areas a robot can explore. In order to further improve the climbing capabilities and the locomotion efficiency, a control method minimizing wheel slip was developed in this thesis. Unlike other control strategies, the proposed method does not require the use of soil models. Independence from such models is very significant because the ability to operate on different types of soil is the main requirement for exploration missions. Moreover, the approach can be adapted to any kind of wheeled rover, and the required processing power remains relatively low, which makes online computation feasible. In rough terrain, tracking the robot's position is difficult because of the large variation of the ground; further, the field of view can vary significantly between two data acquisition cycles. In this thesis, a method for probabilistically combining different types of sensors to produce a robust motion estimate for an all-terrain rover is presented (the core fusion idea is sketched in code after this abstract). The proposed sensor fusion scheme is flexible in that it can easily accommodate any number of sensors of any kind. In order to test the algorithm, the following sensory inputs were used in the experiments: 3D-Odometry, an inertial measurement unit (accelerometers, gyroscopes), and visual odometry. The 3D-Odometry was specially developed in the framework of this research. Because it accounts for ground slope discontinuities and the rover kinematics, this technique yields a reasonably precise 3D motion estimate in rough terrain. The experiments provided excellent results and proved that the use of complementary sensors increases the robustness and accuracy of the pose estimate. In particular, this work distinguishes itself from other similar research projects in the following ways: the sensor fusion is performed with more than two sensor types, and it is applied a) in rough terrain and b) to track the full 3D pose of the rover. Another result of this work is the design of a high-performance platform for conducting further research.
    In particular, the rover is equipped with two computers, a stereovision module, an omnidirectional vision system, an inertial measurement unit, numerous sensors and actuators, and electronics for power management. Further, a set of powerful tools was developed to speed up the process of debugging algorithms and analyzing data recorded during the experiments. Finally, the modularity and portability of the system enable easy integration of new actuators and sensors. All these characteristics accelerate research in this field.
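
    A minimal sketch of the probabilistic combination, assuming each sensor supplies a per-cycle 3D motion increment together with a scalar variance. Inverse-variance weighting is the basic building block; the thesis' full estimator over the complete 3D pose is considerably richer.

        # Sketch: fuse motion increments from 3D-Odometry, IMU integration,
        # and visual odometry by inverse-variance weighting.
        import numpy as np

        def fuse_increments(increments, variances):
            """increments: (n_sensors, 3) estimated (dx, dy, dz) per sensor;
            variances: (n_sensors,) confidence of each estimate (assumed)."""
            inc = np.asarray(increments, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)
            fused = (inc * w[:, None]).sum(axis=0) / w.sum()
            fused_var = 1.0 / w.sum()          # tighter than any single sensor
            return fused, fused_var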

    Improving perception and locomotion capabilities of mobile robots in urban search and rescue missions

    Deployment of mobile robots in search and rescue missions is a way to make the job of human rescuers safer and more efficient. Such missions, however, require robots to be resilient to the harsh conditions of natural disasters or human-inflicted accidents. They have to operate on unstable rough terrain, in confined spaces, or in sensory-deprived environments filled with smoke or dust. Localization, a common task in mobile robotics which involves determining position and orientation with respect to a given coordinate frame, faces these conditions as well. In this thesis, we describe the development of a localization system for a tracked mobile robot intended for search and rescue missions. We present a proprioceptive 6-degrees-of-freedom localization system, which arose from an experimental comparison of several possible sensor fusion architectures. The system was then modified to incorporate exteroceptive velocity measurements, which significantly improve accuracy by reducing localization drift. Special attention was given to potential sensor outages and failures, to the track slippage that inevitably occurs with this type of robot, to the computational demands of the system, and to the different sampling rates at which sensory data arrive (a sketch of such a multi-rate update loop follows this abstract). Additionally, we addressed the problem of kinematic models for tracked odometry on rough terrain containing vertical obstacles.
    Thanks to the research projects the robot was designed for, we had access to training facilities used by fire brigades in Italy, Germany and the Netherlands. The accuracy and robustness of the proposed localization systems were tested in conditions closely resembling those seen in earthquake aftermaths and industrial accidents. The datasets used to test our algorithms are publicly available and are one of the contributions of this thesis. This thesis takes the form of a compilation of three published papers and one paper under review.
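
    The multi-rate aspect can be sketched as a time-ordered measurement queue around an abstract filter. The filter_obj interface with predict/update is hypothetical; a sensor outage simply means its stream stops yielding measurements while prediction continues.

        # Sketch: process asynchronous sensor measurements in time order.
        import heapq

        def run_filter(filter_obj, sensor_streams, t_end, dt_predict=0.01):
            """sensor_streams: name -> iterator of (timestamp, measurement)."""
            queue = []
            for name, stream in sensor_streams.items():
                first = next(stream, None)
                if first is not None:
                    heapq.heappush(queue, (first[0], name, first[1], stream))
            t = 0.0
            while t < t_end:
                filter_obj.predict(dt_predict)       # proprioceptive prediction
                t += dt_predict
                while queue and queue[0][0] <= t:    # corrections due by now
                    _, name, meas, stream = heapq.heappop(queue)
                    filter_obj.update(name, meas)    # exteroceptive correction
                    nxt = next(stream, None)
                    if nxt is not None:
                        heapq.heappush(queue, (nxt[0], name, nxt[1], stream))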