5 research outputs found

    Autonomous navigation of a mobile robot using inertial and visual cues

    This paper describes the development and implementation of a reactive visual module used on an autonomous mobile robot to automatically correct its trajectory. We use a multisensory mechanism based on inertial and visual cues. We report here only on the implementation and experimentation of this module; the main theoretical aspects have been developed elsewhere.

    3D-Odometry for rough terrain - Towards real 3D navigation

    Until recently, autonomous mobile robots were mostly designed to run in indoor, partly structured and flat environments. In rough terrain many problems arise and position tracking becomes more difficult: the robot has to deal with wheel slippage and large orientation changes. In this paper we first present recent developments on the off-road rover Shrimp. We then develop a new method, called 3D-Odometry, which extends standard 2D odometry to 3D space. Because it accounts for slope transitions, 3D-Odometry provides better position estimates and is a step towards real 3D navigation for outdoor robots.
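As context for the extension described above, the standard 2D odometry that 3D-Odometry builds on can be sketched for a differential-drive robot as follows. The function and parameter names are illustrative, not taken from the paper; the paper's contribution is to propagate such an estimate through slope discontinuities and the rover's kinematics in full 3D.

```python
import math

def odometry_2d(x, y, theta, d_left, d_right, wheel_base):
    """One planar (2D) odometry update from wheel-encoder increments.
    Illustrative baseline only -- 3D-Odometry extends this to 3D space."""
    d_center = (d_left + d_right) / 2.0        # distance travelled by the robot center
    d_theta = (d_right - d_left) / wheel_base  # heading change over the step
    # integrate along the arc using the mid-point heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```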

    Smart Localization Using a New Sensor Association Framework for Outdoor Augmented Reality Systems

    Augmented Reality (AR) aims at enhancing the real world by adding fictitious elements that are not naturally perceptible, such as computer-generated images, virtual objects, text, symbols, graphics, sounds, and smells. The quality of the real/virtual registration depends mainly on the accuracy of the 3D camera pose estimation. In this paper, we present an original real-time localization system for outdoor AR which combines three heterogeneous sensors: a camera, a GPS receiver, and an inertial sensor. The proposed system is subdivided into two modules: the main module is vision-based and estimates the user's location using a markerless tracking method; when visual tracking fails, the system switches automatically to the secondary localization module, composed of the GPS and the inertial sensor.
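The two-module switching behaviour described above can be sketched as a simple fallback rule. All names here (`estimate_pose`, `position`, `orientation`) are hypothetical stand-ins, not the paper's API:

```python
def localize(frame, gps, imu, visual_tracker):
    """Sketch of a primary/secondary localization switch: vision first,
    GPS + inertial fallback.  Hypothetical interfaces, for illustration."""
    pose = visual_tracker.estimate_pose(frame)  # markerless visual tracking
    if pose is not None:                        # primary module succeeded
        return pose, "vision"
    # fallback: position from the GPS, orientation from the inertial sensor
    return (gps.position(), imu.orientation()), "gps+imu"
```

In the paper's system the switch back to vision would happen as soon as tracking recovers; the sketch makes that decision per frame.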

    Real time correlation-based stereo: algorithm, implementations and applications

    This paper describes some of the work on stereo that has been going on at INRIA over the last four years. The work has concentrated on obtaining dense, accurate and reliable range maps of the environment at rates compatible with the real-time constraints of such applications as the navigation of mobile vehicles in man-made or natural environments. The class of algorithms selected among several candidates is that of correlation-based stereo algorithms, because they are the only ones that can produce sufficiently dense range maps with an algorithmic structure which lends itself nicely to fast implementations, thanks to the simplicity of the underlying computation. We describe the various improvements that we have brought to the original idea, including validation and characterization of the quality of the matches, a recursive implementation of the score computation which makes the method independent of the size of the correlation window, and a calibration method which does not require the use of a calibration pattern. We then describe two implementations of this algorithm on two very different pieces of hardware. The first implementation is on a board with four digital signal processors designed jointly with Matra MSII; it can produce 64x64 range maps at rates varying between 200 and 400 ms, depending on the range of disparities. The second implementation is on a board developed by DEC-PRL and can perform the cross-correlation of two 256x256 images in 140 ms. The first implementation has been integrated in the navigation system of the INRIA cart and used to correct for inertial and odometric errors in navigation experiments both indoors and outdoors on roads; this is the first application of our correlation-based algorithm described in the paper.
The second application has been carried out jointly with the French national space agency (CNES) to study the possibility of using stereo on a future planetary rover for the construction of digital elevation maps. We have shown that real-time stereo is possible today at low cost and can be applied in real applications. The algorithm described is not the most sophisticated available, but we have made it robust and reliable thanks to a number of improvements. Even though each of these improvements is not earth-shattering from a pure research point of view, together they have allowed us to cross a very important threshold: the difference between a program that runs in the laboratory on a few images and one that works continuously for hours on a sequence of stereo pairs and produces results at such rates and of such quality that they can be used to guide a real vehicle or to produce digital elevation maps. We believe that this threshold has been reached in only a very small number of cases.
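The recursive score computation mentioned in the abstract is, at its core, a running-sum update: when the correlation window slides by one position, the new score is the previous score plus the entering cost minus the leaving one, so the per-pixel work no longer depends on the window size. A minimal 1D sketch of that idea (illustrative, not the INRIA implementation):

```python
def sliding_window_scores(costs, win):
    """Running-sum window scores: O(1) work per position regardless of
    the window size `win`.  `costs` would be per-pixel dissimilarities
    (e.g. absolute intensity differences) along a scanline."""
    scores = [sum(costs[:win])]          # first window computed directly
    for i in range(win, len(costs)):
        # slide by one: add the entering cost, subtract the leaving one
        scores.append(scores[-1] + costs[i] - costs[i - win])
    return scores
```

In full 2D correlation the same trick is applied along rows and columns, which is what decouples the matching cost from the correlation-window area.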

    3D position tracking for all-terrain robots

    Rough terrain robotics is a fast evolving field of research and a lot of effort is deployed towards enabling a greater level of autonomy for outdoor vehicles. Such robots find their application in scientific exploration of hostile environments like deserts, volcanoes, the Antarctic or other planets. They are also of high interest for search and rescue operations after natural or man-made disasters. The challenges in bringing autonomy to all-terrain rovers are wide-ranging. In particular, it requires the development of systems capable of navigating reliably with only partial information about the environment, and with limited perception and locomotion capabilities. Among all the required functionalities, locomotion and position tracking are the most critical: the robot cannot fulfill its task if an inappropriate locomotion concept and control is used, and global path planning fails if the rover loses track of its position. This thesis addresses both aspects: a) efficient locomotion and b) position tracking in rough terrain. The Autonomous System Lab developed an off-road rover (Shrimp) showing excellent climbing capabilities and surpassing most of the existing similar designs. Such exceptional climbing performance extends the range of areas a robot can explore. In order to further improve the climbing capabilities and the locomotion efficiency, a control method minimizing wheel slip has been developed in this thesis. Unlike other control strategies, the proposed method does not require the use of soil models. Independence from these models is very significant because the ability to operate on different types of soil is the main requirement for exploration missions. Moreover, our approach can be adapted to any kind of wheeled rover, and the processing power needed remains relatively low, which makes online computation feasible.
In rough terrain, tracking the robot's position is difficult because of the large variations of the ground. Furthermore, the field of view can change significantly between two data acquisition cycles. In this thesis, a method for probabilistically combining different types of sensors to produce a robust motion estimate for an all-terrain rover is presented. The proposed sensor fusion scheme is flexible in that it can easily accommodate any number of sensors of any kind. In order to test the algorithm, we chose the following sensory inputs for the experiments: 3D-Odometry, an inertial measurement unit (accelerometers, gyros) and visual odometry. 3D-Odometry has been specially developed in the framework of this research. Because it accounts for ground slope discontinuities and the rover kinematics, this technique yields a reasonably precise 3D motion estimate in rough terrain. The experiments provided excellent results and proved that the use of complementary sensors increases the robustness and accuracy of the pose estimate. In particular, this work distinguishes itself from other similar research projects in the following ways: the sensor fusion is performed with more than two sensor types, and it is applied a) in rough terrain and b) to track the real 3D pose of the rover. Another result of this work is the design of a high-performance platform for conducting further research. In particular, the rover is equipped with two computers, a stereovision module, an omnidirectional vision system, an inertial measurement unit, numerous sensors and actuators, and electronics for power management. Furthermore, a set of powerful tools has been developed to speed up the process of debugging algorithms and analyzing data recorded during the experiments. Finally, the modularity and portability of the system enables easy integration of new actuators and sensors. All these characteristics speed up research in this field.
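As a toy illustration of how a probabilistic fusion scheme can accommodate an arbitrary number of sensors, independent Gaussian estimates of the same quantity can be combined by inverse-variance weighting. This is a simplified stand-in for the thesis' fusion of 3D-Odometry, inertial and visual-odometry inputs, not the actual filter used there:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of any number of independent
    (value, variance) estimates.  Adding a sensor just means appending
    one more pair; the fused variance never exceeds the best input's."""
    inv_vars = [1.0 / var for _, var in estimates]
    total = sum(inv_vars)                              # total information
    value = sum(v / var for v, var in estimates) / total
    return value, 1.0 / total                          # fused mean, variance
```

A less certain sensor (larger variance) simply contributes less weight, which is why complementary sensors improve both robustness and accuracy of the pose estimate.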