
    Monocular navigation for long-term autonomy

    We present a reliable and robust monocular navigation system for an autonomous vehicle. The proposed method is computationally efficient, requires only off-the-shelf equipment and does not need any additional infrastructure such as radio beacons or GPS. Contrary to traditional localization algorithms, which use advanced mathematical methods to determine vehicle position, our method takes a more practical approach: an image-feature-based monocular vision technique determines only the heading of the vehicle, while the vehicle's odometry is used to estimate the distance traveled. We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded. The experiments demonstrate that the method can cope with variable illumination, lighting deficiency and both short- and long-term environment changes. This makes the method especially suitable for deployment in scenarios which require long-term autonomous operation.
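The heading-only principle described above can be sketched in a few lines: the median horizontal displacement of matched image features is mapped to a steering correction, while odometry handles the distance. The field of view, image width and linear pixel-to-angle mapping below are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of heading-only visual correction, assuming a
# linear mapping between pixel offset and steering angle.
# FOV_DEG and IMAGE_WIDTH are illustrative values, not the paper's.

from statistics import median

FOV_DEG = 60.0       # assumed horizontal field of view
IMAGE_WIDTH = 640    # assumed image width in pixels

def heading_correction(matched_shifts_px):
    """Turn the median horizontal shift of matched image features
    into a heading correction in degrees."""
    if not matched_shifts_px:
        return 0.0   # no features matched: keep the current heading
    shift = median(matched_shifts_px)      # median is robust to outlier matches
    return shift * FOV_DEG / IMAGE_WIDTH   # pixels -> degrees

# Example: features shifted roughly 32 px, with one outlier match
print(heading_correction([30, 35, 32, -5, 33]))  # -> 3.0
```

Using the median rather than the mean keeps a few false feature matches from corrupting the steering command.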

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model, which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot which repeats a previously taught path can simply 'replay' the learned velocities, while using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot which traverses a taught path by only correcting its heading. Then, we outline a mathematical proof which shows that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, it does not require camera calibration, and it can learn and autonomously traverse arbitrarily-shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav. Comment: the paper will be presented at IROS 2018 in Madrid.
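The 'replay' loop described above can be sketched minimally: the robot re-issues the taught forward velocities, indexed by odometric distance, and uses the camera only to correct heading. The toy unicycle model, proportional steering gain and 0.1 s timestep below are illustrative assumptions, not taken from the stroll_bearnav implementation.

```python
# Sketch of a teach-and-repeat 'replay' loop: no explicit localisation,
# only heading correction from vision. FakeRobot, the gain and the
# timestep are illustrative assumptions.

def repeat_path(taught, robot, gain=0.5):
    """taught: list of (target_distance_m, forward_velocity) segments."""
    for target_dist, forward_v in taught:
        while robot.odometry() < target_dist:
            # vision supplies only a heading error, not a full pose
            err = robot.heading_error_from_camera()
            robot.drive(forward=forward_v, angular=gain * err)

class FakeRobot:
    """Toy unicycle model: integrates distance and heading in 0.1 s steps."""
    def __init__(self, heading_error=0.2):
        self.dist = 0.0
        self.heading = heading_error   # initial offset from the taught heading
    def odometry(self):
        return self.dist
    def heading_error_from_camera(self):
        return -self.heading           # steer back toward the taught path
    def drive(self, forward, angular):
        self.dist += forward * 0.1
        self.heading += angular * 0.1

robot = FakeRobot()
repeat_path([(0.5, 1.0), (1.0, 0.8)], robot)
print(round(robot.heading, 3))   # -> 0.108 (shrunk from the initial 0.2)
```

Each step multiplies the heading error by 0.95, so the error contracts along the path, which mirrors the non-divergence claim of the convergence theorem in spirit.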

    Perception for mobile robot navigation: A survey of the state of the art

    In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. Many different sensing modalities can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety; this paper summarizes and compares several competing sonar-based obstacle avoidance techniques. Another issue is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
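The landmark-triangulation class of pose methods can be illustrated with a minimal 2D sketch. It assumes noise-free absolute bearings (e.g. obtained with a compass) to two artificial landmarks at known, non-collinear positions; the function name and interface are illustrative.

```python
# Sketch of pose determination by triangulation from two fixed landmarks.
# Assumes absolute bearings and noise-free measurements (illustrative
# simplifications).

import math

def triangulate(l1, b1, l2, b2):
    """Position from bearings b1, b2 (radians) to known landmarks l1, l2.
    Solves  l1 - r1*d1 = l2 - r2*d2  for the unknown ranges r1, r2."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    det = -d1[0] * d2[1] + d2[0] * d1[1]   # nonzero iff bearings differ
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    r1 = (-rx * d2[1] + d2[0] * ry) / det  # Cramer's rule for r1
    return (l1[0] - r1 * d1[0], l1[1] - r1 * d1[1])

# A robot at the origin sees landmark (1, 1) at 45 deg and (2, 0) at 0 deg
x, y = triangulate((1.0, 1.0), math.pi / 4, (2.0, 0.0), 0.0)
```

With real sensors the bearings are noisy, which is why practical systems fuse more than two landmarks; the two-landmark case only fixes the geometry.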

    E-Learning: Case Studies in Web-Controlled Devices and Remote Manipulation

    Chances are that distance learning will transparently extend colleges and institutes of education and could plausibly become a preferred choice of higher education, especially for adult and working students. The main idea in e-learning is to build adequate solutions that can deliver educational training over the Internet, without requiring a personal presence at the degree-offering institution. The advantages are immediate and of unique importance; to enumerate a few: education costs can be reduced dramatically, both from the student's perspective and the institution's (no need for room and board, for example); the tedious immigration and naturalization issues common with international students are eliminated; the limited campus facilities, faculty members and course schedules an institution can offer are no longer a boundary; and working adults can upgrade their skills without changing their lifestyles. We present in this material a sequence of projects developed at the University of Bridgeport that can serve well in distance-learning education, ranging from simple "hobby"-style training to professional guidance material. The projects have an engineering/laboratory flavor and are presented in an arbitrary order, with topics ranging from vision and sensing to engineering design, scheduling, remote control and operation.

    Vision-Based Path Following Without Calibration


    Registration of stereo images using a RANSAC-based procedure with a geometric constraint on hypothesis generation

    An approach for registration of sparse feature sets detected in two stereo image pairs taken from two different views is proposed. Analogously to many existing image registration approaches, our method consists of initial matching of features using local descriptors, followed by a RANSAC-based procedure. The proposed approach is especially suitable for cases where there is a high percentage of false initial matches. The strategy proposed in this paper is to modify the hypothesis generation step of the basic RANSAC approach by performing a multiple-step procedure which uses geometric constraints in order to reduce the probability of false correspondences in generated hypotheses. The algorithm needs approximate information about the relative camera pose between the two views; however, the uncertainty of this information is allowed to be rather high. The presented technique is evaluated using both synthetic data and real data obtained by a stereo camera system.
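The constrained hypothesis-generation idea can be sketched as follows. As an illustrative stand-in for the paper's pose-based constraints, this sketch uses the fact that a rigid motion preserves pairwise 3D distances; the two-match sample size and the tolerance are assumptions, not the paper's values.

```python
# Sketch of constraint-guided hypothesis sampling for RANSAC on 3D
# feature sets from two stereo views: later sample points are drawn
# only from matches geometrically consistent with those already chosen.
# The distance-preservation constraint stands in for the paper's
# pose-based constraints (an illustrative substitution).

import math
import random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def sample_hypothesis(matches, tol=0.05, tries=50):
    """matches: list of (point_in_view1, point_in_view2) 3D pairs.
    Returns two matches whose pairwise distance agrees in both views,
    or None if no consistent pair is found."""
    for _ in range(tries):
        first = random.choice(matches)
        p1, q1 = first
        # constrained step: keep only candidates consistent with match 1
        pool = [m for m in matches if m is not first
                and abs(dist(p1, m[0]) - dist(q1, m[1])) < tol]
        if pool:
            return first, random.choice(pool)
    return None

random.seed(0)
shift = (5.0, 5.0, 5.0)   # ground-truth motion: a pure translation
pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (3, 3, 3)]
matches = [(p, tuple(a + b for a, b in zip(p, shift))) for p in pts]
matches.append(((9, 9, 9), (0, 0, 1)))   # a grossly wrong initial match
h = sample_hypothesis(matches)
print(h is not None)   # -> True
```

Because the wrong match violates the distance constraint against every inlier, it can never end up inside a generated hypothesis, which is exactly the effect the multi-step constrained sampling aims for.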

    Mobile robot navigation using the range-weighted Hough transform


    Curvature-Based Environment Description for Robot Navigation Using Laser Range Sensors

    This work proposes a new feature detection and description approach for mobile robot navigation using 2D laser range sensors. The whole process consists of two main modules: a sensor data segmentation module and a feature detection and characterization module. The segmentation module is divided into two consecutive stages: first, the segmentation stage divides the laser scan into clusters of consecutive range readings using a distance-based criterion; then, the second stage estimates the curvature function associated with each cluster and uses it to split the cluster into a set of straight-line and curve segments. The curvature is calculated using a triangle-area representation where, contrary to previous approaches, the triangle side lengths at each range reading are adapted to the local variations of the laser scan, removing noise without missing relevant points. This representation is invariant to translation and rotation, and it is also robust against noise; thus, it provides the same segmentation results even when the scene is perceived from different viewpoints. The segmentation results are then used to characterize the environment using line and curve segments, real and virtual corners, and edges. Real scan data collected in different environments using different platforms are used in the experiments to evaluate the proposed environment description algorithm.
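The triangle-area curvature signal can be sketched as follows, using a fixed neighbour offset k in place of the paper's adaptive side lengths (a deliberate simplification for illustration).

```python
# Sketch of the triangle-area curvature signal for a 2D laser scan.
# The fixed neighbour offset k replaces the paper's adaptive side
# lengths (illustrative simplification).

def triangle_area(a, b, c):
    """Signed area of triangle (a, b, c); the sign encodes turn direction."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                  - (c[0] - a[0]) * (b[1] - a[1]))

def curvature_signal(points, k=1):
    """Signed triangle areas along the scan: near zero on straight
    segments, large magnitude at corners."""
    sig = []
    for i in range(k, len(points) - k):
        sig.append(triangle_area(points[i - k], points[i], points[i + k]))
    return sig

# An L-shaped scan: straight run, a corner, then another straight run
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(curvature_signal(pts))   # -> [0.0, 0.0, 0.5, 0.0]
```

The corner shows up as the single nonzero entry; splitting the cluster at peaks of this signal yields the straight-line and curve segments described above.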