
    Robotic weeding of a maize field based on navigation data of the tractor that performed the seeding (Preprint)

    This research presents robotic weeding of a maize field based on navigation data of the tractor that performed the seeding. The availability of tractors equipped with RTK-DGPS based automatic guidance potentially enables robots to perform subsequent tasks in the same field. In an experiment, a tractor guidance system generated a route for sowing based on an initial path consisting of two logged positions (an A-B line) and then planned the subsequent paths parallel to the initial path, one working width apart. After sowing the maize, the A-B line was transferred to the Intelligent Autonomous Weeder (IAW) of Wageningen University. The IAW generated a route plan based on this A-B line and eight coordinates defining the borders of the field and the two headlands. It then successfully performed autonomous weeding of the entire field except for the headlands. The row width was 75 cm and the width of the hoes mounted on the robot was 50 cm. The results show that it is possible to perform robotic weeding at field level with high accuracy based on navigation data of the tractor that performed the sowing.
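
    The core geometric step described here, deriving parallel swaths from a two-point A-B line, is simple to illustrate. The sketch below is a minimal, hypothetical version in a local metric frame; the IAW's actual planner also handles field borders and headland turns, and all names and parameters here are assumptions.

```python
import math

def plan_parallel_paths(a, b, working_width, n_paths):
    """Plan paths parallel to an A-B line, one working width apart.

    A minimal sketch: `a` and `b` are (x, y) positions in a local
    metric frame (e.g. projected from RTK-DGPS fixes). Real systems
    would also plan headland turns and respect field borders.
    """
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    # Unit normal to the A-B line; lateral offsets are applied along it.
    nx, ny = -dy / length, dx / length
    paths = []
    for i in range(n_paths):
        off = i * working_width
        start = (a[0] + off * nx, a[1] + off * ny)
        end = (b[0] + off * nx, b[1] + off * ny)
        paths.append((start, end))
    return paths

# Example with the row spacing from the abstract (75 cm):
swaths = plan_parallel_paths((0.0, 0.0), (0.0, 100.0),
                             working_width=0.75, n_paths=4)
```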

    Robot Goes Back Home Despite All the People

    We have developed a navigation system that enables a mobile robot to autonomously return to its start point after completing a route. It works efficiently even in complex, weakly structured, and populated indoor environments. A point-based map of the environment is built as the robot explores new areas; the map is used for both localization and obstacle avoidance. Points corresponding to dynamic objects are removed from the map so that they do not adversely affect navigation. The algorithms and results we deem most relevant are explained in the paper.
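
    One way to realize the removal of dynamic points is to keep only points that are re-observed consistently across scans. The following is a simplified sketch of that idea, not the authors' implementation; the grid resolution and hit threshold are assumptions.

```python
from collections import defaultdict

def filter_dynamic_points(scans, cell=0.1, min_hits=3):
    """Keep only map points re-observed across several scans.

    A simplified sketch of the idea in the abstract: points that
    appear in too few scans are treated as belonging to dynamic
    objects (e.g. people) and removed from the point-based map.
    """
    hits = defaultdict(int)
    for scan in scans:  # each scan: iterable of (x, y) points
        # Quantize each point to a grid cell so re-observations match.
        seen = {(round(x / cell), round(y / cell)) for x, y in scan}
        for key in seen:
            hits[key] += 1
    # Return cell centres observed at least `min_hits` times.
    return [(kx * cell, ky * cell)
            for (kx, ky), n in hits.items() if n >= min_hits]
```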

    PERFORMANCE EVALUATION OF OUTDOOR NAVIGATION ALGORITHMS FOR THE WHEELCHAIR ROBOT

    This paper proposes navigation algorithms for a mobile robot based on odometry. The proposed algorithms include an odometry-based algorithm, which uses only odometry calculated from robot motions, and a visual-assisted algorithm, which additionally applies visual data to assist navigation. The visual-assisted algorithm adds a convolutional neural network with a regression setup on top of the odometry; its goal is to help localize the robot during navigation by recognizing the scene from camera images. The navigation algorithms are tested on outdoor navigation tasks along a specified route. The experiments cover two situations on the same route: with obstacles and without obstacles. Experimental results show that navigation using only odometry is sufficient in the experimental environments. The visual-assisted algorithm proves to be an interesting way of improving on odometry, and a large number of improvements and optimizations of visual techniques for outdoor robot navigation remain to be studied and implemented.
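
    For readers unfamiliar with odometry calculated from robot motions, here is the standard dead-reckoning update for a differential-drive base. It is a textbook formulation offered for context, not the paper's exact computation, and the wheelchair robot's kinematic parameters are assumptions.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning odometry step for a differential-drive base.

    d_left / d_right are wheel travel distances since the last update;
    wheel_base is the distance between the two drive wheels.
    """
    d_center = (d_left + d_right) / 2.0          # forward motion
    d_theta = (d_right - d_left) / wheel_base    # heading change
    # Integrate along the mid-arc for a second-order-accurate update.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    # Wrap heading to (-pi, pi].
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta
```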

    Sketch-based navigation for mobile robots using qualitative landmark states

    Thesis (Ph.D.), University of Missouri-Columbia, 2007. In this work, a system for navigating a mobile robot along a sketched route is proposed. The sketch is drawn on a PDA screen by a human operator and contains approximate landmarks and a path, similar to a sketch provided to another person to reach a goal. The robot receives the sketch and detects the objects and route. It then extracts spatial relations between itself and surrounding objects at crucial nodes along the sketched route. Based on the extracted spatial relations, a sequence of Qualitative Landmark States (QLSs) and associated robot commands serves as a guide for robot navigation in the real world. The robot then executes the sketched route by matching landmark states in the real world to the extracted states. The approach is validated and tested using sketches by independent study participants, both with a real robot and in a simulator. Special sketches and robot operating environments are used to illustrate results in extreme cases and to independently test extraction, identification, and matching of QLSs. We show that QLSs based on spatial relations can be used as a common route representation between a sketched route map and a physical environment. The selection of QLSs is crucial to the success of such an approach, and the algorithm shows a way to pick the correct states for successful navigation. The approach does not depend on the number or type of sensors on the robot and does not assume a particular type of robot; the strategy can work with any sensory method that provides an object representation in two dimensions (top view). The approach also does not depend on the route chosen, or on the size, shape, or position of the objects. The algorithm can account for a certain degree of uncertainty and inconsistency in sketching (scaling of object size and position, distortions, and completeness of object representation).
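
    To make the notion of a qualitative landmark state concrete, the toy sketch below classifies a landmark's position relative to the robot into coarse relations. The dissertation's actual QLS definition is richer; this function, its names, and the two-relation scheme are illustrative assumptions only.

```python
import math

def qualitative_state(robot_pose, landmark):
    """Classify a landmark relative to the robot with coarse relations.

    A toy illustration of a qualitative landmark state: metric
    coordinates are replaced by coarse relations that can be matched
    between a sketched map and the real world.
    """
    x, y, theta = robot_pose        # robot position and heading (rad)
    lx, ly = landmark               # landmark position (top view)
    bearing = math.atan2(ly - y, lx - x) - theta
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    side = "left" if bearing > 0 else "right"
    sector = "front" if abs(bearing) < math.pi / 2 else "behind"
    return (sector, side)

# e.g. qualitative_state((0, 0, 0.0), (2, 1)) -> ("front", "left")
```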

    Insect-inspired visual navigation on-board an autonomous robot: real-world routes encoded in a single layer network

    Insect-inspired models of visual navigation, which operate by scanning for familiar views of the world, have been shown to be capable of robust route navigation in simulation. These familiarity-based navigation algorithms work by training an artificial neural network (ANN) on views from a training route, so that it can then output a familiarity score for any new view. In this paper we show that such an algorithm, with all computation performed on a small low-power robot, is capable of delivering reliable direction information along real-world outdoor routes, even when scenes contain few local landmarks and have high levels of noise (from variable lighting and terrain). Indeed, routes can be precisely recapitulated, and we show that the required computation and storage do not increase with the number of training views. The ANN thus provides a compact representation of the knowledge needed to traverse a route. In fact, rather than losing information, there are instances where the use of an ANN ameliorates the problems of suboptimal paths caused by tortuous training routes. Our results suggest the feasibility of familiarity-based navigation for long-range autonomous visual homing.
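
    The "scanning for familiar views" loop can be sketched as follows: candidate headings are simulated by shifting a panoramic image, each shifted view is scored by the trained network, and the robot steers toward the most familiar direction. This is a schematic under stated assumptions (a 1-D panoramic strip, rotation approximated by a column shift, and a generic `familiarity` scoring function), not the paper's implementation.

```python
import numpy as np

def most_familiar_heading(panorama, familiarity, step_deg=5):
    """Scan candidate headings and return the most familiar one.

    `panorama` is a 1-D intensity strip covering 360 degrees and
    `familiarity` is any trained scoring function (the ANN in the
    paper); both interfaces are assumptions for illustration.
    """
    cols = panorama.shape[-1]
    best_deg, best_score = 0, -np.inf
    for deg in range(-180, 180, step_deg):
        shift = int(deg / 360.0 * cols)
        view = np.roll(panorama, shift, axis=-1)  # simulate rotation
        score = familiarity(view)
        if score > best_score:
            best_deg, best_score = deg, score
    return best_deg  # steer toward the most familiar direction
```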

    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low- or no-visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with firefighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the feeling of security these provide: they simply ignored instructions that contradicted their routines.

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply 'replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot that traverses a taught path by only correcting its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations, and naturally occurring environment changes. Furthermore, we provide the navigation system and the gathered datasets at http://www.github.com/gestom/stroll_bearnav. Comment: The paper will be presented at IROS 2018 in Madrid.
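
    The control principle, replaying the taught velocities while steering against a visually estimated heading error, fits in a few lines. Below is a minimal sketch under stated assumptions (a recorded list of velocity pairs, a proportional steering gain, and a heading error supplied by some image-matching front end); it is not the authors' code.

```python
def repeat_step(recorded, t, heading_error, gain=0.8):
    """One control step of heading-corrected velocity replay.

    `recorded[t]` holds the (forward, angular) velocities logged at
    step t while teaching; `heading_error` is the visually estimated
    deviation from the taught heading (radians), e.g. derived from
    horizontal displacement of matched image features.
    """
    v, omega = recorded[t]          # replay the taught velocities
    omega += gain * heading_error   # camera-based heading correction
    return v, omega
```

    Note how this matches the abstract's claim: no explicit localisation is performed; the camera contributes only a heading correction, and the convergence argument is what guarantees the position error stays bounded.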