12,415 research outputs found

    Global-referenced navigation grids for off-road vehicles and environments

    The presence of automation and information technology in agricultural environments seems no longer questionable; smart spraying, variable-rate fertilizing, and automatic guidance are becoming usual management tools on modern farms. Yet such techniques are still in their infancy and offer a lively hotbed for innovation. In particular, significant research efforts are being directed toward vehicle navigation and awareness in off-road environments. However, the majority of solutions under development are based either on occupancy grids referenced with odometry and dead reckoning or on GPS waypoint following, but never on both. Yet navigation in off-road environments benefits greatly from both approaches: perception data effectively condensed in regular grids, and global references for every cell of the grid. This research proposes a framework for building globally referenced navigation grids by combining three-dimensional stereo vision with satellite-based global positioning. The construction process entails the in-field recording of perceptual information plus the geodetic coordinates of the vehicle at every image acquisition position, in addition to other basic data such as velocity, heading, and GPS quality indices. The creation of local grids occurs in real time, right after the stereo images have been captured by the vehicle in the field, but the final assembly of universal grids takes place after the acquisition phase has finished. Vehicle-fixed individual grids are then superposed onto the global grid, transferring the original perception data to universal cells expressed in Local Tangent Plane coordinates. Global referencing allows the discontinuous appendage of data, so navigation grids can be completed and updated over time across multiple mapping sessions. This methodology was validated in a commercial vineyard, where several universal grids of the crops were generated. Vine rows were correctly reconstructed, although some difficulties appeared around the headland turns as a consequence of unreliable heading estimations. Navigation information conveyed through globally referenced regular grids turned out to be a powerful tool for upcoming practical implementations within agricultural robotics. (C) 2011 Elsevier B.V. All rights reserved.

    The author would like to thank Juan Jose Pena Suarez and Montano Perez Teruel for their assistance in the preparation of the prototype vehicle, Veronica Saiz Rubio for her help during most of the field experiments, Ratul Banerjee for his contribution to the development of software, and Luis Gil-Orozco Esteve for granting permission to perform multiple tests in the vineyards of his winery, Finca Ardal. Gratitude is also extended to the Spanish Ministry of Science and Innovation for funding this research through project AGL2009-11731.

    Rovira Más, F. (2011). Global-referenced navigation grids for off-road vehicles and environments. Robotics and Autonomous Systems, 60(2), 278-287. https://doi.org/10.1016/j.robot.2011.11.007
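    The core of this construction is the coordinate chain from geodetic GPS fixes to a regular grid expressed in Local Tangent Plane coordinates. Below is a minimal Python sketch of that chain for the WGS-84 ellipsoid: geodetic to Earth-centered Earth-fixed (ECEF), ECEF to a local east-north-up (ENU) tangent plane anchored at a reference fix, and ENU to a discrete cell index. The 0.5 m cell size is an illustrative assumption, not a value from the paper.

        import math

        # WGS-84 ellipsoid constants
        A = 6378137.0               # semi-major axis (m)
        E2 = 6.69437999014e-3       # first eccentricity squared

        def geodetic_to_ecef(lat, lon, h):
            """Convert geodetic coordinates (radians, meters) to ECEF (meters)."""
            n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
            x = (n + h) * math.cos(lat) * math.cos(lon)
            y = (n + h) * math.cos(lat) * math.sin(lon)
            z = (n * (1.0 - E2) + h) * math.sin(lat)
            return x, y, z

        def ecef_to_enu(x, y, z, lat0, lon0, h0):
            """Express an ECEF point in the ENU tangent plane anchored at (lat0, lon0, h0)."""
            x0, y0, z0 = geodetic_to_ecef(lat0, lon0, h0)
            dx, dy, dz = x - x0, y - y0, z - z0
            e = -math.sin(lon0) * dx + math.cos(lon0) * dy
            n = (-math.sin(lat0) * math.cos(lon0) * dx
                 - math.sin(lat0) * math.sin(lon0) * dy
                 + math.cos(lat0) * dz)
            u = (math.cos(lat0) * math.cos(lon0) * dx
                 + math.cos(lat0) * math.sin(lon0) * dy
                 + math.sin(lat0) * dz)
            return e, n, u

        def enu_to_cell(e, n, cell_size=0.5):
            """Map an ENU position (m) to a (col, row) index in a regular global grid."""
            return int(math.floor(e / cell_size)), int(math.floor(n / cell_size))

    With this chain, every cell of a vehicle-fixed local grid can be stamped with a universal index, which is what makes discontinuous, multi-session mapping possible.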

    Autonomous control of underground mining vehicles using reactive navigation

    This paper describes how many of the navigation techniques developed by the robotics research community over the last decade may be applied to a class of underground mining vehicles (LHDs and haul trucks). We review the current state of the art in this area and conclude that there are essentially two basic methods of navigation applicable. We describe an implementation of a reactive navigation system on a 30-tonne LHD, which has achieved full-speed operation at a production mine.
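    Reactive navigation in a mine drive steers from immediate range readings rather than from a stored map. The fragment below is a generic proportional wall-centering rule of the kind such systems build on; it is not the paper's controller, and the gain and steering limit are assumed tuning values.

        def reactive_steering(left_range, right_range, gain=0.8, max_steer=0.5):
            """Steer a tunnel vehicle toward the centerline from two lateral ranges (m).

            Positive output steers right; `gain` and `max_steer` (rad) are
            illustrative values, not taken from the paper.
            """
            # Positive error means more clearance on the right, so steer right.
            error = right_range - left_range
            steer = gain * error
            return max(-max_steer, min(max_steer, steer))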

    AutonoVi: Autonomous Vehicle Planning with Dynamic Maneuvers and Traffic Constraints

    We present AutonoVi, a novel algorithm for autonomous vehicle navigation that supports dynamic maneuvers and satisfies traffic constraints and norms. Our approach is based on optimization-based maneuver planning that supports dynamic lane changes, swerving, and braking in all traffic scenarios and guides the vehicle to its goal position. We take into account various traffic constraints, including collision avoidance with other vehicles, pedestrians, and cyclists, using control velocity obstacles. We use a data-driven approach to model the vehicle dynamics for control and collision avoidance. Furthermore, our trajectory computation algorithm takes into account traffic rules and behaviors, such as stopping at intersections and stoplights, based on an arc-spline representation. We have evaluated our algorithm in a simulated environment and tested its interactive performance in urban and highway driving scenarios with tens of vehicles, pedestrians, and cyclists. These scenarios include jaywalking pedestrians, sudden stops from high speeds, safely passing cyclists, a vehicle suddenly swerving into the roadway, and high-density traffic where the vehicle must change lanes to progress more effectively.

    Comment: 9 pages, 6 figures
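    A control velocity obstacle rejects candidate velocities whose extrapolation brings the ego vehicle too close to another agent. The sketch below is a generic velocity-obstacle feasibility test under a constant-velocity assumption; AutonoVi's formulation additionally folds the vehicle's dynamics into the admissible velocity set, which this simplified check omits.

        import numpy as np

        def velocity_is_safe(p_ego, v_cand, p_obs, v_obs, radius, horizon=5.0):
            """Check that candidate velocity `v_cand` keeps the ego agent at least
            `radius` (m) from a constant-velocity obstacle over `horizon` (s)."""
            rel_p = np.asarray(p_obs, float) - np.asarray(p_ego, float)
            rel_v = np.asarray(v_obs, float) - np.asarray(v_cand, float)
            vv = float(np.dot(rel_v, rel_v))
            # Time of closest approach for straight-line motion, clamped to the horizon.
            t_star = 0.0 if vv < 1e-9 else float(np.clip(-np.dot(rel_p, rel_v) / vv, 0.0, horizon))
            closest = rel_p + rel_v * t_star
            return float(np.dot(closest, closest)) >= radius ** 2

    A planner would run such a test against every nearby agent when scoring candidate maneuvers, discarding velocities that fail it.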

    End-to-end Driving via Conditional Imitation Learning

    Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5-scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM

    Comment: Published at the International Conference on Robotics and Automation (ICRA), 2018
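    One way to realize command conditioning is a branched network: a shared perception encoder with one output head per high-level command, where the command selects which head produces the control. The PyTorch sketch below illustrates that structure; the layer sizes and the four-command vocabulary are illustrative assumptions rather than the paper's exact architecture.

        import torch
        import torch.nn as nn

        class BranchedPolicy(nn.Module):
            """Command-conditional policy: shared image encoder, one head per
            command (e.g. follow-lane, turn left, turn right, go straight)."""

            def __init__(self, n_commands=4, n_actions=2):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, 128), nn.ReLU(),
                )
                self.branches = nn.ModuleList(
                    nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_actions))
                    for _ in range(n_commands)
                )

            def forward(self, image, command):
                """`image` is (B, 3, H, W); `command` holds per-sample command indices."""
                feats = self.encoder(image)
                out = torch.stack([b(feats) for b in self.branches], dim=1)  # (B, C, A)
                idx = command.view(-1, 1, 1).expand(-1, 1, out.size(-1))
                return out.gather(1, idx).squeeze(1)                         # (B, A)

    Because the command gathers a single branch, only that branch receives gradient for a given training sample, which is what lets the command steer behavior at test time.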