
    Global-referenced navigation grids for off-road vehicles and environments

    Full text link
    [EN] The presence of automation and information technology in agricultural environments seems no longer questionable; smart spraying, variable-rate fertilizing, and automatic guidance are becoming usual management tools in modern farms. Yet such techniques are still in their nascence and offer a lively hotbed for innovation. In particular, significant research efforts are being directed toward vehicle navigation and awareness in off-road environments. However, the majority of solutions being developed are based either on occupancy grids referenced with odometry and dead reckoning, or on GPS waypoint following, but never on both. Yet navigation in off-road environments benefits greatly from both approaches: perception data effectively condensed in regular grids, and global references for every cell of the grid. This research proposes a framework to build globally referenced navigation grids by combining three-dimensional stereo vision with satellite-based global positioning. The construction process entails the in-field recording of perceptual information plus the geodetic coordinates of the vehicle at every image acquisition position, in addition to other basic data such as velocity, heading, and GPS quality indices. The creation of local grids occurs in real time right after the stereo images have been captured by the vehicle in the field, but the final assembly of universal grids takes place after the acquisition phase has finished. Vehicle-fixed individual grids are then superposed onto the global grid, transferring the original perception data to universal cells expressed in Local Tangent Plane coordinates. Global referencing allows data to be appended discontinuously, so that navigation grids can be completed and updated over multiple mapping sessions. This methodology was validated in a commercial vineyard, where several universal grids of the crops were generated. Vine rows were correctly reconstructed, although some difficulties appeared around the headland turns as a consequence of unreliable heading estimations. Navigation information conveyed through globally referenced regular grids turned out to be a powerful tool for upcoming practical implementations within agricultural robotics. (C) 2011 Elsevier B.V. All rights reserved.
    The author would like to thank Juan Jose Pena Suarez and Montano Perez Teruel for their assistance in the preparation of the prototype vehicle, Veronica Saiz Rubio for her help during most of the field experiments, Ratul Banerjee for his contribution to the development of software, and Luis Gil-Orozco Esteve for granting permission to perform multiple tests in the vineyards of his winery Finca Ardal. Gratitude is also extended to the Spanish Ministry of Science and Innovation for funding this research through project AGL2009-11731.
    Rovira Más, F. (2011). Global-referenced navigation grids for off-road vehicles and environments. Robotics and Autonomous Systems. 60(2):278-287. https://doi.org/10.1016/j.robot.2011.11.007
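    The core geometric step described above is expressing every universal cell in Local Tangent Plane (ENU) coordinates derived from the vehicle's geodetic fixes. Below is a minimal sketch of that standard conversion under the WGS-84 ellipsoid; the function names, the 1 m cell size, and the example coordinates are illustrative assumptions, not taken from the paper.

```python
import math

# Sketch: convert GPS geodetic fixes to Local Tangent Plane (ENU)
# coordinates so vehicle-fixed grid cells can be transferred to a
# globally referenced universal grid. WGS-84 ellipsoid constants.

A = 6378137.0            # WGS-84 semi-major axis [m]
E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Geodetic (deg, deg, m) to Earth-Centered Earth-Fixed coordinates."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def geodetic_to_enu(p, origin):
    """ENU offset of geodetic point p from a geodetic origin (lat, lon, h)."""
    lat0, lon0 = math.radians(origin[0]), math.radians(origin[1])
    ox, oy, oz = geodetic_to_ecef(*origin)
    dx, dy, dz = (a - b for a, b in zip(geodetic_to_ecef(*p), (ox, oy, oz)))
    e = -math.sin(lon0) * dx + math.cos(lon0) * dy
    n = (-math.sin(lat0) * math.cos(lon0) * dx
         - math.sin(lat0) * math.sin(lon0) * dy + math.cos(lat0) * dz)
    u = (math.cos(lat0) * math.cos(lon0) * dx
         + math.cos(lat0) * math.sin(lon0) * dy + math.sin(lat0) * dz)
    return e, n, u

# Index of a 1 m x 1 m universal-grid cell for a perceived point
# (illustrative coordinates near a hypothetical field origin):
e, n, _ = geodetic_to_enu((39.48052, -0.34094, 55.0), (39.48000, -0.34000, 55.0))
cell = (int(e // 1.0), int(n // 1.0))
```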

    Perception of the urban environment and navigation using robotic vision: design and implementation applied to an autonomous vehicle

    Get PDF
    Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
    Abstract: The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing life comfort, and providing cost savings. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS, and cameras. Currently, camera sensors have been receiving wide attention because they are cheap, easy to employ, and provide rich data. Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex; the presence of objects such as trees, bicycles, and cars may generate partial observations; and these observations are often noisy or even missing due to heavy occlusions. Thus, the perception process by nature needs to be able to deal with uncertainty in the knowledge of the world around the car. While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on the decision-making process of autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring previous knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel method based on machine learning is proposed to extract the semantic context from a pair of stereo images, which is merged into an evidential occupancy grid that models the uncertainties of an unknown urban environment, applying Dempster-Shafer theory. For decision-making in path planning, the virtual tentacle approach is applied to generate possible paths starting from the ego-referenced car, and based on it two new strategies are proposed: first, a new strategy to select the correct path, so as to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a new closed-loop control based on visual odometry and virtual tentacles for path-following execution. Finally, a complete automotive system integrating the perception, path-planning, and control modules is implemented and experimentally validated in real situations using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors. Doctorate: Solid Mechanics and Mechanical Design. Doctor of Mechanical Engineering.
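    A minimal sketch of the Dempster-Shafer combination underlying such an evidential grid, for a single cell with the frame of discernment {Free, Occupied}; the mass representation and the example values are illustrative, not the thesis's actual implementation.

```python
# Sketch: Dempster's rule of combination for one evidential grid cell.
# Masses are stored as (m_free, m_occupied, m_unknown), where m_unknown
# is the mass assigned to the whole frame (ignorance).

def combine_dempster(cell, measurement):
    """Fuse two mass assignments (m_F, m_O, m_Omega) with Dempster's rule."""
    f1, o1, u1 = cell
    f2, o2, u2 = measurement
    # Conflict: one source supports Free while the other supports Occupied.
    k = f1 * o2 + o1 * f2
    if k >= 1.0:
        return (0.0, 0.0, 1.0)  # total conflict: fall back to ignorance
    norm = 1.0 - k
    f = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    o = (o1 * o2 + o1 * u2 + u1 * o2) / norm
    u = (u1 * u2) / norm
    return (f, o, u)

# Example: a cell believed free meets a stereo measurement hinting occupied.
print(combine_dempster((0.6, 0.1, 0.3), (0.2, 0.5, 0.3)))
# -> roughly (0.53, 0.34, 0.13): belief shifts toward occupied but
#    residual ignorance is preserved.
```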

    AutonoVi: Autonomous Vehicle Planning with Dynamic Maneuvers and Traffic Constraints

    Full text link
    We present AutonoVi, a novel algorithm for autonomous vehicle navigation that supports dynamic maneuvers and satisfies traffic constraints and norms. Our approach is based on optimization-based maneuver planning that supports dynamic lane changes, swerving, and braking in all traffic scenarios and guides the vehicle to its goal position. We take into account various traffic constraints, including collision avoidance with other vehicles, pedestrians, and cyclists, using control velocity obstacles. We use a data-driven approach to model the vehicle dynamics for control and collision avoidance. Furthermore, our trajectory computation algorithm takes into account traffic rules and behaviors, such as stopping at intersections and stoplights, based on an arc-spline representation. We have evaluated our algorithm in a simulated environment and tested its interactive performance in urban and highway driving scenarios with tens of vehicles, pedestrians, and cyclists. These scenarios include jaywalking pedestrians, sudden stops from high speeds, safely passing cyclists, a vehicle suddenly swerving into the roadway, and high-density traffic where the vehicle must change lanes to progress more effectively.
    Comment: 9 pages, 6 figures
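    The control velocity obstacles mentioned above extend the classic velocity-obstacle test with vehicle dynamics. As a point of reference, here is a minimal sketch of the underlying geometric test for disc-shaped agents; the dynamics extension is omitted and all names are illustrative rather than the paper's implementation.

```python
import math

# Sketch: a candidate velocity is unsafe if the resulting relative motion
# brings the ego vehicle within the combined radius of an obstacle inside
# the planning horizon. Conventions: p_rel = obstacle position minus ego
# position; v_rel = obstacle velocity minus ego velocity.

def in_velocity_obstacle(p_rel, v_rel, radius, horizon=5.0):
    """True if the relative motion leads to a collision within `horizon` s."""
    px, py = p_rel
    vx, vy = v_rel
    # Solve |p_rel + t * v_rel| = radius for the earliest t >= 0.
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    if c <= 0.0:
        return True           # already within the combined radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return False          # trajectories never intersect
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return 0.0 <= t <= horizon

# Ego at 8 m/s toward a static obstacle 12 m ahead with 2 m lateral offset
# and a 2.5 m combined radius: collision at t ~ 1.3 s, so this velocity
# would be rejected before maneuver scoring.
safe = not in_velocity_obstacle((12.0, 2.0), (-8.0, 0.0), radius=2.5)
```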

    Online self-reconfigurable robot navigation in heterogeneous environments

    Get PDF
    This paper presents a robot navigation system capable of online self-reconfiguration according to the needs imposed by the various contexts present in heterogeneous environments. The ability to cope with heterogeneous environments is key for a robust deployment of service robots in truly demanding scenarios. In the proposed system, flexibility is present at the several layers composing the robot's navigation system. At the lowest layer, proper locomotion modes are selected according to the environment's local context; at the highest layer, proper motion and path planning strategies are selected according to the environment's global context. While the local context is obtained directly from the robot's sensory input, the global context is read from semantic labels registered off-line on geo-referenced maps. The proposed system builds on the well-known Robot Operating System (ROS) framework for the implementation of the major navigation system components. The system was successfully validated over approximately 1 km of experiments on INTROBOT, an all-terrain industrial-grade robot equipped with four independently steered wheels.
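    A minimal sketch of the two-layer selection this describes, with hypothetical context labels, planner names, and locomotion modes; the paper's actual vocabulary is not given in the abstract.

```python
# Sketch: the global context (from semantic labels on a geo-referenced
# map) selects the planning strategy, while the local context (from
# sensor data) selects the locomotion mode. All labels are hypothetical.

PLANNERS = {            # global context -> motion/path planning strategy
    "structured":   "lattice_planner",
    "open_terrain": "any_angle_planner",
}

LOCOMOTION = {          # local context -> locomotion mode
    "flat":   "ackermann",
    "rough":  "crab_steering",
    "narrow": "point_turn",
}

def reconfigure(global_label: str, local_label: str) -> tuple[str, str]:
    """Pick (planner, locomotion mode) for the current contexts,
    falling back to conservative defaults for unknown labels."""
    planner = PLANNERS.get(global_label, "any_angle_planner")
    mode = LOCOMOTION.get(local_label, "ackermann")
    return planner, mode

print(reconfigure("open_terrain", "rough"))  # ('any_angle_planner', 'crab_steering')
```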

    Proximal sensing mapping method to generate field maps in vineyards

    Get PDF
    [EN] An innovative methodology to generate vegetative vigor maps in vineyards (Vitis vinifera L.) has been developed and pre-validated. The proposed architecture integrates a Global Positioning System (GPS) receiver and a computer vision unit comprising a monocular charge-coupled device (CCD) camera equipped with an 8-mm lens and a pass-band near-infrared (NIR) filter, both mounted on a medium-size conventional agricultural tractor. The synchronization of the perception (camera) and localization (GPS) sensors allowed the creation of globally referenced regular grids, denominated universal grids, whose cells were filled with the estimated vegetative vigor of the monitored vines. Vine vigor was quantified as the relative percentage of vegetation automatically estimated by the onboard algorithm from the images captured with the camera. Validation tests compared spatial differences in vine vigor with yield differentials along the rows. The positive correlation between vigor and yield variations showed the potential of proximal sensing and the advantages of acquiring top-view images from conventional vehicles.
    Sáiz Rubio, V.; Rovira Más, F. (2013). Proximal sensing mapping method to generate field maps in vineyards. Agricultural Engineering International: CIGR Journal. 15(2):47-59. http://hdl.handle.net/10251/102750
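    A minimal sketch of the vigor estimation and grid assignment this describes, assuming a fixed NIR brightness threshold and a 1 m universal-grid cell; the threshold, cell size, and names are illustrative, not the paper's calibration.

```python
import numpy as np

# Sketch: vegetation reflects strongly in the near infrared, so bright
# pixels in the NIR image are counted as canopy. The per-image percentage
# is then stored in the globally referenced cell the frame maps to.

def vigor_percentage(nir: np.ndarray, threshold: int = 120) -> float:
    """Percentage of pixels classified as vegetation in an 8-bit NIR frame."""
    return 100.0 * float(np.count_nonzero(nir > threshold)) / nir.size

def cell_index(east: float, north: float, cell_size: float = 1.0):
    """Universal-grid cell index for an ENU position."""
    return int(east // cell_size), int(north // cell_size)

grid = {}
nir_image = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in frame
grid[cell_index(12.3, 48.7)] = vigor_percentage(nir_image)
```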

    Combining Satellite Images and Cadastral Information for Outdoor Autonomous Mapping and Navigation: A Proof-of-Concept Study in Citric Groves

    Get PDF
    The development of robotic applications for agricultural environments faces several problems which are not present in robotic systems for indoor environments. Some of these problems can be solved with an efficient navigation system. In this paper, a new system is introduced to improve navigation tasks for robots which operate in agricultural environments. Concretely, the paper focuses on the autonomous mapping of agricultural parcels (e.g., an orange grove). The map created by the system is used to help the robots navigate within the parcel to perform maintenance tasks such as weed removal, harvesting, or pest inspection. The proposed system connects to a satellite positioning service to obtain the real coordinates where the robotic system is placed. With these coordinates, the parcel information is downloaded from an online map service in order to autonomously obtain a map of the parcel in a format readable by the robot. Finally, path planning is performed by means of Fast Marching techniques, using either a single robot or a team of two robots. This paper introduces the proof of concept and describes all the necessary steps and algorithms to obtain the path planning just from the initial coordinates of the robot.
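    As a point of reference for the last step, here is a minimal sketch of Fast Marching path planning on a grid map using the scikit-fmm package; the synthetic map, the speed values, and the greedy path extraction are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import skfmm  # scikit-fmm; assumed installed (pip install scikit-fmm)

# Sketch: compute the travel time of a front expanding from the goal,
# then follow the time field downhill from the robot's start cell.
# The map here is synthetic; in the paper it is derived from satellite
# imagery and cadastral information.

grid = np.ones((100, 100))      # 1 = traversable, 0 = obstacle (tree rows)
grid[40:60, 20:80] = 0

phi = np.ones_like(grid)
phi[90, 90] = -1                # the front starts at the goal cell
speed = np.where(grid > 0, 1.0, 1e-6)  # near-zero speed inside obstacles
t = skfmm.travel_time(phi, speed)      # arrival time at every cell

# Greedy descent over 8-connected neighbors, from start toward the goal.
cur, path = (5, 5), [(5, 5)]
while t[cur] > 1.0:             # stop once the goal neighborhood is reached
    i, j = cur
    nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < 100 and 0 <= j + dj < 100]
    cur = min(nbrs, key=lambda c: t[c])
    path.append(cur)
```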

    Augmented Perception for Agricultural Robots Navigation

    Full text link
    [EN] Producing food in a sustainable way is becoming very challenging today due to the lack of skilled labor, the unaffordable costs of labor when available, and the limited returns for growers as a result of the low produce prices demanded by big supermarket chains, in contrast to the ever-increasing costs of inputs such as fuel, chemicals, seeds, or water. Robotics emerges as a technological advance that can counterweight some of these challenges, mainly in industrialized countries. However, the deployment of autonomous machines in open environments exposed to uncertainty and harsh ambient conditions poses an important challenge to reliability and safety. Consequently, a deep parametrization of the working environment in real time is necessary to achieve autonomous navigation. This article proposes a navigation strategy for guiding a robot along vineyard rows for field monitoring. Given that global positioning cannot be guaranteed permanently in any vineyard, the strategy is based on local perception and results from fusing three complementary technologies: 3D vision, lidar, and ultrasonics. Several perception-based navigation algorithms were developed between 2015 and 2019. After their comparison in real environments and conditions, results showed that the augmented perception derived from combining these three technologies provides a consistent basis for outlining the intelligent behavior of agricultural robots operating within orchards.
    This work was supported by the European Union Research and Innovation Programs under Grant N. 737669 and Grant N. 610953. The associate editor coordinating the review of this article and approving it for publication was Dr. Oleg Sergiyenko.
    Rovira Más, F.; Sáiz Rubio, V.; Cuenca-Cuenca, A. (2021). Augmented Perception for Agricultural Robots Navigation. IEEE Sensors Journal. 21(10):11712-11727. https://doi.org/10.1109/JSEN.2020.3016081
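    One simple way to realize such redundancy across complementary sensors, shown here only as an illustrative sketch: combine the row-offset estimates from the three technologies by inverse-variance weighting. The weighting rule and all values are assumptions; the abstract does not specify the paper's fusion scheme.

```python
# Sketch: fuse the lateral offset of the vine row estimated independently
# by 3D vision, lidar, and ultrasonics into one steering reference.

def fuse_offsets(estimates):
    """estimates: list of (offset_m, variance) pairs, one per sensor that
    currently returns a valid measurement; returns the fused offset."""
    valid = [(o, v) for o, v in estimates if v is not None and v > 0]
    if not valid:
        raise ValueError("no sensor produced a usable row offset")
    wsum = sum(1.0 / v for _, v in valid)
    return sum(o / v for o, v in valid) / wsum

# Stereo sees the canopy, lidar the trunks, ultrasonics the near row side;
# the most certain sensor (smallest variance) dominates the fused value.
offset = fuse_offsets([(0.32, 0.04), (0.28, 0.01), (0.40, 0.09)])
```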