123 research outputs found

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, human operators are still required to monitor the environment and act upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments have been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets representing a wide range of realistic agricultural environments, including both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were observed as spatial, temporal, and multi-modal relationships were introduced into the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied to mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated by the release of the multi-modal obstacle dataset, FieldSAFE.
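
    As a rough illustration of the occupancy grid mapping step described above, the sketch below fuses per-frame obstacle detections into a global 2D log-odds grid. It is a minimal sketch, not the thesis implementation; the grid size, resolution, and update weights are illustrative assumptions.

```python
import numpy as np

class OccupancyGrid:
    """Minimal 2D log-odds occupancy grid for fusing per-frame detections."""

    def __init__(self, shape=(500, 500), resolution=0.1):
        self.log_odds = np.zeros(shape)        # 0 = unknown (p = 0.5)
        self.resolution = resolution           # metres per cell
        self.l_hit, self.l_miss = 0.85, -0.4   # illustrative update weights
        self.l_min, self.l_max = -4.0, 4.0     # clamp to avoid saturation

    def update(self, xy_hits, xy_misses):
        """Integrate one sensor frame: obstacle cells and observed-free cells.
        Coordinates are assumed already in the global frame and in bounds."""
        for x, y in xy_hits:
            i, j = int(x / self.resolution), int(y / self.resolution)
            self.log_odds[i, j] = np.clip(self.log_odds[i, j] + self.l_hit,
                                          self.l_min, self.l_max)
        for x, y in xy_misses:
            i, j = int(x / self.resolution), int(y / self.resolution)
            self.log_odds[i, j] = np.clip(self.log_odds[i, j] + self.l_miss,
                                          self.l_min, self.l_max)

    def obstacle_mask(self, p_thresh=0.7):
        """Cells whose occupancy probability exceeds the detection threshold."""
        return 1.0 / (1.0 + np.exp(-self.log_odds)) > p_thresh
```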

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation.

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs, and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians, and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
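
    To illustrate the unification idea, the sketch below derives several assistive cues (traversable area, terrain hazards, nearby dynamic obstacles) from a single per-pixel label map plus depth, instead of running separate detectors. The class IDs and the 3 m proximity threshold are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical class IDs produced by a unified segmentation model.
TRAVERSABLE = [0, 1]    # e.g. sidewalk, terrain
HAZARD      = [5, 6]    # e.g. stairs, water
DYNAMIC     = [10, 11]  # e.g. pedestrian, vehicle

def terrain_awareness(label_map, depth_map, near_m=3.0):
    """Derive all navigation cues from one segmentation pass plus depth.

    label_map: (H, W) integer class map; depth_map: (H, W) metric depth.
    Returns boolean masks for traversable area, hazards, and close
    dynamic obstacles.
    """
    traversable = np.isin(label_map, TRAVERSABLE)
    hazards     = np.isin(label_map, HAZARD)
    # Dynamic classes only matter when close; fuse with metric depth.
    approaching = np.isin(label_map, DYNAMIC) & (depth_map < near_m)
    return traversable, hazards, approaching
```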

    Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts

    In this research, adaptive perception for driving automation is discussed, enabling a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts, where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both are prone to fail due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system is also highly flexible, as it can work with a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
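
    A minimal sketch of the self-learning idea, assuming a Gaussian colour model of the ground class that is trained from pixels another modality (radar or stereo range data) has already labelled as ground, and updated online with a forgetting factor. The update rule and thresholds are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

class OnlineGroundModel:
    """Gaussian colour model of 'ground', updated online via self-supervision.

    Training pixels come from regions that radar or stereo vision has already
    labelled as ground, so no manual annotation is required.
    """

    def __init__(self, alpha=0.05):
        self.mean = None
        self.cov = None
        self.alpha = alpha                       # forgetting factor

    def update(self, ground_pixels):
        """ground_pixels: (N, 3) RGB samples labelled ground by range data."""
        m = ground_pixels.mean(axis=0)
        c = np.cov(ground_pixels.T) + 1e-6 * np.eye(3)
        if self.mean is None:
            self.mean, self.cov = m, c
        else:                                    # exponential moving update
            self.mean = (1 - self.alpha) * self.mean + self.alpha * m
            self.cov = (1 - self.alpha) * self.cov + self.alpha * c

    def classify(self, pixels, thresh=9.0):
        """Mahalanobis test: True where an (M, 3) pixel matches the model."""
        d = pixels - self.mean
        inv = np.linalg.inv(self.cov)
        return np.einsum('ij,jk,ik->i', d, inv, d) < thresh
```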

    Radar-Only Off-Road Local Navigation

    Off-road robotics has traditionally utilized lidar for local navigation due to its accuracy and high resolution. However, the limitations of lidar, such as reduced performance in harsh environmental conditions and limited range, have prompted the exploration of alternative sensing technologies. This paper investigates the potential of radar for off-road local navigation, as it offers the advantages of a longer range and the ability to penetrate dust and light vegetation. We adapt existing lidar-based methods for radar and evaluate the performance in comparison to lidar under various off-road conditions. We show that radar can provide a significant range advantage over lidar while maintaining accuracy for both ground plane estimation and obstacle detection. Finally, we demonstrate successful autonomous navigation at a speed of 2.5 m/s over a path length of 350 m using only radar for ground plane estimation and obstacle detection.
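
    A minimal sketch of radar-only ground plane estimation and obstacle detection in the spirit of the paper: a RANSAC plane fit over the radar point cloud followed by a height-above-plane test. The inlier and height thresholds are illustrative; the paper's adapted lidar methods may differ.

```python
import numpy as np

def fit_ground_plane(points, iters=200, inlier_dist=0.2, rng=None):
    """RANSAC plane fit over (N, 3) radar returns; returns (normal, d)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                             # degenerate sample
        n = n / norm
        d = -n @ sample[0]
        inliers = (np.abs(points @ n + d) < inlier_dist).sum()
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def obstacles(points, plane, min_height=0.4):
    """Points sufficiently far above the fitted ground plane."""
    n, d = plane
    if n[2] < 0:                                 # orient normal upwards
        n, d = -n, -d
    return points[(points @ n + d) > min_height]
```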

    Multitask Learning for Scalable and Dense Multilayer Bayesian Map Inference

    This article presents a novel and flexible multitask multilayer Bayesian mapping framework with readily extendable attribute layers. The proposed framework goes beyond modern metric-semantic maps to provide even richer environmental information for robots in a single mapping formalism, while exploiting intralayer and interlayer correlations. It removes the need for a robot to access and process information from many separate maps when performing a complex task, advancing the way robots interact with their environments. To this end, we design a multitask deep neural network with attention mechanisms as our front-end to provide heterogeneous observations for multiple map layers simultaneously. Our back-end runs a scalable closed-form Bayesian inference with only logarithmic time complexity. We apply the framework to build a dense robotic map including metric-semantic occupancy and traversability layers. Traversability ground-truth labels are automatically generated from exteroceptive sensory data in a self-supervised manner. We present extensive experimental results on publicly available datasets and on 3D data collected by a bipedal robot platform, and show reliable mapping performance in different environments. Finally, we discuss how the current framework can be extended to incorporate further information, such as friction, signal strength, temperature, and physical quantity concentration, using Gaussian map layers. The software for reproducing the presented results or running on customized data is made publicly available.
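
    The closed-form Bayesian back-end suggests conjugate per-cell updates. The sketch below shows one plausible layer of such a multilayer map, a Dirichlet-categorical semantic layer whose conjugacy gives constant-time, closed-form updates; it is an assumption-laden illustration, not the article's exact formulation.

```python
import numpy as np

class SemanticLayer:
    """Per-cell Dirichlet-categorical semantic layer.

    Conjugacy makes each update a simple addition of pseudo-counts, so
    fusing a stream of observations stays cheap as the map grows.
    """

    def __init__(self, shape, num_classes, prior=1.0):
        # shape is a tuple of grid dimensions, e.g. (200, 200).
        # Dirichlet pseudo-counts start at the prior for every class.
        self.alpha = np.full(shape + (num_classes,), prior)

    def update(self, cell_idx, class_probs):
        """Fuse one (soft) semantic observation into a cell: alpha += p."""
        self.alpha[cell_idx] += class_probs

    def posterior(self, cell_idx):
        """Posterior class probabilities for a cell (Dirichlet mean)."""
        a = self.alpha[cell_idx]
        return a / a.sum()

# Usage: fuse two noisy observations of cell (3, 4) over 3 classes.
layer = SemanticLayer((10, 10), num_classes=3)
layer.update((3, 4), np.array([0.7, 0.2, 0.1]))
layer.update((3, 4), np.array([0.6, 0.3, 0.1]))
print(layer.posterior((3, 4)))
```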

    A comprehensive survey of unmanned ground vehicle terrain traversability for unstructured environments and sensor technology insights

    This article provides a detailed analysis of the assessment of unmanned ground vehicle terrain traversability. The analysis is categorized into terrain classification, terrain mapping, and cost-based traversability, with subcategories of appearance-based, geometry-based, and mixed-based methods. The article also explores the use of machine learning (ML), deep learning (DL), reinforcement learning (RL), and other end-to-end methods as crucial components for advanced terrain traversability analysis. The investigation indicates that a mixed approach, incorporating both exteroceptive and proprioceptive sensors, is more effective, optimized, and reliable for traversability analysis. Additionally, the article discusses the vehicle platforms and sensor technologies used in traversability analysis, making it a valuable resource for researchers in the field. Overall, this paper contributes significantly to the current understanding of traversability analysis in unstructured environments and provides insights for future sensor-based research on advanced traversability analysis.
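
    As a concrete instance of the survey's mixed-based, cost-based category, the sketch below blends geometric cues (slope, roughness) with an appearance-based risk estimate into a single per-cell traversability cost. The weights and normalisation are hypothetical and would normally be tuned or learned per platform.

```python
import numpy as np

def traversability_cost(slope_deg, roughness, appearance_risk,
                        w=(0.4, 0.3, 0.3), slope_max=30.0):
    """Blend normalised geometric and appearance cues into a cost in [0, 1].

    slope_deg, roughness, appearance_risk: per-cell arrays of the same
    shape; roughness and appearance_risk are assumed pre-normalised.
    """
    slope_cost = np.clip(slope_deg / slope_max, 0.0, 1.0)
    rough_cost = np.clip(roughness, 0.0, 1.0)
    appear_cost = np.clip(appearance_risk, 0.0, 1.0)
    return w[0] * slope_cost + w[1] * rough_cost + w[2] * appear_cost
```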

    Vision based environment perception system for next generation off-road ADAS : innovation report

    Advanced Driver Assistance Systems (ADAS) aid the driver by providing information or automating driving-related tasks to improve driver comfort, reduce workload, and improve safety. The vehicle senses its external environment using sensors, building a representation of the world used by the control systems. In on-road applications, perception focuses on establishing the location of other road participants, such as vehicles and pedestrians, and identifying the road trajectory. Perception in the off-road environment is more complex, as the structure found in urban environments is absent. Off-road perception deals with the estimation of surface topography and surface type, which are the factors that affect vehicle behaviour in unstructured environments. Off-road perception has seldom been explored in an automotive context. For autonomous off-road driving, existing perception solutions are primarily related to robotics and are not directly applicable in the ADAS domain, due to the different goals of unmanned autonomous systems, their complexity, and the cost of the employed sensors. Such applications consider only the impact of the terrain on vehicle safety and progress, but do not account for driver comfort and assistance. This work addresses the problem of processing vision sensor data to extract the required information about the terrain, focusing on the perception task under the constraints of automotive sensors and the requirements of ADAS systems. By providing a semantic representation of the off-road environment, including terrain attributes such as terrain type, a description of the terrain topography, and surface roughness, the perception system can cater for the requirements of the next generation of off-road ADAS proposed by Land Rover.

    Firstly, a novel and computationally efficient terrain recognition method was developed. The method facilitates real-time recognition of low-friction grass surfaces with high accuracy by applying a machine-learning Support Vector Machine to illumination-invariant normalised RGB colour descriptors. The proposed method was analysed and its performance evaluated experimentally in off-road environments. Terrain recognition performance was evaluated on a variety of surface types, including grass, gravel, and tarmac, showing high grass detection performance with an accuracy of 97%.

    Secondly, a terrain geometry identification method was proposed, which facilitates a semantic representation of the terrain in terms of macro terrain features such as slopes, crests, and ditches. The method processes 3D information reconstructed from stereo imagery and constructs a compact grid representation of the surface topography, which is further processed to extract object representations of slopes, ditches, and crests.

    Thirdly, a novel method for surface roughness identification was proposed. Surface roughness is described by the power spectral density of the surface profile, which correlates with the acceleration experienced by the vehicle. The roughness descriptor is mapped onto a vehicle speed recommendation, so that the speed of the vehicle can be adapted in anticipation of the surface roughness to maintain passenger comfort.

    Terrain geometry and surface roughness identification performance were evaluated on a range of off-road courses with varying topology, showing the capability of the system to correctly identify terrain features up to 20 m ahead of the vehicle and analyse surface roughness up to 15 m ahead. Speed was recommended correctly within +/- 5 kph. Further, the impact of the perception system on speed adaptation was evaluated, showing improvements that allow for greater passenger comfort. The developed perception components facilitated the development of new off-road ADAS and were successfully applied in prototype vehicles. The proposed off-road ADAS are planned for introduction in future generations of Land Rover products. The benefits of this research also include new Intellectual Property generated for Jaguar Land Rover. In the wider context, the enhanced off-road perception capability may facilitate further development of off-road automated driving and off-road autonomy within the constraints of the automotive platform.
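
    A sketch of the first contribution's approach, grass recognition from illumination-invariant normalised RGB descriptors with an SVM, assuming scikit-learn is available. The descriptor averaging, kernel settings, and toy training data are illustrative, not the report's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def normalised_rgb(patch):
    """Illumination-invariant chromaticity descriptor for an image patch:
    each channel is divided by R+G+B, then averaged over the patch."""
    rgb = patch.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-6
    return (rgb / s).mean(axis=0)

def train_grass_classifier(patches, labels):
    """patches: list of (H, W, 3) arrays; labels: 1 = grass, 0 = other."""
    X = np.array([normalised_rgb(p) for p in patches])
    return SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, np.asarray(labels))

# Toy example with synthetic greenish (grass) vs greyish (tarmac) patches.
rng = np.random.default_rng(0)
grass = [rng.integers(0, 60, (8, 8, 3)) + np.array([10, 120, 20])
         for _ in range(20)]
tarmac = [rng.integers(60, 120, (8, 8, 3)) for _ in range(20)]
clf = train_grass_classifier(grass + tarmac, [1] * 20 + [0] * 20)
print(clf.predict([normalised_rgb(grass[0])]))   # expect [1]
```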

    Vision based obstacle detection for all-terrain robots

    Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering. This dissertation presents a solution to the problem of obstacle detection in all-terrain environments, with particular interest for mobile robots equipped with a stereo vision sensor. Despite the advantages of vision over other kinds of sensors, such as low cost, light weight, and a reduced energy footprint, its usage still presents a series of challenges. These include the difficulty of dealing with the considerable amount of generated data, and the robustness required to manage high levels of noise. Such problems can be diminished by making hard assumptions, such as considering that the terrain in front of the robot is planar. Although this saves considerable computation, such simplifications are not necessarily acceptable in more complex environments, where the terrain may be considerably uneven. This dissertation proposes to extend a well-known obstacle detector that relaxes the aforementioned planar-terrain assumption, rendering it more adequate for unstructured environments. The proposed extensions involve: (1) the introduction of a visual saliency mechanism to focus the detection on regions most likely to contain obstacles; (2) voting filters to diminish sensitivity to noise; and (3) the fusion of the detector with a complementary method to create a hybrid, and thus more robust, solution. Experimental results obtained with demanding all-terrain images show that, with the proposed extensions, an increase in robustness and computational efficiency over the original algorithm is observed.
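
    The class of detector that relaxes the planar-terrain assumption is consistent with slope-based point-pair compatibility tests over the stereo-reconstructed 3D points. The sketch below implements such a test under that assumption, with illustrative thresholds and a brute-force search standing in for the detector's optimised image-neighbourhood scan.

```python
import numpy as np

def compatible(p, q, h_min=0.1, h_max=1.5, slope_min_deg=45.0):
    """Two 3D points support the same obstacle if the segment joining them
    is steep enough (near-vertical) and its height span is plausible."""
    dz = abs(p[2] - q[2])
    if not (h_min <= dz <= h_max):
        return False
    planar = np.hypot(p[0] - q[0], p[1] - q[1])
    slope = np.degrees(np.arctan2(dz, planar + 1e-9))
    return slope > slope_min_deg

def detect_obstacles(points):
    """Flag points that form at least one compatible (steep) pair.
    Quadratic brute force for clarity; practical detectors restrict the
    search to a local neighbourhood in the image."""
    flags = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if compatible(points[i], points[j]):
                flags[i] = flags[j] = True
    return flags
```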

    Assessment of simulated and real-world autonomy performance with small-scale unmanned ground vehicles

    Off-road autonomy is a challenging topic that requires robust systems to both understand and navigate complex environments. While on-road autonomy has seen a major expansion in the consumer space in recent years, off-road systems are mostly relegated to niche applications. However, those applications can provide safety and navigation in dangerous areas that are the best suited to autonomy tasks. Traversability analysis is at the core of many of the algorithms employed in this area. In this thesis, a Clearpath Robotics Jackal vehicle is equipped with a 3D Ouster laser scanner to characterize and traverse off-road environments. The Mississippi State University Autonomous Vehicle Simulator (MAVS) and the Navigating All Terrains Using Robotic Exploration (NATURE) autonomy stack are used in conjunction with the small-scale vehicle platform to traverse uneven terrain and collect data. Additionally, the NATURE stack is used as a point of comparison between a MAVS-simulated and a physical Clearpath Robotics Jackal vehicle in testing.

    Improving the mobility performance of autonomous unmanned ground vehicles by adding the ability to 'Sense/Feel' their local environment.

    This paper follows on from earlier work detailed in output one and critically reviews the sensor technologies used in autonomous vehicles, including robots, to ascertain the physical properties of the environment, including terrain sensing. The paper reports on a comprehensive study of terrain types, how these can be determined, and the appropriate sensor technologies for doing so. It also reports on work in progress applying these sensor technologies, and gives details of a prototype system built at Middlesex University on a reconfigurable mobility system, demonstrating the success of the proposed strategies. This full paper was subject to a blind refereed review process and was presented at the 12th HCI International 2007 in Beijing, China, which incorporated 8 other international thematic conferences, involved over 250 parallel sessions, and was attended by 2000 delegates. The paper is published in the proceedings of the Second International Conference on Virtual Reality (ICVR 2007), held as part of HCI International 2007, Beijing, China, July 22-27, 2007, a collection of 81 papers in Springer's Lecture Notes in Computer Science (LNCS) series. The proceedings form part of a 17-volume paperback edition and are available online through the LNCS Digital Library, readily accessible by subscribing libraries around the world.