
    Multi-level decision framework collision avoidance algorithm in emergency scenarios

    With the rapid development of autonomous driving, academic attention has increasingly focused on anti-collision systems for emergency scenarios, which have a crucial impact on driving safety. While numerous anti-collision strategies have emerged in recent years, most of them consider only steering or only braking. The dynamic and complex nature of the driving environment makes it challenging to develop robust collision avoidance algorithms for emergency scenarios. To address complex, dynamic obstacle scenes and improve lateral maneuverability, this paper establishes a multi-level decision-making obstacle avoidance framework that employs a safe distance model and integrates emergency steering with emergency braking to complete the avoidance process. This approach avoids the high-risk vehicle instability that can result from separating steering and braking actions. In the emergency steering algorithm, we define the collision hazard moment and propose a multi-constraint dynamic collision avoidance planning method that considers the driving area. Simulation results demonstrate that the decision-making collision avoidance logic can be applied to dynamic collision avoidance scenarios in complex traffic situations, effectively completing the obstacle avoidance task in emergency scenarios and improving the safety of autonomous driving.
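    The core idea of a multi-level decision framework of this kind can be sketched as a comparison of safe distances: brake if a full stop fits in the remaining gap, steer if a lane change fits, and combine both otherwise. The models and parameters below (friction coefficient, reaction time, lateral acceleration bound) are illustrative assumptions, not the paper's actual formulation.

```python
import math

def braking_distance(v, mu=0.8, g=9.81, t_react=0.2):
    """Distance to brake to a stop at the friction limit, plus reaction delay.
    mu, g and t_react are assumed illustrative values."""
    return v * t_react + v ** 2 / (2 * mu * g)

def steering_distance(v, lateral_offset, a_lat_max=4.0):
    """Longitudinal distance covered while shifting laterally by
    `lateral_offset` metres under a constant lateral-acceleration bound
    (a simple kinematic stand-in for the paper's planning constraints)."""
    t_lat = math.sqrt(2 * lateral_offset / a_lat_max)
    return v * t_lat

def choose_maneuver(v, gap, lateral_offset):
    """Multi-level decision at the collision hazard moment: brake if the
    gap allows a full stop, steer if a lane change fits, else do both."""
    if gap >= braking_distance(v):
        return "brake"
    if gap >= steering_distance(v, lateral_offset):
        return "steer"
    return "steer+brake"
```

    At 30 m/s with a 45 m gap, for example, the braking distance (about 63 m) no longer fits, so the rule falls through to the steering check; this is the kind of switch between maneuvers that keeps steering and braking from being treated in isolation.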

    Online Mapping-Based Navigation System for Wheeled Mobile Robot in Road Following and Roundabout

    This chapter presents road mapping and feature extraction for mobile robot navigation in road-following and roundabout environments. Online mapping based on sensor fusion is used to extract road characteristics, such as road curbs, road borders, and roundabouts, which are then used by a path planning algorithm to move the robot from a start position to a predetermined goal. The sensor fusion combines several sensors, namely a laser range finder (LRF), a camera, and odometry, on a new wheeled mobile robot prototype to determine the optimum path and localize the robot within its environment. Local maps are built using image preprocessing and processing algorithms together with a threshold applied to the LRF signal to recognize road environment parameters such as curbs, width, and roundabouts. Path planning in the road environment is accomplished with a novel approach, the so-called Laser Simulator, which finds a trajectory in the local maps developed by sensor fusion. Results show that the wheeled mobile robot can effectively recognize road environments, build local maps, and find a path in both road-following and roundabout settings.
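    A thresholded LRF scan of the kind described can pick out curbs as range discontinuities between neighbouring beams. The function below is a minimal sketch of that idea; the jump threshold and the scan values are illustrative assumptions, not the chapter's actual processing chain.

```python
def detect_curb_indices(ranges, jump_threshold=0.3):
    """Return scan indices where consecutive LRF ranges jump by more than
    `jump_threshold` metres -- a simple stand-in for thresholded LRF
    processing that flags road curbs and borders."""
    return [i for i in range(1, len(ranges))
            if abs(ranges[i] - ranges[i - 1]) > jump_threshold]

# A smooth road surface interrupted by a step up to a curb:
scan = [2.00, 2.05, 2.10, 3.00, 3.02]
curbs = detect_curb_indices(scan)
```

    In practice the flagged indices would be converted to robot-frame coordinates via the beam angles and fused with the camera-derived map before path planning.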

    Infrastructure Enabled Autonomy Acting as an Intelligent Transportation System for Autonomous Cars

    Autonomous cars have the ability to increase the safety, efficiency, and speed of travel. Yet many foresee a point at which stand-alone autonomous agents populate an area too densely, creating increased risk, particularly when each agent operates and makes decisions on its own and in its own self-interest. The problem then becomes how best to implement and scale this new technology so that it keeps pace with a rapidly changing world, benefiting not just individuals but societies. This research approaches the challenge by developing an intelligent transportation system that relies on infrastructure. The solution lies in removing sensing and high computational tasks from the vehicles, allowing static ground stations with multi-sensor sensing packs (MSSPs) to sense the surrounding environment and direct the vehicles safely from start to goal. At a high level, the Infrastructure Enabled Autonomy (IEA) system uses less hardware, bandwidth, energy, and money to maintain a controlled environment for a vehicle operating in highly congested environments. Through the development of background detection algorithms, this research has shown the advantage of static MSSPs analyzing the same environment over time, carrying increased reliability from fewer unknowns about the area of interest. Testing determined that wireless commands can sufficiently operate a vehicle in a limited-agent environment and do not bottleneck the system. The horizontal trial showed that the switching-MSSP state of the IEA system had a similar loop time but a greatly increased standard deviation; however, a t-test at the 95 percent confidence level found no significant difference between the static and switching MSSP trials. The final testing quantified the cross-track error: for a straight path, the vehicle controlled by the IEA system had a cross-track error of less than 12 centimeters, meaning that between the controller, network lag, and pixel error, the system was robust enough to generate stable control of the vehicle with minimal error.
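    The static-vs-switching comparison reported above is a standard two-sample t-test on loop times. As a sketch of that analysis step, the snippet below computes Welch's t statistic and checks it against the roughly 2.0 critical value of a 95 percent interval; the loop-time samples are made-up illustrative numbers, not the trial data.

```python
import statistics as st

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal variances), the form of
    test used to compare static and switching MSSP loop times."""
    ma, mb = st.mean(sample_a), st.mean(sample_b)
    va, vb = st.variance(sample_a), st.variance(sample_b)
    return (ma - mb) / ((va / len(sample_a) + vb / len(sample_b)) ** 0.5)

# Hypothetical loop times (seconds): similar mean, larger spread when switching.
static_loops = [1.02, 0.98, 1.01, 0.99, 1.00]
switching_loops = [1.04, 0.93, 1.08, 0.96, 1.01]

t = welch_t(static_loops, switching_loops)
# |t| well below ~2.0 -> no significant difference at the 95% level,
# matching the paper's conclusion despite the larger standard deviation.
```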

    LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks

    In this work, a deep learning approach is developed to carry out road detection by fusing LIDAR point clouds and camera images. An unstructured and sparse point cloud is first projected onto the camera image plane and then upsampled to obtain a set of dense 2D images encoding spatial information. Several fully convolutional neural networks (FCNs) are then trained to carry out road detection, either using data from a single sensor or using one of three fusion strategies: early, late, and the newly proposed cross fusion. Whereas the former two fusion approaches integrate multimodal information at a predefined depth level, the cross fusion FCN is designed to learn directly from data where to integrate information; this is accomplished by using trainable cross connections between the LIDAR and camera processing branches. To further highlight the benefits of a multimodal system for road detection, a data set of visually challenging scenes was extracted from driving sequences of the KITTI raw data set. It was then demonstrated that, as expected, a purely camera-based FCN severely underperforms on this data set, whereas a multimodal system is still able to provide high accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI road benchmark, where it achieved excellent performance with a MaxF score of 96.03%, ranking it among the top-performing approaches.

    Drone Obstacle Avoidance and Navigation Using Artificial Intelligence

    This thesis presents the implementation and integration of a robust obstacle avoidance and navigation module with ArduPilot. It examines the problems in the current obstacle avoidance solution and mitigates them with a new design. Given recent innovations in artificial intelligence, it also explores opportunities to enable and improve obstacle avoidance and navigation using AI techniques. Because the implementation requires an understanding of the different types of sensors used for both navigation and obstacle avoidance, a study of these sensors is presented as background. Research on an autonomous car is included to better understand autonomy and how it solves the problems of obstacle avoidance and navigation. The implementation part of the thesis focuses on the design of a robust obstacle avoidance module, tested with obstacle avoidance sensors such as the Garmin LIDAR and the RealSense R200. Image segmentation is used to verify the possibility of using a convolutional neural network to better understand the nature of obstacles. Similarly, end-to-end control from a single camera input using a deep neural network is used to verify the possibility of using AI for navigation. In the end, a robust obstacle avoidance library is developed and tested both in the simulator and on a real drone; image segmentation is implemented, deployed, and tested; and the possibility of end-to-end control is verified with a proof of concept.
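    At its simplest, a rangefinder-based avoidance library of the kind described enforces a reactive rule on the forward range reading. The thresholds below are illustrative assumptions, not values from the thesis.

```python
def avoidance_command(front_range_m, stop_dist=2.0, slow_dist=5.0):
    """Minimal reactive avoidance rule driven by a forward rangefinder
    (e.g. a single-beam lidar): stop inside `stop_dist`, slow down
    inside `slow_dist`, otherwise proceed. Thresholds are assumed."""
    if front_range_m < stop_dist:
        return "stop"
    if front_range_m < slow_dist:
        return "slow"
    return "proceed"
```

    A real module would run this check in the flight-control loop and arbitrate it against the navigation setpoints rather than returning a label.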

    Autonomous control of underground mining vehicles using reactive navigation

    This paper describes how many of the navigation techniques developed by the robotics research community over the last decade may be applied to a class of underground mining vehicles (LHDs and haul trucks). We review the current state of the art in this area and conclude that there are essentially two basic methods of navigation applicable. We describe an implementation of a reactive navigation system on a 30-tonne LHD, which has achieved full-speed operation at a production mine.