5 research outputs found

    Enhanced free space detection in multiple lanes based on single CNN with scene identification

    Many systems for autonomous vehicle navigation rely on lane detection. Traditional algorithms usually estimate only the position of the lanes on the road, but an autonomous control system may also need to know whether a lane marking can be crossed, and what portion of space inside the lane is free from obstacles, to make safer control decisions. Free space detection algorithms, on the other hand, only detect navigable areas, without any information about lanes. State-of-the-art algorithms use CNNs for both tasks, with significant consumption of computing resources. We propose a novel approach that estimates the free space inside each lane with a single CNN. Additionally, at the cost of only a small amount of extra GPU RAM, we infer the road type, which is useful for path planning. To achieve this result, we train a multi-task CNN. We then further process the output of the network to extract polygons that can be used directly in navigation control. Finally, we provide a computationally efficient implementation, based on ROS, that can be executed in real time. Our code and trained models are available online.

    Comment: Will appear in the 2019 IEEE Intelligent Vehicles Symposium (IV 2019).
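    The abstract does not detail how polygons are extracted from the network output. Below is a minimal sketch of one plausible post-processing step, assuming a binary free-space mask per lane and OpenCV's contour utilities; the function name mask_to_polygons and the epsilon_frac parameter are illustrative assumptions, not taken from the paper.

    import cv2
    import numpy as np

    def mask_to_polygons(mask, epsilon_frac=0.01):
        """Convert a binary free-space mask (H x W, values 0/1) into
        simplified polygons usable by a navigation controller.

        epsilon_frac controls how aggressively each contour is simplified,
        as a fraction of its perimeter (illustrative default).
        """
        # Find the outer contours of the navigable regions.
        contours, _ = cv2.findContours(
            mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        polygons = []
        for contour in contours:
            # Simplify with the Douglas-Peucker algorithm.
            epsilon = epsilon_frac * cv2.arcLength(contour, closed=True)
            poly = cv2.approxPolyDP(contour, epsilon, closed=True)
            polygons.append(poly.reshape(-1, 2))  # (x, y) vertex list
        return polygons

    # Toy example: a 100x100 mask with one free rectangular region.
    mask = np.zeros((100, 100), dtype=np.uint8)
    mask[40:90, 20:80] = 1
    for poly in mask_to_polygons(mask):
        print(poly)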

    Unification of road scene segmentation strategies using multistream data and latent space attention

    DATA AVAILABILITY STATEMENT: Two datasets are referenced in this paper. The Cityscapes dataset is available in the Cityscapes web repository [21]. The CARLA dataset was custom-recorded from the CARLA simulator [44] and can be obtained from the first author upon request. The main training scripts that were used to create the road scene segmentation model will be made available with this paper.

    Road scene understanding, as a field of research, has attracted increasing attention in recent years. The development of road scene understanding capabilities that are applicable to real-world road scenarios has seen numerous complications, largely due to the cost and complexity of achieving human-level scene understanding, at which road scene elements can be segmented with a mean intersection over union score close to 1.0. There is a need for a more unified approach to road scene segmentation for use in self-driving systems. Previous works have demonstrated how deep learning methods can be combined to improve the segmentation and perception performance of road scene understanding systems. This paper proposes a novel segmentation system that uses fully connected networks, attention mechanisms, and multiple-input data stream fusion to improve segmentation performance. Results show performance comparable to previous works, with a mean intersection over union of 87.4% on the Cityscapes dataset.

    Funding was provided by the Centre for Connected Intelligence (CCI) at the University of Pretoria (UP), and the APC was partially funded by CCI and UP.

    https://www.mdpi.com/journal/sensors
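    For reference, mean intersection over union (the metric behind the 87.4% figure) is conventionally computed per class from a confusion matrix and then averaged over classes. A minimal NumPy sketch of the standard computation follows; it is not the paper's code.

    import numpy as np

    def mean_iou(conf_matrix):
        """Compute mean IoU from a C x C confusion matrix, where
        conf_matrix[i, j] counts pixels of true class i predicted
        as class j."""
        intersection = np.diag(conf_matrix)
        union = (conf_matrix.sum(axis=0) + conf_matrix.sum(axis=1)
                 - intersection)
        # Skip classes that never appear, to avoid division by zero.
        valid = union > 0
        iou = intersection[valid] / union[valid]
        return iou.mean()

    # Toy example with 3 classes.
    cm = np.array([[50,  2,  3],
                   [ 4, 40,  1],
                   [ 2,  2, 45]])
    print(f"mIoU: {mean_iou(cm):.3f}")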