
    Towards End-to-End Lane Detection: an Instance Segmentation Approach

    Modern cars incorporate an increasing number of driver-assist features, among which is automatic lane keeping. The latter allows the car to position itself properly within the road lanes, which is also crucial for any subsequent lane-departure or trajectory-planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, which are computationally expensive and do not scale well across road-scene variations. More recent approaches leverage deep learning models trained for pixel-wise lane segmentation; thanks to their large receptive field, these models can detect lanes even when no markings are present in the image. Despite their advantages, such methods are limited to detecting a pre-defined, fixed number of lanes, e.g. the ego-lanes, and cannot cope with lane changes. In this paper, we go beyond the aforementioned limitations and propose to cast the lane detection problem as an instance segmentation problem - in which each lane forms its own instance - that can be trained end-to-end. To parametrize the segmented lane instances before fitting the lane, we further propose to apply a learned perspective transformation, conditioned on the image, in contrast to a fixed "bird's-eye view" transformation. By doing so, we ensure a lane fitting which is robust against road-plane changes, unlike existing approaches that rely on a fixed, pre-defined transformation. In summary, we propose a fast lane detection algorithm, running at 50 fps, which can handle a variable number of lanes and cope with lane changes. We verify our method on the tuSimple dataset and achieve competitive results.
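    The "warp to a top view, then fit" step described above can be sketched as follows. This is a minimal illustration, not the paper's code: the homography `H` is simply given here (in the paper it is predicted per image by a network), and the polynomial degree and function names are assumptions for the example.

```python
import numpy as np

def fit_lane_in_birdseye(lane_pixels, H, degree=2):
    """Fit one segmented lane instance with a polynomial after a
    perspective transform into a top-down view.

    lane_pixels: (N, 2) array of (x, y) image coordinates belonging to
    one lane instance (e.g. the pixels of one predicted instance mask).
    H: 3x3 homography; given here, learned per-image in the paper.
    """
    # Lift to homogeneous coordinates and apply the homography.
    pts = np.hstack([lane_pixels, np.ones((len(lane_pixels), 1))])
    warped = pts @ H.T
    warped = warped[:, :2] / warped[:, 2:3]
    # In a top-down view lanes are near-vertical, so fit x as a
    # polynomial in y rather than the other way around.
    return np.polyfit(warped[:, 1], warped[:, 0], degree)

# Toy check: identity homography, perfectly straight lane at x = 100.
lane = np.array([[100.0, float(y)] for y in range(10)])
coeffs = fit_lane_in_birdseye(lane, np.eye(3), degree=1)
```

    Fitting in the warped frame is what makes the final curve parameters meaningful for steering: a poor homography (e.g. one assuming a flat road on a slope) distorts the fit, which motivates conditioning the transform on the image.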

    Lane marking detection using simple encode decode deep learning technique: SegNet

    In recent times, many innocent people have lost their lives in road accidents, which also cause substantial financial losses. Researchers have deployed advanced driver assistance systems (ADAS), incorporating a large number of automated features into modern vehicles to reduce both human mortality and financial loss, and lane marking detection is one of them. Many computer vision techniques and intricate image processing approaches have been used to detect lane markings using hand-crafted, highly specialized features. However, such systems struggle with computational complexity, overfitting, low accuracy, and an inability to cope with complex environmental conditions. Therefore, this paper proposes a simple encode-decode deep learning model to detect lane markings under distinct environmental conditions with lower computational complexity. The model is based on the SegNet architecture, aiming to improve on existing work, and is trained on a lane marking dataset covering complex environmental conditions such as rain, clouds, low light, and curved roads. The model achieves 96.38% accuracy, a false positive rate of 0.0311, a false negative rate of 0.0201, and an F1 score of 0.960, with a loss of only 1.45%, less overfitting, and 428 ms per step, outstripping several existing approaches. It is expected that this research will make a significant contribution to the field of lane marking detection.
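    The defining trick of the SegNet-style encoder-decoder mentioned above is that the encoder's max-pooling layers record *where* each maximum came from, and the decoder upsamples by placing values back at those remembered positions. A minimal NumPy sketch of that pooling/unpooling pair (single channel, no learned layers, names are illustrative):

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """Encoder step: k-by-k max pooling that also records the flat
    position of each maximum, as SegNet's encoder does."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    idx = np.zeros((h // k, w // k), dtype=int)
    for i in range(h // k):
        for j in range(w // k):
            patch = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            p = patch.argmax()
            pooled[i, j] = patch.flat[p]
            idx[i, j] = (i * k + p // k) * w + (j * k + p % k)
    return pooled, idx

def unpool_with_indices(pooled, idx, shape):
    """Decoder step: sparse upsampling that places each value back at
    its remembered position, leaving all other entries zero."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out

x = np.arange(1.0, 17.0).reshape(4, 4)
pooled, idx = max_pool_with_indices(x)
restored = unpool_with_indices(pooled, idx, x.shape)
```

    Reusing the pooling indices preserves boundary locations without learned upsampling weights, which is why this design keeps the parameter count and computational cost low, the property the paper relies on.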

    Gen-LaneNet: A Generalized and Scalable Approach for 3D Lane Detection

    We present a generalized and scalable method, called Gen-LaneNet, to detect 3D lanes from a single image. The method, inspired by the latest state-of-the-art 3D-LaneNet, is a unified framework that solves image encoding, spatial feature transformation, and 3D lane prediction in a single network. However, we propose unique designs for Gen-LaneNet in two respects. First, we introduce a new geometry-guided lane anchor representation in a new coordinate frame and apply a specific geometric transformation to calculate real 3D lane points directly from the network output. We demonstrate that aligning the lane points with the underlying top-view features in the new coordinate frame is critical for a method that generalizes to unfamiliar scenes. Second, we present a scalable two-stage framework that decouples the learning of the image segmentation subnetwork from that of the geometry encoding subnetwork. Compared to 3D-LaneNet, the proposed Gen-LaneNet drastically reduces the amount of 3D lane labels required to achieve a robust solution in real-world applications. Moreover, we release a new synthetic dataset and its construction strategy to encourage the development and evaluation of 3D lane detection methods. In extensive ablation studies, we substantiate that the proposed Gen-LaneNet significantly outperforms 3D-LaneNet in average precision (AP) and F-score.
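    The "specific geometric transformation" from network output to real 3D points can be illustrated by the underlying geometry: a flat-ground inverse perspective mapping displaces any point that is actually at height z above the road, scaling its top-view position by cam_height / (cam_height - z), and undoing that scaling recovers the 3D point. A small sketch of that idea, assuming a camera at known height above a locally flat road; the function and symbol names are illustrative, not the paper's code:

```python
def topview_to_3d(x_flat, y_flat, z, cam_height):
    """Recover a real 3D lane point from its flat-ground projection.

    Under inverse perspective mapping that assumes a flat road, a point
    at height z above the ground lands at a top-view position scaled by
    cam_height / (cam_height - z) relative to its true (x, y). Undoing
    that scaling yields the 3D point. (Illustrative geometry only.)
    """
    s = (cam_height - z) / cam_height
    return x_flat * s, y_flat * s, z

# A point on the ground plane is unchanged; an elevated point moves
# inward toward the camera's vertical axis.
x, y, z = topview_to_3d(3.0, 30.0, 0.5, cam_height=1.5)
```

    Because this correction is a closed-form function of camera height and point height, the network can predict lanes in the top-view frame and leave the 3D lifting to fixed geometry, which is what makes the representation generalize to unseen scenes.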