
    Online Monocular Lane Mapping Using Catmull-Rom Spline

    In this study, we introduce an online monocular lane mapping approach that relies solely on a single camera and odometry to generate spline-based maps. Our technique models lane association as an assignment problem on a bipartite graph, weighting the edges by Chamfer distance, pose uncertainty, and lateral sequence consistency. Furthermore, we carefully design control point initialization, spline parameterization, and optimization to progressively create, extend, and refine splines. In contrast to prior research that assessed performance on self-constructed datasets, our experiments are conducted on the openly accessible OpenLane dataset. The results show that our approach improves lane association and odometry precision, as well as overall lane map quality. We have open-sourced our code for this project. Comment: Accepted by IROS 2023.
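    The association step can be illustrated with a minimal sketch: a bipartite assignment over lane polylines using only a Chamfer-distance cost (the paper's pose-uncertainty and lateral-consistency terms are omitted). Function names such as `chamfer` and `associate_lanes`, and the gating threshold, are illustrative assumptions, not taken from the released code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial import cKDTree

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two polylines of shape (N, 2)."""
    d_ab = cKDTree(b).query(a)[0].mean()  # each point of a -> nearest in b
    d_ba = cKDTree(a).query(b)[0].mean()  # each point of b -> nearest in a
    return 0.5 * (d_ab + d_ba)

def associate_lanes(observed, mapped, gate=2.0):
    """Match observed lane polylines to mapped splines by min-cost assignment.

    Returns (obs_idx, map_idx) pairs whose cost is below the (assumed) gate;
    unmatched observations would seed new splines in the map.
    """
    cost = np.array([[chamfer(o, m) for m in mapped] for o in observed])
    rows, cols = linear_sum_assignment(cost)  # optimal bipartite matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```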

    Prioritizing Roadway Pavement Marking Maintenance Using Lane Keep Assist Sensor Data

    There are over four million miles of roads in the United States, and prioritizing locations for maintenance activities typically relies on human inspection or semi-automated dedicated vehicles. Pavement markings delineate the boundaries of the lane the vehicle is driving within. These markings are also used by original equipment manufacturers (OEMs) to implement advanced safety features such as lane keep assist (LKA) and, eventually, autonomous operation. However, pavement markings deteriorate over time due to weather and wear from tires and snowplow operations. Furthermore, their performance varies with lighting (day/night) and surface conditions (wet/dry). This paper presents a case study in Indiana where over 5000 miles of interstate were driven and LKA was used to classify pavement markings. Longitudinal comparisons between 2020 and 2021 showed that the percentage of lanes with both lines detected increased from 80.2% to 92.3%. This information can be used for applications such as developing or updating standards for pavement marking materials (infrastructure), quantifying performance measures that automotive OEMs can use to warn drivers of potential problems identifying pavement markings, and prioritizing agency pavement marking maintenance activities.
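    A minimal sketch of how such LKA detection flags might be aggregated into a per-segment performance measure; the column names, sample data, and one-mile grouping below are assumptions for illustration, not the study's actual schema.

```python
import pandas as pd

# Hypothetical LKA log: one row per sample, with a mile-marker position and
# per-side line-detection flags reported by the lane keep assist system.
logs = pd.DataFrame({
    "mile":  [101.2, 101.4, 101.6, 102.1, 102.3],
    "left":  [True, True, False, True, True],
    "right": [True, True, True, True, False],
})

logs["both"] = logs["left"] & logs["right"]
# Share of samples with both lines detected, per one-mile segment;
# low-scoring segments become candidates for restriping.
by_segment = logs.groupby(logs["mile"].astype(int))["both"].mean()
print(by_segment.sort_values().head())
```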

    Practical Auto-Calibration for Spatial Scene-Understanding from Crowdsourced Dashcamera Videos

    Spatial scene-understanding, including dense depth and ego-motion estimation, is an important problem in computer vision for autonomous vehicles and advanced driver assistance systems. Thus, it is beneficial to design perception modules that can utilize crowdsourced videos collected from arbitrary vehicular onboard or dashboard cameras. However, the intrinsic parameters of such cameras are often unknown or change over time. Typical manual calibration approaches require objects such as a chessboard or additional scene-specific information. Automatic camera calibration has no such requirements, yet the automatic calibration of dashboard cameras is challenging because forward, planar navigation produces critical motion sequences with reconstruction ambiguities. Structure reconstruction of complete visual sequences that may contain tens of thousands of images is also computationally untenable. Here, we propose a system for practical monocular onboard camera auto-calibration from crowdsourced videos. We show the effectiveness of our proposed system on the KITTI raw, Oxford RobotCar, and crowdsourced D²-City datasets in varying conditions. Finally, we demonstrate its application to accurate monocular dense depth and ego-motion estimation on uncalibrated videos. Comment: Accepted at the 16th International Conference on Computer Vision Theory and Applications (VISAPP 2021).
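    For intuition, here is one common self-calibration heuristic, not necessarily the paper's method: sweep candidate focal lengths and keep the one whose essential-matrix fit explains the most point matches. It assumes matched pixel coordinates `pts1`/`pts2` from two frames, a centered principal point, and a crude grid search.

```python
import cv2
import numpy as np

def estimate_focal(pts1: np.ndarray, pts2: np.ndarray, w: int, h: int) -> float:
    """Grid-search the focal length (in pixels) that maximizes RANSAC inliers
    of an essential-matrix fit between two views. pts1/pts2: (N, 2) matches."""
    pp = (w / 2.0, h / 2.0)  # assume principal point at the image center
    best_f, best_inliers = None, -1
    for f in np.linspace(0.5 * w, 2.0 * w, 16):  # candidate focal lengths
        _, mask = cv2.findEssentialMat(
            pts1, pts2, focal=f, pp=pp,
            method=cv2.RANSAC, prob=0.999, threshold=1.0)
        inliers = int((mask > 0).sum()) if mask is not None else 0
        if inliers > best_inliers:
            best_f, best_inliers = f, inliers
    return best_f
```

    Note that inlier counting alone is a weak criterion under the forward, planar motion the abstract highlights, which is precisely why a practical system needs more careful sequence selection.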

    Belief Space-Guided Navigation for Robots and Autonomous Vehicles

    Navigating through the environment is a fundamental capability for mobile robots, yet it remains very challenging today. Most robotic applications these days, such as mining, disaster response, and agriculture, require robots to move and perform tasks in a variety of environments that are stochastic and sometimes even unpredictable. A robot often cannot directly observe its current state; instead, it estimates a distribution over the set of possible states based on sensor measurements that are both noisy and partial. Due to actuation noise, the actual robot position differs from its prediction after applying a motion command. Classic navigation algorithms must adapt to settings where the environment behaves stochastically and motion execution carries great uncertainty. To solve such challenging problems, we propose to guide the robot's navigation in the belief space. Belief space-guided navigation differs fundamentally from planning without uncertainty, where the state of the robot is always assumed to be known precisely. The robot senses its environment, estimates its current state under perception uncertainty, and decides whether a new (or a priori) action is appropriate. Based on that determination, it actuates its sensors and moves, subject to motion uncertainty, through the environment. This inspires us to connect robot perception and motion planning and to reason about uncertainty to improve plan quality, so that the robot can follow a collision-free, kinodynamically feasible, and task-optimal trajectory.

    In this dissertation, we explore belief space-guided robotic navigation problems, including belief space-based scene understanding for autonomous vehicles, and introduce belief space-guided robotic planning. We first investigate how belief space can facilitate scene understanding in the context of lane marking quality assessment for autonomous driving. We propose a new problem: measuring the quality of roads and ensuring they are ready for autonomous driving. We focus on developing three quality metrics for lane markings (LMs), a correctness metric, a shape metric, and a visibility metric, and algorithms to assess LM quality to facilitate scene understanding. As another example of using belief space for better scene understanding, we utilize crowdsourced images from multiple vehicles to help verify LMs for high-definition (HD) map maintenance. An LM is consistent if belief functions from the map and the image satisfy statistical hypothesis testing. We further extend the Bayesian belief model into a sequential belief update using crowdsourced images: LMs with a higher probability of existence are kept in the HD map, whereas those with a lower probability of existence are removed (a minimal version of this update is sketched after this abstract).

    Belief space can also help us tightly connect perception and motion planning. As an example, we develop a motion planning strategy for autonomous vehicles. Named the virtual lane boundary approach, this framework considers obstacle avoidance, trajectory smoothness (to satisfy vehicle kinodynamic constraints), trajectory continuity (to avoid sudden movements), global positioning system (GPS) following quality (to execute the global plan), and lane following or partial direction following (to meet human expectations). Consequently, vehicle motion is more human-compatible than in existing approaches.
    As another example of how belief space can guide robots in different tasks, we propose to use it for the probabilistic boundary coverage of unknown target fields (UTFs). We employ Gaussian processes as a local belief function to approximate the field boundary distribution in an ellipse-shaped local region. The local belief function allows us to predict UTF boundary trends and establish an adjacent ellipse for further exploration (a toy version of this step is also sketched below). The process is governed by a depth-first search until the UTF is approximately enclosed by connected ellipses, at which point the boundary coverage process ends. We formally prove that our boundary coverage process guarantees enclosure above a given coverage ratio with a preset probability threshold.
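    The sequential belief update over crowdsourced images can be sketched as a simple Bayes filter on a lane marking's existence probability. The detection rate, false-alarm rate, and decision thresholds below are illustrative assumptions, not values from the dissertation.

```python
def update_existence(prior: float, detected: bool,
                     p_det: float = 0.9, p_fa: float = 0.1) -> float:
    """One Bayes update of P(LM exists) from a single crowdsourced image.
    p_det = P(detected | exists), p_fa = P(detected | not exists) -- assumed."""
    if detected:
        num = p_det * prior
        den = p_det * prior + p_fa * (1.0 - prior)
    else:
        num = (1.0 - p_det) * prior
        den = (1.0 - p_det) * prior + (1.0 - p_fa) * (1.0 - prior)
    return num / den

# Sequentially fuse detections from several vehicles; keep or drop the LM
# from the HD map once the belief crosses an (assumed) decision threshold.
belief = 0.5
for obs in [True, True, False, True]:
    belief = update_existence(belief, obs)
print("keep in HD map" if belief > 0.95 else
      "remove" if belief < 0.05 else "undecided", round(belief, 3))
```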
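    And a toy version of the local belief function for boundary coverage: fit a Gaussian process to boundary samples inside the current ellipse and extrapolate the boundary trend, with uncertainty, to place the next ellipse. The kernel choice, sample data, and one-dimensional arc-length parameterization are simplifying assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Boundary samples observed inside the current ellipse (arc length s -> offset y).
s = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y = np.array([0.0, 0.2, 0.3, 0.25, 0.4])

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3))
gp.fit(s, y)

# Extrapolate one step ahead: the mean predicts the boundary trend and the
# standard deviation could size the next exploration ellipse.
s_next = np.array([[2.5]])
mean, std = gp.predict(s_next, return_std=True)
print(f"next ellipse center offset ~ {mean[0]:.2f} +/- {std[0]:.2f}")
```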

    Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image

    Vision-based identification of lane areas and lane markings on the road is an indispensable function for intelligent driving vehicles, especially for localization, mapping, and planning tasks. However, due to the increasing complexity of traffic scenes, such as occlusion and discontinuity, detecting lanes and lane markings from an image captured by a monocular camera remains persistently challenging. Lanes and lane markings have a strong positional correlation and are constrained by a spatial geometry prior of the driving scene. Most existing studies explore only a single task, i.e., either lane marking or lane detection, and neither consider the inherent connection between the two elements nor model their relationship to improve detection performance on both tasks. In this paper, we establish a novel multi-task encoder–decoder framework for the simultaneous detection of lanes and lane markings. This approach deploys a dual-branch architecture to extract image information at different scales. By revealing the spatial constraints between lanes and lane markings, we propose interactive attention learning over their feature information, which involves a Deformable Feature Fusion module for feature encoding, a Cross-Context module as the information decoder, a Cross-IoU loss, and Focal-style loss weighting for robust training. Without bells and whistles, our method achieves state-of-the-art results on lane marking detection (32.53% IoU, 81.61% accuracy) and lane segmentation (91.72% mIoU) on the BDD100K dataset, an improvement of 6.33% IoU and 11.11% accuracy in lane marking detection and 0.22% mIoU in lane detection over previous methods.
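    A schematic of the dual-branch, multi-task idea only: a shared encoder feeding two task heads trained with a joint loss. The layer sizes and losses are placeholders, and the paper's Deformable Feature Fusion, Cross-Context, Cross-IoU, and Focal-style components are not reproduced here.

```python
import torch
import torch.nn as nn

class DualBranchLaneNet(nn.Module):
    """Shared encoder with two task heads: lane-area segmentation and
    lane-marking segmentation. Schematic only -- not the paper's network."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.lane_head = nn.Conv2d(ch, 1, 1)     # lane-area logits
        self.marking_head = nn.Conv2d(ch, 1, 1)  # lane-marking logits

    def forward(self, x):
        feats = self.encoder(x)                  # features shared by both tasks
        return self.lane_head(feats), self.marking_head(feats)

model = DualBranchLaneNet()
img = torch.randn(2, 3, 128, 256)
lane_logits, marking_logits = model(img)

# Joint training signal: a plain sum of per-task BCE losses stands in for
# the paper's Cross-IoU loss and Focal-style weighting.
bce = nn.BCEWithLogitsLoss()
lane_gt = torch.randint(0, 2, lane_logits.shape).float()
marking_gt = torch.randint(0, 2, marking_logits.shape).float()
loss = bce(lane_logits, lane_gt) + bce(marking_logits, marking_gt)
```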