
    Vehicle Localization Based on Visual Lane Marking and Topological Map Matching

    Accurate and reliable localization is crucial to autonomous vehicle navigation and driver assistance systems. This paper presents a novel approach for online vehicle localization in a digital map. Two distinct map matching algorithms are proposed: i) Iterative Closest Point (ICP) based lane-level map matching performed with a visual lane tracker and a grid map, and ii) a decision-rule based approach for topological map matching. The results of both map matching algorithms are fused with GPS and dead reckoning using an Extended Kalman Filter to estimate the vehicle's pose relative to the map. The proposed approach has been validated under real-life conditions on an equipped vehicle. Detailed analysis of the experimental results shows improved localization using the two aforementioned map matching algorithms.
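
    As a concrete illustration of the fusion step described above, the minimal Python sketch below shows how an Extended Kalman Filter can combine a dead-reckoning prediction with a map-matched pose measurement. The state layout, motion model, and noise values are illustrative assumptions, not the authors' implementation.

        # Hypothetical EKF fusing dead reckoning with a map-matched pose fix.
        # State: [x, y, heading]; all noise parameters are illustrative only.
        import numpy as np

        def ekf_predict(x, P, v, omega, dt, Q):
            """Propagate the pose with a unicycle dead-reckoning model."""
            theta = x[2]
            x_pred = x + np.array([v * np.cos(theta) * dt,
                                   v * np.sin(theta) * dt,
                                   omega * dt])
            # Jacobian of the motion model with respect to the state
            F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                          [0.0, 1.0,  v * np.cos(theta) * dt],
                          [0.0, 0.0,  1.0]])
            return x_pred, F @ P @ F.T + Q

        def ekf_update(x, P, z, R):
            """Correct with a full pose measurement, e.g. an ICP map-matching result."""
            H = np.eye(3)                   # the measurement observes the pose directly
            y = z - H @ x                   # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
            return x + K @ y, (np.eye(3) - K @ H) @ P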

    End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners

    For human drivers, rear- and side-view mirrors are vital for safe driving: they deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and the low-level driving maneuvers (e.g., steering angle and speed) performed by human drivers. With this sensor setup we collect a new driving dataset covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) rendering the planned routes on TomTom Go Mobile and recording the progression as a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction. (To be published at ECCV 2018.)
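
    The paper's actual network is not reproduced here; the schematic PyTorch sketch below merely illustrates one plausible way to fuse per-camera CNN features with a route-planner encoding to regress steering angle and speed. The layer sizes, the ten-waypoint GPS route representation, and all names are assumptions.

        # Schematic driving model: fuse features from N surround-view cameras
        # with a planned-route encoding to regress steering angle and speed.
        # Layer sizes and the GPS-waypoint route encoding are assumptions.
        import torch
        import torch.nn as nn

        class SurroundViewDriver(nn.Module):
            def __init__(self, num_cams=8, feat_dim=128, route_dim=64, n_wp=10):
                super().__init__()
                # Shared per-camera image encoder (a small CNN stand-in).
                self.cam_encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, feat_dim),
                )
                # Encode the planned route given as a stack of GPS waypoints.
                self.route_encoder = nn.Sequential(
                    nn.Linear(2 * n_wp, route_dim), nn.ReLU())
                self.head = nn.Linear(num_cams * feat_dim + route_dim, 2)

            def forward(self, images, route):
                # images: (B, num_cams, 3, H, W); route: (B, n_wp, 2) waypoints
                feats = [self.cam_encoder(images[:, i])
                         for i in range(images.shape[1])]
                fused = torch.cat(feats + [self.route_encoder(route.flatten(1))],
                                  dim=1)
                return self.head(fused)  # (B, 2): steering angle and speed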

    Development of a ground robot for indoor SLAM using Low‐Cost LiDAR and remote LabVIEW HMI

    The simultaneous localization and mapping (SLAM) problem is crucial to autonomous navigation and robot mapping. The main purpose of this thesis is to develop a ground robot that implements SLAM in order to test the performance of the low-cost RPLiDAR A1M8 by DFRobot. The HectorSLAM package, available in ROS, was used with a Raspberry Pi to implement SLAM and build maps. These maps are sent to a remote desktop via TCP/IP communication to be displayed on a LabVIEW HMI, where the user can also control the robot. The LabVIEW HMI, and the project in its entirety, is intended to be as easy as possible for a layman to use, with many processes automated to make this possible. The quality of the maps created by HectorSLAM and the RPLiDAR was evaluated both qualitatively and quantitatively to determine how useful the low-cost LiDAR can be for this application. It is hoped that the apparatus developed in this project will be used with drones in the future for 3D mapping.
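
    As an illustration of the map-forwarding step, the sketch below shows a minimal ROS (rospy) node that subscribes to the HectorSLAM occupancy grid and streams it to a remote host over TCP/IP. The host address, port, and length-prefixed framing are assumptions; the thesis' actual LabVIEW protocol may differ.

        # Sketch: forward the HectorSLAM occupancy grid to a remote HMI over TCP.
        # Host, port, and the length-prefixed framing are assumptions.
        import socket
        import struct

        import rospy
        from nav_msgs.msg import OccupancyGrid

        HOST, PORT = "192.168.0.10", 5005  # hypothetical LabVIEW desktop

        sock = socket.create_connection((HOST, PORT))

        def on_map(msg):
            # Pack width, height, and resolution, then the raw occupancy cells.
            header = struct.pack("<IIf", msg.info.width, msg.info.height,
                                 msg.info.resolution)
            cells = bytes((v & 0xFF) for v in msg.data)  # int8 -> unsigned bytes
            payload = header + cells
            sock.sendall(struct.pack("<I", len(payload)) + payload)

        rospy.init_node("map_to_hmi")
        rospy.Subscriber("/map", OccupancyGrid, on_map)
        rospy.spin()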

    Mapping and localization using GPS, lane markings and proprioceptive sensors

    Estimating the pose in real time is a primary function for intelligent vehicle navigation. Whilst different solutions exist, most of them rely on high-end sensors. This paper proposes a solution that exploits an automotive-grade L1-GPS receiver, features extracted by low-cost perception sensors, and vehicle proprioceptive information. A key idea is to use the lane detection function of a video camera to retrieve accurate lateral and orientation information with respect to road lane markings. To this end, lane markings are mobile-mapped by the vehicle itself during a first stage using an accurate localizer. The resulting map then allows camera-detected features to be exploited for autonomous real-time localization. The results are combined with GPS estimates and dead-reckoning sensors in order to provide localization information with high availability. As L1-GPS errors can be large and are time-correlated, we study several GPS error models that are experimentally tested with shaping filters. The approach demonstrates that the use of low-cost sensors with adequate data-fusion algorithms should lead to computer-controlled guidance functions in complex road networks.
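
    One common shaping-filter choice for time-correlated GPS errors is a first-order Gauss-Markov process; the short sketch below simulates such an error sequence. The correlation time and standard deviation are illustrative values, not the models identified in the paper.

        # First-order Gauss-Markov process, a common shaping-filter model for
        # time-correlated L1-GPS position errors (parameters are illustrative).
        import numpy as np

        def simulate_gm_error(n, dt, tau=60.0, sigma=3.0, rng=None):
            """e[k] = exp(-dt/tau) * e[k-1] + w[k], with Var(e) = sigma**2."""
            rng = rng or np.random.default_rng()
            phi = np.exp(-dt / tau)        # correlation over one time step
            q = sigma**2 * (1.0 - phi**2)  # driving-noise variance
            e = np.zeros(n)
            for k in range(1, n):
                e[k] = phi * e[k - 1] + rng.normal(0.0, np.sqrt(q))
            return e  # correlated error sequence, e.g. to augment a filter state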

    Recognizing Features in Mobile Laser Scanning Point Clouds Towards 3D High-definition Road Maps for Autonomous Vehicles

    The sensors mounted on a driverless vehicle are not always reliable for precise localization and navigation. By comparing real-time sensory data with an a priori map, the autonomous navigation system can transform the complicated sensor perception mission into a simple map-based localization task. However, the lack of robust solutions and standards for creating such lane-level high-definition road maps is a major challenge in this emerging field. This thesis presents a semi-automated method for extracting meaningful road features from mobile laser scanning (MLS) point clouds and creating 3D high-definition road maps for autonomous vehicles. After pre-processing steps including coordinate system transformation and non-ground point removal, a road edge detection algorithm is performed to distinguish road curbs and extract road surfaces, followed by extraction of two categories of road markings. On the one hand, textual and directional road markings including arrows, symbols, and words are detected by intensity thresholding and conditional Euclidean clustering. On the other hand, lane markings (lines) are extracted by local intensity analysis and distance thresholding according to road design standards. Afterwards, centerline points in every single lane are estimated based on the position of the extracted lane markings. Ultimately, 3D road maps with precise road boundaries, road markings, and the estimated lane centerlines are created. The experimental results demonstrate the feasibility of the proposed method, which can accurately extract most road features from the MLS point clouds. The average recall, precision, and F1-score obtained from four datasets for road marking extraction are 93.87%, 93.76%, and 93.73%, respectively. All of the estimated lane centerlines are validated using "ground truth" data manually digitized from 4 cm resolution UAV orthoimages. The results of a comparison study show that the proposed method outperforms several existing methods.
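
    The core marking-extraction idea, intensity thresholding followed by spatial clustering, can be sketched as follows. DBSCAN is used here as a stand-in for conditional Euclidean clustering, and all thresholds are illustrative assumptions rather than the thesis' tuned parameters.

        # Sketch: keep high-intensity road-surface points, then group them by
        # spatial proximity. DBSCAN stands in for conditional Euclidean
        # clustering; all thresholds are illustrative.
        import numpy as np
        from sklearn.cluster import DBSCAN

        def extract_marking_clusters(xyz, intensity, thresh=0.7, eps=0.2,
                                     min_pts=30):
            """xyz: (N, 3) road-surface points; intensity: (N,) in [0, 1]."""
            bright = intensity > thresh  # retro-reflective paint returns brighter
            pts = xyz[bright]
            labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts)
            # One point array per candidate marking (label -1 is noise).
            return [pts[labels == k] for k in set(labels) if k != -1]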

    Indoor Localisation of Scooters from Ubiquitous Cost-Effective Sensors: Combining Wi-Fi, Smartphone and Wheel Encoders

    Indoor localisation of people and objects has been a focus of research for several decades because of its great advantage to several applications. Accuracy has always been a challenge because of the uncertainty of the employed sensors. Several technologies have been proposed and researched; however, accuracy still represents an issue. Today, several sensor technologies can be found in indoor environments, some of which are economical and powerful, such as Wi-Fi. Meanwhile, smartphones are typically present indoors, carried by people moving about within rooms and buildings. Furthermore, vehicles such as mobility scooters, which may be equipped with low-cost sensors such as wheel encoders, can also be present indoors to support people with mobility impairments. This thesis investigates the localisation of mobility scooters operating indoors. This is a distinctive topic, as most of today's indoor localisation systems target pedestrians, and accurate indoor localisation of these scooters is challenging because of their type of motion and specific behaviour. The thesis focuses on improving localisation accuracy for mobility scooters using already available indoor sensors. It proposes the combined use of Wi-Fi, smartphone IMU, and wheel encoders, which represents a cost-effective, energy-efficient solution. A method has been devised and a system developed, which has been tested in different environment settings. The outcomes of the experiments are presented and carefully analysed in the thesis. The outcomes of several trials demonstrate the potential of the proposed solutions to reduce positional errors significantly when compared to the state of the art in the same area. The proposed combination demonstrated an error range of 0.35 m - 1.35 m, which can be acceptable in several applications, such as some related to assisted living. As the proposed system capitalises on ubiquitous technologies, it opens the door to quick uptake by the market, therefore being of great benefit to the target audience.
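
    To make the proposed sensor combination concrete, the toy sketch below advances a differential-drive odometry estimate from wheel-encoder ticks and periodically blends it toward a Wi-Fi position fix. The wheel geometry, encoder resolution, and blending weight are assumptions; the thesis' actual fusion method is not reproduced here.

        # Toy combination of wheel-encoder odometry with Wi-Fi position fixes.
        # Encoder resolution, wheel base, and the blend weight are assumptions.
        import numpy as np

        TICKS_PER_M = 2000.0  # hypothetical encoder ticks per metre
        WHEEL_BASE = 0.6      # hypothetical distance between wheels (m)

        def odom_step(pose, d_left_ticks, d_right_ticks):
            """Advance (x, y, heading) from left/right encoder increments."""
            dl = d_left_ticks / TICKS_PER_M
            dr = d_right_ticks / TICKS_PER_M
            d, dth = (dl + dr) / 2.0, (dr - dl) / WHEEL_BASE
            x, y, th = pose
            return np.array([x + d * np.cos(th + dth / 2.0),
                             y + d * np.sin(th + dth / 2.0),
                             th + dth])

        def wifi_correct(pose, wifi_xy, alpha=0.2):
            """Blend the dead-reckoned position toward a Wi-Fi fingerprint fix."""
            pose = pose.copy()
            pose[:2] = (1.0 - alpha) * pose[:2] + alpha * np.asarray(wifi_xy)
            return pose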