4 research outputs found

    MOZARD: Multi-Modal Localization for Autonomous Vehicles in Urban Outdoor Environments

    Visually poor scenarios are one of the main sources of failure in visual localization systems in outdoor environments. To address this challenge, we present MOZARD, a multi-modal localization system for urban outdoor environments using vision and LiDAR. By extending our preexisting key-point-based visual multi-session localization approach with semantic data, improved localization recall can be achieved across vastly different appearance conditions. In particular, we focus on the use of curbstone information because of its broad distribution and reliability within urban environments. We present thorough experimental evaluations over several kilometers of driving in challenging urban outdoor environments, analyze the recall and accuracy of our localization system, and demonstrate in a case study possible failure cases of each subsystem. We demonstrate that MOZARD is able to bridge scenarios where our previous work, VIZARD, fails, yielding increased recall performance while achieving a similar localization accuracy of 0.2 m.

    Evaluating the Performance of a Visual Support System for Driving Assistance using a Deep Learning Algorithm

    The issue of road accidents endangering human life has become a global concern due to the rise in traffic volumes. This article presents the evaluation of an object detection model for University of Malaysia Pahang (UMP) roadside conditions, focusing on the detection of vehicles, motorcycles, and traffic lamps. The dataset consists of images captured along the driving route from Hospital Pekan to the University of Malaysia Pahang. Around one thousand images were selected in Roboflow for the training dataset. The model utilises the YOLOv8 deep learning algorithm in the Google Colab environment and is trained using a custom dataset managed by the Roboflow dataset manager. The dataset comprises a diverse set of training and validation images, capturing the unique characteristics of Malaysian roads. The trained model's performance was assessed using the F1 score, precision, and recall, with results of 71%, 88.2%, and 84%, respectively. A comprehensive comparison with validation results has shown the efficacy of the proposed model in accurately detecting vehicles, motorcycles, and traffic lamps in real-world Malaysian road scenarios. This study contributes to the improvement of intelligent transportation systems and road safety in Malaysia.
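    The abstract above evaluates the detector with precision, recall, and F1. As a generic reminder of how these metrics are derived from detection counts (a minimal sketch using illustrative numbers, not the paper's evaluation code or data):

    ```python
    def detection_metrics(tp: int, fp: int, fn: int) -> dict:
        """Compute precision, recall, and F1 from detection counts.

        tp: correct detections, fp: spurious detections,
        fn: missed ground-truth objects.
        """
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall.
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"precision": precision, "recall": recall, "f1": f1}

    # Illustrative counts only (not from the UMP dataset):
    m = detection_metrics(tp=84, fp=16, fn=16)
    print(m)  # precision 0.84, recall 0.84, f1 0.84
    ```

    In object detection, a prediction typically counts as a true positive when its bounding box overlaps a ground-truth box above an IoU threshold (commonly 0.5) with the correct class label.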