Warning system in smart vehicles for detection of traffic signs, lights, and obstacles

Abstract

Vehicle accidents often result from driver negligence and failure to follow traffic rules. Safe driving requires the driver to remain aware of the different situations occurring on the road. To help drivers maintain focus and to reduce accidents, this research proposes an enhanced automated early-warning system that alerts the driver whenever a traffic object is detected, focusing specifically on improving the detection of traffic signs, signals, pedestrians, and obstacles. Although object detection and image processing have advanced considerably, in-vehicle systems for identifying these traffic objects still need improvement. In this work, a pre-processing step based on ESRGAN (Enhanced Super-Resolution Generative Adversarial Network) enhances the contrast and resolution of the extracted traffic objects. The research compares the accuracy of two detection algorithms, You Only Look Once version 3 (YOLOv3) and You Only Look Once version 4 (YOLOv4), with and without the ESRGAN pre-processing. The main objective is to increase the accuracy of traffic object detection and to determine which deep learning technique yields better performance, facilitating real-time sensing of traffic objects in autonomous vehicles. The proposed ESRGAN pre-processing improves detection accuracy under challenging conditions such as nighttime, rain, and extreme weather. The enhanced YOLOv3 model achieved an average accuracy of 92.83% for traffic object detection, compared to 87.11% for the unenhanced YOLOv3 model, an improvement of 5.72 percentage points. The enhanced YOLOv4 model achieved an average accuracy of 94.16%, compared to 87.32% for the unenhanced YOLOv4 model, an improvement of 6.84 percentage points.
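The reported gains can be reproduced directly from the stated average accuracies; a minimal sketch (the dictionary layout is illustrative, only the four accuracy figures come from the abstract):

```python
# Reported average detection accuracies (%) for each detector,
# without and with the ESRGAN pre-processing step.
results = {
    "YOLOv3": {"baseline": 87.11, "with_esrgan": 92.83},
    "YOLOv4": {"baseline": 87.32, "with_esrgan": 94.16},
}

for model, acc in results.items():
    # The improvement is a difference of accuracies, i.e. percentage points.
    gain = acc["with_esrgan"] - acc["baseline"]
    print(f"{model}: {acc['baseline']}% -> {acc['with_esrgan']}% "
          f"(+{gain:.2f} percentage points)")
```

Note that the improvements are differences between two accuracy figures, so they are percentage points rather than relative percentage gains.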

Texas A&M University-Kingsville: AKM Digital Repository

Last time updated on 10/02/2024
