
    Domain Adaptation For Vehicle Detection In Traffic Surveillance Images From Daytime To Nighttime

    Vehicle detection in traffic surveillance images is an important approach to obtaining vehicle data and rich traffic flow parameters. Recently, deep learning based methods have been widely used in vehicle detection with high accuracy and efficiency. However, deep learning based methods require a large number of manually labeled ground truths (a bounding box for each vehicle in each image) to train Convolutional Neural Networks (CNN). For modern urban surveillance cameras, many manually labeled ground truths already exist for daytime images, while few or none are available for nighttime images. In this paper, we focus on making maximum use of labeled daytime images (Source Domain) to help vehicle detection in unlabeled nighttime images (Target Domain). For this purpose, we propose a new method based on Faster R-CNN with Domain Adaptation (DA) to improve vehicle detection at nighttime. With the assistance of DA, the distribution discrepancy between the Source and Target Domains is reduced. We collected a new dataset of 2,200 traffic images (1,200 daytime and 1,000 nighttime) containing 57,059 vehicles for training and testing the CNN. In the experiment, using only the manually labeled ground truths of the daytime data, Faster R-CNN obtained an F-measure of 82.84% on nighttime vehicle detection, while the proposed method (Faster R-CNN+DA) achieved 86.39%.
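    The F-measure figures quoted above combine precision and recall into one score. As a minimal sketch (the counts below are hypothetical, not the paper's):

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall (F1), the metric
    commonly reported for detection benchmarks."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 900 true positives, 150 false positives,
# 120 missed vehicles.
score = f_measure(900, 150, 120)  # ≈ 0.8696
```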

    SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras

    Intelligent transportation systems (ITS) have revolutionized modern road infrastructure, providing essential functionalities such as traffic monitoring, road safety assessment, congestion reduction, and law enforcement. Effective vehicle detection and accurate vehicle pose estimation are crucial for ITS, particularly using monocular cameras installed on the road infrastructure. One fundamental challenge in vision-based vehicle monitoring is keypoint detection, which involves identifying and localizing specific points on vehicles (such as headlights, wheels, taillights, etc.). However, this task is complicated by vehicle model and shape variations, occlusion, weather, and lighting conditions. Furthermore, existing traffic perception datasets for keypoint detection predominantly focus on frontal views from ego vehicle-mounted sensors, limiting their usability in traffic monitoring. To address these issues, we propose SKoPe3D, a unique synthetic vehicle keypoint dataset generated using the CARLA simulator from a roadside perspective. This comprehensive dataset includes generated images with bounding boxes, tracking IDs, and 33 keypoints for each vehicle. Spanning over 25k images across 28 scenes, SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints. To demonstrate its utility, we trained a keypoint R-CNN model on our dataset as a baseline and conducted a thorough evaluation. Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data. By leveraging the SKoPe3D dataset, researchers and practitioners can overcome the limitations of existing datasets, enabling advancements in vehicle keypoint detection for ITS.
    Comment: Accepted to IEEE ITSC 202
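    Keypoint predictions like those above are typically scored by distance to the ground-truth location. A minimal sketch of one common metric, PCK (Percentage of Correct Keypoints); the threshold and inputs here are illustrative, not the paper's evaluation protocol:

```python
def pck(pred, gt, bbox_diag, alpha=0.05):
    """Fraction of predicted keypoints lying within alpha * bounding-box
    diagonal of their ground-truth location. `pred` and `gt` are
    parallel lists of (x, y) points."""
    correct = sum(
        1 for (px, py), (gx, gy) in zip(pred, gt)
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= alpha * bbox_diag
    )
    return correct / len(gt)

# First keypoint exact, second 10 px off; threshold is 0.05 * 100 = 5 px.
score = pck([(0, 0), (10, 0)], [(0, 0), (0, 0)], bbox_diag=100)  # → 0.5
```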

    Performance Analysis of YOLO-based Architectures for Vehicle Detection from Traffic Images in Bangladesh

    The task of locating and classifying different types of vehicles has become a vital element in numerous applications of automation and intelligent systems, ranging from traffic surveillance to vehicle identification and many more. In recent times, deep learning models have dominated the field of vehicle detection. Yet Bangladeshi vehicle detection has remained a relatively unexplored area. One of the main goals of vehicle detection is real-time application, where `You Only Look Once' (YOLO) models have proven to be the most effective architecture. In this work, intending to find the best-suited YOLO architecture for fast and accurate vehicle detection from traffic images in Bangladesh, we conducted a performance analysis of different YOLO-based architectures: YOLOv3, YOLOv5s, and YOLOv5x. The models were trained on a dataset containing 7,390 images of 21 types of vehicles, comprising samples from the DhakaAI dataset, the Poribohon-BD dataset, and our self-collected images. After thorough quantitative and qualitative analysis, we found the YOLOv5x variant to be the best-suited model, outperforming the YOLOv3 and YOLOv5s models by 7 and 4 percent in mAP, and by 12 and 8.5 percent in accuracy, respectively.
    Comment: Accepted in 25th ICCIT (6 pages, 5 figures, 1 table)
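    The mAP comparison above averages per-class Average Precision. A minimal sketch of 11-point interpolated AP for one class, computed from a confidence-ranked list of detection outcomes (the inputs are illustrative):

```python
def average_precision(ranked_hits, num_gt):
    """11-point interpolated AP. `ranked_hits` is a list of booleans,
    one per detection in descending confidence order (True = matched
    an unclaimed ground-truth box); `num_gt` is the ground-truth count."""
    precisions, recalls = [], []
    tp = 0
    for i, hit in enumerate(ranked_hits, start=1):
        tp += hit
        precisions.append(tp / i)
        recalls.append(tp / num_gt)
    ap = 0.0
    for r in [t / 10 for t in range(11)]:
        # Interpolated precision: best precision at recall >= r.
        p = max((p for p, rec in zip(precisions, recalls) if rec >= r),
                default=0.0)
        ap += p / 11
    return ap

ap = average_precision([True, True, False, True], num_gt=4)
```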

    Incident and Traffic-Bottleneck Detection Algorithm in High-Resolution Remote Sensing Imagery

    One of the most important methods to solve traffic congestion is to detect the incident state of a roadway. This paper describes the development of a method for road traffic monitoring aimed at the acquisition and analysis of remote sensing imagery. We propose a strategy for road extraction, vehicle detection, and incident detection from remote sensing imagery using techniques based on neural networks, the Radon transform for angle detection, and traffic-flow measurements. Traffic-bottleneck detection is another proposed method for recognizing incidents in both offline and real-time modes. Traffic flows and incidents are extracted from aerial images of bottleneck zones. The results show that the proposed approach has a reasonable detection performance compared to other methods. The best performance of the learning system was a detection rate of 87% and a false alarm rate of less than 18% on 45 aerial images of roadways. The traffic-bottleneck detection method had a detection rate of 87.5%.
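    The Radon transform used here for road angle detection projects the image along a sweep of directions; a straight road concentrates into a sharp peak at the matching angle. A toy stdlib-only sketch of that idea on foreground pixel coordinates (the data and binning are illustrative, not the paper's pipeline):

```python
import math
from collections import Counter

def dominant_angle(points, angles):
    """Radon-style sweep: project foreground pixels onto each candidate
    direction and pick the angle whose projection is most concentrated
    (a straight line collapses into a single bin)."""
    best_angle, best_peak = None, -1
    for deg in angles:
        theta = math.radians(deg)
        bins = Counter(round(x * math.cos(theta) + y * math.sin(theta))
                       for x, y in points)
        peak = max(bins.values())
        if peak > best_peak:
            best_angle, best_peak = deg, peak
    return best_angle

# Hypothetical road edge: a vertical run of pixels at x = 10.
line = [(10, y) for y in range(50)]
angle = dominant_angle(line, angles=range(0, 180, 15))  # → 0
```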

    Highway traffic monitoring on medium resolution satellite images

    In recent years, earth observation imagery has significantly improved. Satellites such as WorldView-3 can now produce images with a Ground Sample Distance (GSD) of 31 cm, reaching a resolution equivalent to that of aerial images. Perhaps more importantly, the revisit frequency has also been greatly enhanced: providers such as Planet can now acquire images of an area on a daily basis. These major improvements are fueled by an increasing demand for frequent object detection. One application generating particular interest is vehicle detection. Indeed, vehicle detection can give public and private actors valuable data such as traffic monitoring and parking occupancy rate estimations. Several datasets, such as DOTA or VehSat, already exist, allowing researchers to train machine learning algorithms to detect vehicles. However, these datasets focus on relatively high definition and expensive aerial and satellite images. In this paper, we present a method for detecting vehicles in medium resolution satellite images, with a GSD between 1 and 5 meters. This approach can notably be used on Planet images, making it possible to monitor the traffic of an area on a daily basis.
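    The GSD figures above determine how many pixels a vehicle occupies, which is what makes the 1-5 m regime hard. A trivial sketch of that arithmetic (the car dimensions are assumed, typical values):

```python
def pixel_footprint(length_m, width_m, gsd_m):
    """Approximate pixel extent of an object imaged at a given
    Ground Sample Distance (meters per pixel)."""
    return (length_m / gsd_m, width_m / gsd_m)

# A ~4.5 m x 1.8 m car at 31 cm GSD (WorldView-3) vs 3 m GSD (Planet-class):
hi_res = pixel_footprint(4.5, 1.8, 0.31)  # ~14.5 x 5.8 px
lo_res = pixel_footprint(4.5, 1.8, 3.0)   # ~1.5 x 0.6 px, sub-pixel width
```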

    INTEGRATED LOW LIGHT IMAGE ENHANCEMENT IN TRANSPORTATION SYSTEM

    Recent Intelligent Transportation Systems (ITS) focus on both traffic management and homeland security. They involve advanced detection systems of all kinds, but proper analysis of the image data is required for control and further processing. This becomes even more difficult with low-light images, due to limitations of the image sensor and heavy noise. An ITS supports all levels of traffic management (transport policy, tactical traffic control, traffic control measures, and traffic control operation). For this it uses several subsystems such as Real Time Passenger Information (RTPI), Automatic Number Plate Recognition (ANPR), Variable Message Signs (VMS), Vehicle to Infrastructure (V2I), and Vehicle to Vehicle (V2V) systems. When analyzing critical scenarios, particularly for the development of V2I applications, several cases are taken into consideration. Some of these are very difficult to analyze because of poor background visibility when fine structural detail matters. Directly processing low-light images or video frames as if they were daytime images leads to loss of required data, so an efficient enhancement method is needed that yields acceptable results for further transformation and analysis with minimal processing. An Adaptive Enhancement Method is therefore presented, which applies different enhancement methods to daylight and low-light images separately: a combination of image fusion, edge-detection filtering, and the Contourlet transform for low-light images, and tone-level adjustment with low-level feature extraction for daylight images.
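    The adaptive idea, routing each frame to a different enhancement path by brightness, can be sketched minimally as follows. Note this shows only the routing: the paper's low-light path uses image fusion, edge filtering, and the Contourlet transform, for which plain gamma correction stands in here; the threshold is an assumption.

```python
def adaptive_enhance(pixels, night_threshold=0.25, gamma=0.5):
    """Route a frame by mean brightness: low-light frames get a
    brightening transform (gamma < 1 lifts shadows), daylight frames
    pass through. `pixels` are grayscale intensities in [0, 1]."""
    mean = sum(pixels) / len(pixels)
    if mean < night_threshold:                 # low-light path
        return [p ** gamma for p in pixels]
    return pixels                              # daylight path (identity here)

dark = [0.01, 0.04, 0.09, 0.16]
out = adaptive_enhance(dark)  # ≈ [0.1, 0.2, 0.3, 0.4]
```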

    Detection of Motorcycle Headlights Using YOLOv5 and HSV

    "Electronic Traffic Law Enforcement" (ETLE) denotes a mechanism that employs electronic technologies to implement traffic regulations. This commonly entails utilizing a range of electronic apparatuses such as cameras, sensors, and automated setups to oversee and uphold traffic protocols, administer fines, and enhance road security. ETLE systems are frequently utilized for identifying and sanctioning infractions such as exceeding speed limits, disregarding red lights, and driving with headlights turned off. In Indonesia, there is currently no dedicated system designed to detect traffic violations, especially regarding vehicle headlights. Therefore, this research was conducted to detect vehicle headlights using digital images. With the results of this study, it will be possible to develop a system capable of classifying whether vehicle headlights are on or off. This research employed a deep learning method, the YOLOv5 model, which achieved an accuracy of 94.12% in detecting vehicle images. Furthermore, white color extraction was performed by projecting the RGB space to HSV to detect the Region of Interest (ROI) of the vehicle headlights, achieving an accuracy of 73.76%. The results of this vehicle headlight detection are influenced by factors such as lighting, image capture angle, and vehicle type.
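    The RGB-to-HSV projection for white extraction rests on the fact that white light has low saturation and high value. A per-pixel sketch; the thresholds are illustrative, not the paper's:

```python
def is_headlight_white(r, g, b, s_max=0.2, v_min=0.8):
    """Classify an RGB pixel (channels in [0, 1]) as 'white light'
    via the HSV projection: V = max channel, S = (max - min) / max."""
    v = max(r, g, b)
    s = 0.0 if v == 0 else (v - min(r, g, b)) / v
    return s <= s_max and v >= v_min

on = is_headlight_white(0.95, 0.93, 0.90)   # → True  (bright, near-gray)
off = is_headlight_white(0.10, 0.10, 0.12)  # → False (too dark)
```

    In practice this test is applied inside the YOLOv5 bounding box to build the headlight ROI mask.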

    Real Time Object Detection, Tracking, and Distance and Motion Estimation based on Deep Learning: Application to Smart Mobility

    In this paper, we introduce our object detection, localization, and tracking system for smart mobility applications such as road traffic and railway environments. First, an object detection and tracking approach was carried out with two deep learning approaches: You Only Look Once (YOLO) v3 and the Single Shot Detector (SSD). A comparison between the two methods allows us to identify their applicability to the traffic environment. Performance in both road and railway environments was evaluated. Second, object distance estimation based on the Monodepth algorithm was developed. This model is trained on a stereo image dataset, but its inference uses monocular images. As output we obtain a disparity map, which we combine with the output of object detection. To validate our approach, we tested two models with different backbones, including VGG and ResNet, on two datasets: Cityscapes and KITTI. As the last step of our approach, we developed a new SSD-based method to analyse the behavior of pedestrians and vehicles by tracking their movements even when there is no detection in some images of a sequence. We developed an algorithm based on the coordinates of the output bounding boxes of the SSD algorithm. The objective is to determine whether the trajectory of a pedestrian or vehicle can lead to a dangerous situation. The whole development was tested in real vehicle traffic conditions in the Rouen city center and on videos taken by cameras embedded along the Rouen tramway.
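    Tracking through missed detections, as described above, needs a motion model to carry a box across frames where the detector fails. A minimal linear sketch on box coordinates; this is a simplified stand-in for the paper's coordinate-based algorithm, not its actual implementation:

```python
def extrapolate_box(prev_box, curr_box, steps=1):
    """Predict a bounding box `steps` frames ahead by linearly
    extrapolating each coordinate from the last two observations.
    Boxes are (x1, y1, x2, y2) tuples."""
    return tuple(c + steps * (c - p) for p, c in zip(prev_box, curr_box))

# A vehicle moving 10 px right per frame; predict the next frame's box
# when the detector returns nothing for that frame.
pred = extrapolate_box((0, 0, 40, 20), (10, 0, 50, 20))  # → (20, 0, 60, 20)
```

    The predicted box can then be matched (e.g. by overlap) against later detections to resume the track.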