Domain Adaptation based Enhanced Detection for Autonomous Driving in Foggy and Rainy Weather
Typically, object detection methods for autonomous driving that rely on
supervised learning assume a consistent feature distribution between the
training and testing data; however, this assumption may fail under
different weather conditions. Due to the domain gap, a detection model trained
under clear weather may not perform well in foggy and rainy conditions.
Overcoming detection bottlenecks in foggy and rainy weather is a real challenge
for autonomous vehicles deployed in the wild. To bridge the domain gap and
improve the performance of object detection in foggy and rainy weather, this
paper presents a novel framework for domain-adaptive object detection. The
adaptations at both the image-level and object-level are intended to minimize
the differences in image style and object appearance between domains.
Furthermore, in order to improve the model's performance on challenging
examples, we introduce a novel adversarial gradient reversal layer that
conducts adversarial mining on difficult instances in addition to domain
adaptation. Additionally, we suggest generating an auxiliary domain through
data augmentation to enforce a new domain-level metric regularization.
Experimental findings on a public V2V benchmark exhibit a substantial
enhancement in object detection specifically for foggy and rainy driving scenarios.
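The gradient reversal layer at the heart of this kind of adversarial domain adaptation acts as the identity in the forward pass and negates (and scales) gradients in the backward pass, so the feature extractor learns features that confuse the domain classifier. A minimal, framework-free sketch of the idea (class and parameter names are illustrative, not taken from the paper):

```python
class GradientReversalLayer:
    """Identity on the forward pass; flips the sign of incoming
    gradients (scaled by lam) on the backward pass."""

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength, often annealed during training

    def forward(self, features):
        # Pass features through to the domain classifier unchanged.
        return features

    def backward(self, upstream_grads):
        # Reverse the gradient flowing back from the domain classifier,
        # pushing the feature extractor toward domain-invariant features.
        return [-self.lam * g for g in upstream_grads]
```

In a real framework this is usually written as a custom autograd function; the paper's adversarial-mining variant would additionally re-weight difficult instances, which this sketch omits.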
Lightweight Object Detection Ensemble Framework for Autonomous Vehicles in Challenging Weather Conditions
The computer vision systems driving autonomous vehicles are judged by their ability to detect objects and obstacles in the vicinity of the vehicle in diverse environments. Enhancing the ability of a self-driving car to distinguish between the elements of its environment under adverse conditions is an important challenge in computer vision. For example, poor weather conditions like fog and rain lead to image corruption, which can cause a drastic drop in object detection (OD) performance. The primary navigation of autonomous vehicles depends on the effectiveness of the image processing techniques applied to the data collected from various visual sensors. Therefore, it is essential to develop the capability to detect objects like vehicles and pedestrians under challenging conditions such as unpleasant weather. Ensembling multiple baseline deep learning models under different voting strategies for object detection, combined with data augmentation to boost the models' performance, is proposed to solve this problem. The data augmentation technique is particularly useful and works with limited training data for OD applications. Furthermore, using the baseline models significantly speeds up the OD process compared to custom models due to transfer learning. Therefore, the ensembling approach can be highly effective in resource-constrained devices deployed for autonomous vehicles in uncertain weather conditions. The applied techniques demonstrated an increase in accuracy over the baseline models, identifying objects in images captured in adverse foggy and rainy weather and reaching 32.75% mean average precision (mAP) and 52.56% average precision (AP) in detecting cars under the fog and rain conditions present in the dataset.
The effectiveness of multiple voting strategies for bounding box predictions on the dataset is also demonstrated. These strategies help increase the explainability of object detection in autonomous systems and improve the performance of the ensemble techniques over the baseline models.
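A common voting strategy for detector ensembles is consensus voting: a predicted box is kept only if detections from enough models overlap with it, with overlap measured by intersection-over-union (IoU). A minimal sketch of the idea (function names and thresholds are illustrative; the paper's exact strategies may differ):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def consensus_vote(model_outputs, iou_thr=0.5, min_votes=2):
    """Keep a box only if at least `min_votes` models produced an
    overlapping detection (IoU >= iou_thr).  `model_outputs` is a list
    of per-model box lists; a box always matches its own model."""
    kept = []
    for boxes in model_outputs:
        for box in boxes:
            votes = sum(
                any(iou(box, other) >= iou_thr for other in out)
                for out in model_outputs
            )
            # Deduplicate against boxes already accepted.
            if votes >= min_votes and not any(
                iou(box, k) >= iou_thr for k in kept
            ):
                kept.append(box)
    return kept
```

An "affirmative" strategy would instead keep every box (min_votes=1), trading precision for recall; comparing such settings is one way to study the voting strategies the abstract mentions.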
Autonomous real-time surveillance system with distributed IP cameras
An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected to extract their trajectories, which are then fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on live video by the operator.