142 research outputs found

    Blue Channel and Fusion for Sandstorm Image Enhancement

    Multi-Scale Fusion of Enhanced Hazy Images Using Particle Swarm Optimization and Fuzzy Intensification Operators

    Dehazing from a single image remains a challenging task, since the thickness of the haze depends on scene depth. Most work in this area removes haze from a single image with restoration techniques based on the haze image model, in which the haze is eliminated by estimating the atmospheric light, transmission, and depth. A few researchers have instead pursued enhancement-based methods. Enhancement-based dehazing algorithms tend to saturate pixels in the enhanced image because fixed values are assigned to the enhancement parameters; these methods therefore fail to tune the parameters properly. This can be overcome by optimizing the parameters used to enhance the images. This paper describes work carried out to derive two enhanced images from a single input hazy image using particle swarm optimization and fuzzy intensification operators. The two derived images are then fused using a multi-scale fusion technique. The objective evaluation shows that the entropy of the dehazed images compares favourably with state-of-the-art algorithms. Fog density is also measured with the fog aware density evaluator (FADE), which uses statistical features to differentiate a hazy image from a highly visible natural image; by this measure, the fog density of the proposed method's results is lower than that of other enhancement-based dehazing algorithms.
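    A minimal sketch of the enhancement step described above, assuming a standard fuzzy intensification operator applied to a grayscale channel. The fuzzifier parameters fd and fe are the kind of values the paper proposes to tune with particle swarm optimization (e.g. with image entropy as the fitness function) rather than fixing them by hand; here they are simply passed in as arguments, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def fuzzy_intensify(img, fd=128.0, fe=2.0):
    """img: 2-D uint8 array; fd, fe: fuzzifiers (candidates for PSO tuning)."""
    x = img.astype(np.float64)
    x_max = x.max()
    # Fuzzification: map intensities to membership values in (0, 1].
    mu = (1.0 + (x_max - x) / fd) ** (-fe)
    # Intensification (INT) operator: push memberships away from 0.5.
    mu_int = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # Defuzzification: map memberships back to the intensity range.
    out = x_max - fd * (mu_int ** (-1.0 / fe) - 1.0)
    return np.clip(out, 0, 255).astype(np.uint8)
```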

    Recent Trends in Video Surveillance System in Dense Environment: - A Review Paper

    Snow, fog, lightning, torrential rain, and darkness degrade outdoor surveillance footage. The detection, categorization, and event/object recognition capabilities of video surveillance systems in dense environments have therefore attracted considerable interest, and advances in technology have improved real-time video analysis algorithms under varied weather conditions. Examples include background extraction, the see-through algorithm, deep learning models, CNNs for nocturnal incursions, high-quality underwater monitoring using optical-wireless video surveillance, LVENet, and edge computing. These methodologies have improved monitoring efficiency and decreased human error. This review details these video surveillance techniques, platforms, and supplementary materials. After a brief discussion of prevalent system designs and architectural styles, significant system evaluations are presented. To give readers a thorough understanding, current surveillance systems are contrasted with various methods for real-time video processing under challenging weather conditions, and directions for further research are highlighted.

    Switching GAN-based Image Filters to Improve Perception for Autonomous Driving

    Autonomous driving holds the potential to increase human productivity, reduce accidents caused by human error, allow better utilization of roads, reduce traffic congestion, free up parking space, and provide many other advantages. Perception for Autonomous Vehicles (AVs) refers to the use of sensors to perceive the world, e.g. using cameras to detect and classify objects. Traffic scene understanding is a key research problem in perception for autonomous driving, and semantic segmentation is a useful method to address it. Adverse weather conditions are a reality that AVs must contend with: rain, snow, haze, and similar conditions can drastically reduce visibility and thus affect computer vision models, yet perception models for AVs are currently designed for and tested on predominantly ideal weather conditions under good illumination. The most complete solution may be to train the segmentation networks on all possible adverse conditions, so a dataset intended to make a segmentation network robust to rain would need adequate data covering these conditions well. Moreover, labeling is an expensive task; it is particularly expensive for semantic segmentation, as each object in a scene needs to be identified and each pixel annotated with the right class. Adverse weather is therefore a challenging problem for perception models in AVs. This thesis explores the use of Generative Adversarial Networks (GANs) to improve semantic segmentation, and we design a framework and a methodology to evaluate the proposed approach. The framework consists of an Adversity Detector and a series of denoising filters. The Adversity Detector is an image classifier that takes as input clear-weather or adverse-weather scenes and predicts whether the given image contains rain, puddles, or other conditions that can adversely affect semantic segmentation. The filters are denoising generative adversarial networks trained to remove the adverse conditions from images, translating the image to the domain the segmentation network has been trained on, i.e. clear-weather images. The prediction from the Adversity Detector is used to choose which GAN filter to apply. The methodology we devise for evaluating our approach uses the trained filters to output sets of images on which we can then run segmentation tasks; this, we argue, is a better metric for evaluating the GANs than similarity measures such as SSIM. We also use synthetic data so we can perform a systematic evaluation of our technique. We train two kinds of GANs: one that uses paired data (Pix2Pix) and one that does not (CycleGAN). We concluded that GAN architectures that use unpaired data are not sufficiently good models for denoising, so we train the denoising filters using the paired architecture; we found them easy to train, and they show good results. While these filters do not perform better than a segmentation network trained directly on adverse weather data, training the segmentation network requires labelled data, which is expensive to collect and annotate, particularly for adverse weather and lighting conditions. We implement our proposed framework and report a 17% increase in segmentation performance over the baseline results obtained without our framework.
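    A minimal sketch of the switching logic described above, assuming PyTorch-style models. The detector, the per-condition GAN generators, and the segmentation network are placeholders for trained networks, and the condition labels are illustrative; this is not the thesis code.

```python
import torch

CONDITIONS = ["clear", "rain", "puddle"]  # illustrative class labels

@torch.no_grad()
def segment_with_switching(image, detector, filters, segmenter):
    """image: (1, 3, H, W) float tensor; filters: dict mapping condition -> GAN generator."""
    # 1. Adversity Detector: classify the adverse condition present in the image.
    condition = CONDITIONS[detector(image).argmax(dim=1).item()]
    # 2. If an adverse condition is detected, run the matching denoising GAN
    #    to translate the image back to the clear-weather domain.
    if condition != "clear":
        image = filters[condition](image)
    # 3. Semantic segmentation on the (possibly filtered) image.
    return segmenter(image)
```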

    A Robust Object Detection System for Driverless Vehicles through Sensor Fusion and Artificial Intelligence Techniques

    Since the early 1990s, various research domains have been concerned with the concept of autonomous driving, leading to the widespread implementation of numerous advanced driver assistance features; however, fully automated vehicles have not yet been introduced to the market. The process of autonomous driving can be outlined in the following stages: environment perception, ego-vehicle localization, trajectory estimation, path planning, and vehicle control. Environment perception is partially based on computer vision algorithms that detect and track surrounding objects. Object detection by autonomous vehicles is challenging for several reasons, such as the presence of multiple dynamic objects in the same scene, interactions between objects, real-time speed requirements, and diverse weather conditions (e.g., rain, snow, and fog). Although many studies have addressed object detection for autonomous vehicles, it remains a challenging task, and improving its performance in diverse driving scenes is an active field. This thesis aims to develop novel methods for the detection and 3D localization of surrounding dynamic objects in driving scenes under different rainy weather conditions. Firstly, owing to the frequent occurrence of rain and its negative effect on object detection, a real-time lightweight deraining network is proposed that operates on single images. Rain streaks and accumulated rain streaks introduce distinct visual degradation effects to captured images, and the proposed network removes both through the progressive operation of two main stages: rain streak removal and rain streak accumulation removal. The rain streak removal stage is based on a Residual Network (ResNet) to maintain real-time performance without adding computational complexity, and recursive computation allows network parameters to be shared. Distant rain streaks accumulate and induce a distortion similar to fogging, so they can be mitigated in a way similar to defogging; this stage relies on a transmission-guided lightweight network (TGL-Net). The proposed deraining network was evaluated on five datasets with synthetic rain of different properties and two further datasets with real rainy scenes. Secondly, a novel sensory system is proposed that achieves real-time detection of multiple dynamic objects in driving scenes. The system uses a monocular camera and a 2D Light Detection and Ranging (LiDAR) sensor in a complementary fusion approach. YOLOv3, a baseline real-time object detection algorithm, is used to detect and classify objects in images captured by the camera, and detected objects are localized within the frames by bounding boxes. Since objects in a driving scene are dynamic and often occlude each other, an algorithm was developed to differentiate objects whose bounding boxes overlap. The locations of bounding boxes within frames (in pixels) are then converted into real-world angular coordinates, and the 2D LiDAR provides depth measurements while keeping computational requirements low to save resources for other autonomous-driving operations. A novel technique was developed and tested for processing 2D LiDAR measurements and mapping them to the corresponding bounding boxes. The detection accuracy of the proposed system was manually evaluated in different real-time scenarios. Finally, the effectiveness of the proposed deraining network was validated in terms of its impact on object detection on de-rained images. Compared with existing baseline deraining networks, the proposed network runs 2.23× faster than their average running time while achieving a 1.2× improvement on different synthetic datasets. Tests on the LiDAR measurements showed an average error of ±0.04 m in real driving scenes, and joint testing of deraining and object detection demonstrated that performing deraining ahead of detection yields a 1.45× improvement in object detection precision.
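    An illustrative sketch of the camera/2D-LiDAR fusion step described above, under a simple pinhole-camera assumption: the horizontal pixel span of each bounding box is converted to a bearing interval, and the LiDAR returns whose bearing falls inside that interval supply the detection's depth. The function names, the linear pixel-to-angle approximation, and the closest-return heuristic are assumptions, not the thesis implementation.

```python
def pixel_to_bearing(u, image_width, horizontal_fov_deg):
    """Approximate bearing (deg) of pixel column u; 0 deg is the optical axis."""
    return (u / image_width - 0.5) * horizontal_fov_deg

def box_depth_from_lidar(box, scan, image_width, horizontal_fov_deg=90.0):
    """box: (x_min, x_max) pixel columns of a bounding box.
    scan: iterable of (bearing_deg, range_m) pairs from a 2D LiDAR aligned
    with the camera. Returns the closest return inside the box, or None."""
    a_min = pixel_to_bearing(box[0], image_width, horizontal_fov_deg)
    a_max = pixel_to_bearing(box[1], image_width, horizontal_fov_deg)
    ranges = [r for a, r in scan if a_min <= a <= a_max]
    return min(ranges) if ranges else None
```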

    Intelligent Transportation Related Complex Systems and Sensors

    Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of transportation. They enable users to be better informed and to make safer, more coordinated, and smarter decisions about the use of transport networks. Current ITSs are complex systems made up of several components/sub-systems characterized by time-dependent interactions among themselves. Examples of these transportation-related complex systems include road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of the sensors/actuators used to capture and control the physical parameters of these systems, as well as to the quality of the data collected from them; ii) tackling complexity using simulation and analytical modelling techniques; and iii) applying optimization techniques to improve their performance. This volume includes twenty-four papers, which cover scientific concepts, frameworks, architectures, and various other ideas on the analytics, trends, and applications of transportation-related data.

    A Deep Learning Approach for Spatiotemporal-Data-Driven Traffic State Estimation

    The past decade witnessed rapid developments in traffic data sensing technologies in the form of roadside detector hardware, vehicle on-board units, and pedestrian wearable devices. The growing magnitude and complexity of the available traffic data have fueled the demand for data-driven models that can handle large-scale inputs. In the recent past, deep-learning-powered algorithms have become the state of the art for various data-driven applications. In this research, three applications of deep learning algorithms for traffic state estimation were investigated. Firstly, network-wide traffic parameter estimation was explored. An attention-based multi-encoder-decoder (Att-MED) neural network architecture was proposed and trained to predict freeway traffic speed up to 60 minutes ahead. Att-MED was designed to encode multiple traffic input sequences: short-term, daily, and weekly cyclic behavior. The proposed network produced an average prediction accuracy of 97.5%, which was superior to the compared baseline models. In addition to improving the output performance, the model's attention weights enhanced its interpretability. This research additionally explored the utility of low-penetration connected probe-vehicle data for network-wide traffic parameter estimation and prediction on freeways. A novel sequence-to-sequence recurrent graph network (Seq2Seq GCN-LSTM) was designed and trained to estimate and predict traffic volume and speed over a 60-minute future time horizon. The proposed methodology generated volume and speed predictions with an average accuracy of 90.5% and 96.6%, respectively, outperforming the investigated baseline models, and it demonstrated robustness against perturbations caused by the probe vehicle fleet's low penetration rate. Secondly, the application of deep learning to road weather detection using roadside CCTVs was investigated. A Vision Transformer (ViT) was trained for simultaneous rain and road surface condition classification. Next, a Spatial Self-Attention (SSA) network was designed to consume the individual detection results, interpret the spatial context, and modify the collective detection output accordingly. The sequential module improved the accuracy of the stand-alone Vision Transformer as measured by the F1-score, raising the total accuracy for the two tasks to 96.71% and 98.07%, respectively. Thirdly, a real-time video-based traffic incident detection algorithm was developed to enhance the utilization of the existing roadside CCTV network. The methodology automatically identifies the main road regions in video scenes and inspects static vehicles around those areas. The developed algorithm was evaluated using a dataset of roadside videos; incidents were detected with 85.71% sensitivity and an 11.10% false alarm rate, with an average delay of 27.53 seconds. In general, the research proposed in this dissertation maximizes the utility of pre-existing traffic infrastructure and emerging probe traffic data, and it demonstrates the capability of deep learning algorithms to model complex spatiotemporal traffic data. This research illustrates that advances in the deep learning field continue to have high applicability potential in the traffic state estimation domain.
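    A minimal sketch, loosely following the multi-encoder-decoder idea summarized above rather than the dissertation's actual Att-MED implementation: three GRU encoders consume the short-term, daily-cycle, and weekly-cycle speed sequences, a small attention layer weighs their summaries (weights of this kind are what aid interpretability), and a GRU decoder rolls the forecast out over the prediction horizon. Layer sizes, tensor shapes, and names are assumptions.

```python
import torch
import torch.nn as nn

class MultiEncoderDecoder(nn.Module):
    def __init__(self, n_sensors, hidden=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        # One encoder per input sequence type: short-term, daily, weekly.
        self.encoders = nn.ModuleList(nn.GRU(n_sensors, hidden, batch_first=True)
                                      for _ in range(3))
        self.attn = nn.Linear(hidden, 1)      # scores each encoder summary
        self.decoder = nn.GRUCell(n_sensors, hidden)
        self.out = nn.Linear(hidden, n_sensors)

    def forward(self, recent, daily, weekly):
        # Encode each input sequence and keep its final hidden state.
        summaries = torch.stack(
            [enc(seq)[1].squeeze(0) for enc, seq in
             zip(self.encoders, (recent, daily, weekly))], dim=1)   # (B, 3, H)
        # Attention over the three summaries.
        weights = torch.softmax(self.attn(summaries), dim=1)        # (B, 3, 1)
        h = (weights * summaries).sum(dim=1)                        # (B, H)
        # Autoregressive decoding of the future speed profile.
        step, outputs = recent[:, -1, :], []
        for _ in range(self.horizon):
            h = self.decoder(step, h)
            step = self.out(h)
            outputs.append(step)
        return torch.stack(outputs, dim=1)          # (B, horizon, n_sensors)
```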