8 research outputs found

    Detection of Unfocused Raindrops on a Windscreen using Low Level Image Processing

    No full text
    In a scene, rain produces a complex set of visual effects. Such effects may cause failures in outdoor vision-based systems, with potentially serious consequences for safety applications. For the sake of these applications, rain detection would be useful for adjusting their reliability. In this paper, we introduce the largely unexplored problem of unfocused raindrops. We then present a first approach for detecting these unfocused raindrops on a transparent screen, using a spatio-temporal approach to achieve real-time detection. We successfully tested our algorithm in an Intelligent Transport Systems (ITS) context, detecting raindrops on the windscreen with an on-board camera. Our algorithm differs from others in that it does not require the camera to be focused on the windscreen; it can therefore run on the same camera sensor as other vision-based algorithms.
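
    The paper's exact algorithm is not reproduced in the abstract, but one plausible spatio-temporal scheme can be sketched: adherent, unfocused drops stick to the glass while the rest of the scene moves, so pixels covered by a drop show low temporal variance. The function name, thresholds, and window size below are illustrative assumptions, not the authors' values.

```python
# Minimal sketch of a spatio-temporal detector for unfocused adherent raindrops.
# Assumption: the vehicle is moving, so scene pixels vary over time while
# drop-occluded pixels stay comparatively stable.
import cv2
import numpy as np

def detect_unfocused_raindrops(frames, var_thresh=15.0, blur_ksize=11):
    """frames: list of grayscale images (uint8) from an on-board camera."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    temporal_var = stack.var(axis=0)          # moving scene -> high variance
    # Unfocused drops are spatially smooth: suppress sharp structure first.
    smooth = cv2.GaussianBlur(temporal_var, (blur_ksize, blur_ksize), 0)
    # Drops occlude the same pixels in every frame -> low temporal variance.
    mask = (smooth < var_thresh).astype(np.uint8) * 255
    # Remove small speckles with a morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Usage: feed ~10 consecutive frames; white regions are raindrop candidates.
```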

    Real-Time Raindrop Detection Based on Deep Learning Algorithm

    Get PDF
    The goal of this research is to develop an in-vehicle computerized system able to detect raindrops on the windshield, warn the driver, and start the windscreen wiper so that the computer vision system does not acquire blurred images. This feature is important for developing Advanced Driver Assistance Systems (ADAS) based on computer vision: the system should be able to detect specific scenarios that do not allow the ADAS computer vision features to work properly. Raindrop detection will thus allow a more reliable ADAS.
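
    The abstract does not specify the network architecture, so the following is only a hedged sketch of the general idea: a small binary CNN that classifies a windshield frame as "raindrops present" vs. "clear", whose output an in-vehicle system could use to trigger the wiper. All names and the trigger threshold are illustrative assumptions.

```python
# Illustrative sketch (not the paper's model): a compact CNN classifier
# for raindrop presence, plus a simple wiper-trigger rule on its output.
import torch
import torch.nn as nn

class RaindropNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # class 0 = clear, 1 = raindrops

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def should_start_wiper(model, frame_tensor, threshold=0.8):
    """frame_tensor: (1, 3, H, W) normalized image. Returns True when the
    predicted raindrop probability exceeds the (assumed) trigger threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(frame_tensor), dim=1)
    return probs[0, 1].item() > threshold
```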

    Automatically generated interactive weather reports based on webcam images

    Get PDF
    Most weather reports are based on data from dedicated weather stations, satellite images, manual measurements, or forecasts. In this paper, a system that automatically generates weather reports from the contents of webcam images is proposed. There are thousands of openly available webcams on the Internet that provide images in real time. A webcam image can reveal much about the weather conditions at a particular site, and this study demonstrates a strategy for automatically classifying a webcam scene as cloudy, partially cloudy, sunny, foggy, or night. The system has been run for several months, collecting 60 GB of image data from webcams across the world. The reports are available through an interactive web-based interface. A selection of benchmark images was manually tagged to assess the accuracy of the weather classification, which reached a success rate of 67.3%.
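
    As a toy illustration of how global image statistics can separate these classes (the paper's actual features, classifier, and thresholds are not given in the abstract and are not reproduced here), consider the following heuristic sketch:

```python
# Toy heuristic, not the paper's method: classify a webcam frame from
# simple global statistics. All thresholds are illustrative assumptions.
import cv2

def classify_webcam_frame(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    brightness = hsv[..., 2].mean()   # V channel: overall light level
    saturation = hsv[..., 1].mean()   # S channel: color vividness
    contrast = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).std()
    if brightness < 40:
        return "night"
    if contrast < 20 and saturation < 40:
        return "foggy"                # low contrast, washed-out colors
    if saturation > 80:
        return "sunny"                # vivid colors under direct sunlight
    return "cloudy"

# Distinguishing "partially cloudy" would additionally require sky-region
# segmentation, e.g. the fraction of blue vs. gray pixels in the upper image.
```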

    Active Discriminative Dictionary Learning for Weather Recognition

    Get PDF
    Weather recognition based on outdoor images is a new and challenging problem that is required in many fields. This paper presents a novel framework for recognizing different weather conditions. Compared with other algorithms, the proposed method has the following advantages. First, our method extracts both visual appearance features of the sky region and physical characteristics of the non-sky region in images; the extracted features are thus more comprehensive than those of existing methods that consider only the features of the sky region. Second, unlike methods that use traditional classifiers (e.g., SVM and K-NN), we use discriminative dictionary learning as the classification model for weather, which addresses the limitations of previous works. Moreover, an active learning procedure is introduced into dictionary learning so that a large number of labeled samples is not required to train a well-performing classification model. Experiments and comparisons are performed on two datasets to verify the effectiveness of the proposed method.
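
    One common baseline for dictionary-learning-based classification (a hedged sketch, not this paper's specific model or its active-learning loop) learns one dictionary per weather class and assigns a sample to the class whose dictionary reconstructs it with the smallest residual:

```python
# Sketch of residual-based classification with per-class dictionaries,
# using scikit-learn. Atom counts and sparsity levels are assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

def train_class_dictionaries(features_by_class, n_atoms=50):
    """features_by_class: dict label -> (n_samples, n_features) array."""
    dicts = {}
    for label, X in features_by_class.items():
        dl = DictionaryLearning(n_components=n_atoms,
                                transform_algorithm='omp', random_state=0)
        dl.fit(X)
        dicts[label] = dl.components_          # (n_atoms, n_features)
    return dicts

def classify(x, dicts, n_nonzero=5):
    """Assign feature vector x to the class whose dictionary fits it best."""
    best_label, best_err = None, np.inf
    for label, D in dicts.items():
        coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                            transform_n_nonzero_coefs=n_nonzero)
        code = coder.transform(x.reshape(1, -1))
        err = np.linalg.norm(x - code @ D)     # reconstruction residual
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```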

    Crowdsourcing Methods for Data Collection in Geophysics: State of the Art, Issues, and Future Directions

    Get PDF
    Data are essential in all areas of geophysics. They are used to better understand and manage systems, either directly or via models. Given the complexity and spatiotemporal variability of geophysical systems (e.g., precipitation), a lack of sufficient data is a perennial problem, which is exacerbated by various drivers, such as climate change and urbanization. In recent years, crowdsourcing has become increasingly prominent as a means of supplementing data obtained from more traditional sources, particularly due to its relatively low implementation cost and ability to increase the spatial and/or temporal resolution of data significantly. Given the proliferation of different crowdsourcing methods in geophysics and the promise they have shown, it is timely to assess the state of the art in this field, to identify potential issues, and to map out a way forward. In this paper, crowdsourcing-based data acquisition methods that have been used in seven domains of geophysics, namely weather, precipitation, air pollution, geography, ecology, surface water, and natural hazard management, are discussed based on a review of 162 papers. In addition, a novel framework for categorizing these methods is introduced and applied to the methods used in the seven domains of geophysics considered in this review. This paper also features a review of 93 papers dealing with issues that are common to data acquisition methods in different domains of geophysics, including the management of crowdsourcing projects, data quality, data processing, and data privacy. In each of these areas, the current status is discussed and challenges and future directions are outlined.

    Influence of Rain on Vision-Based Algorithms in the Automotive Domain

    Full text link
    The automotive domain is a highly regulated domain with stringent requirements that characterize automotive systems’ performance and safety. Automotive applications are required to operate under all driving conditions and to meet high safety standards. Vision-based systems in the automotive domain are accordingly required to operate in all weather conditions, favorable or adverse. Rain is one of the most common types of adverse weather and reduces the quality of the images used by vision-based algorithms. Rain can be observed in an image in two forms: falling rain streaks or adherent raindrops. Both forms corrupt the input images and degrade the performance of vision-based algorithms. This dissertation describes our work studying the effect of rain on image quality and on the target vision systems that use these images as their main input. To study falling rain, we developed a framework for simulating falling rain streaks. We also developed a de-raining algorithm that detects and removes rain streaks from images. We studied the relation between image degradation due to adherent raindrops and the performance of the target vision algorithm, and provided quantitative metrics to describe this relation. We developed an adherent raindrop simulator that generates synthetic rained images by adding generated raindrops to rain-free images. We used this simulator to generate rained image datasets, which we used to train several vision algorithms and to evaluate the feasibility of using transfer learning to improve the performance of DNN-based vision algorithms under rainy conditions. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/170924/1/Yazan Hamzeh final dissertation.pdf
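
    The dissertation's simulator models drop geometry and refraction in far more detail than the abstract reveals; the following is only a simplified sketch of the underlying idea of compositing synthetic adherent drops onto clean images, with drop counts, sizes, and brightness offsets as illustrative assumptions:

```python
# Simplified adherent-raindrop overlay: blur and slightly brighten random
# circular regions of a clean image to mimic out-of-focus drops on the glass.
import cv2
import numpy as np

def add_synthetic_raindrops(image, n_drops=20, rng=None):
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape[:2]
    # Precompute a heavily blurred, slightly brightened version of the frame.
    blurred = cv2.GaussianBlur(image, (31, 31), 0)
    region = cv2.convertScaleAbs(blurred, alpha=1.0, beta=12)
    for _ in range(n_drops):
        cx, cy = int(rng.integers(0, w)), int(rng.integers(0, h))
        r = int(rng.integers(8, 30))
        mask = np.zeros((h, w), np.uint8)
        cv2.circle(mask, (cx, cy), r, 255, -1)
        out[mask > 0] = region[mask > 0]   # paste the "drop" appearance
    return out

# Such (clean, rained) pairs can then be used to fine-tune a detector via
# transfer learning, as the dissertation evaluates.
```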

    A Robust Object Detection System for Driverless Vehicles through Sensor Fusion and Artificial Intelligence Techniques

    Get PDF
    Since the early 1990s, various research domains have been concerned with the concept of autonomous driving, leading to the widespread implementation of numerous advanced driver assistance features. However, fully automated vehicles have not yet been introduced to the market. The process of autonomous driving can be outlined through the following stages: environment perception, ego-vehicle localization, trajectory estimation, path planning, and vehicle control. Environment perception is partially based on computer vision algorithms that can detect and track surrounding objects. Object detection by autonomous vehicles is considered challenging for several reasons, such as the presence of multiple dynamic objects in the same scene, interaction between objects, real-time speed requirements, and diverse weather conditions (e.g., rain, snow, fog). Although many studies have addressed object detection for autonomous vehicles, it remains a challenging task, and improving the performance of object detection in diverse driving scenes is an ongoing field of research. This thesis aims to develop novel methods for the detection and 3D localization of surrounding dynamic objects in driving scenes under different rainy weather conditions. First, owing to the frequent occurrence of rain and its negative effect on object detection, a real-time lightweight deraining network is proposed; it operates on single images independently. Rain streaks and the accumulation of rain streaks introduce distinct visual degradation effects to captured images. The proposed deraining network effectively removes both rain streaks and accumulated rain streaks from images through the progressive operation of two main stages: rain streak removal and rain streak accumulation removal. The rain streak removal stage is based on a Residual Network (ResNet) to maintain real-time performance and avoid adding computational complexity; furthermore, recursive computations share network parameters across iterations. Meanwhile, distant rain streaks accumulate and induce a distortion similar to fogging, which can therefore be mitigated in a way similar to defogging; this stage relies on a transmission-guided lightweight network (TGL-Net). The proposed deraining network was evaluated on five datasets with synthetic rain of different properties and on two datasets with real rainy scenes. Second, an emphasis has been put on proposing a novel sensory system that achieves real-time detection of multiple dynamic objects in driving scenes. The proposed sensory system combines a monocular camera and a 2D Light Detection and Ranging (LiDAR) sensor in a complementary fusion approach. YOLOv3, a baseline real-time object detection algorithm, has been used to detect and classify objects in images captured by the camera; detected objects are surrounded by bounding boxes that localize them within the frames. Since objects in a driving scene are dynamic and often occlude each other, an algorithm has been developed to differentiate objects whose bounding boxes overlap. Moreover, the locations of bounding boxes within frames (in pixels) are converted into real-world angular coordinates. A 2D LiDAR was used to obtain depth measurements while maintaining low computational requirements, in order to save resources for other autonomous-driving operations. A novel technique has been developed and tested for processing 2D LiDAR measurements and mapping them to the corresponding bounding boxes. The detection accuracy of the proposed system was manually evaluated in different real-time scenarios. Finally, the effectiveness of the proposed deraining network was validated in terms of its impact on object detection on de-rained images. Results were compared to existing baseline deraining networks and showed that the running time of the proposed network is 2.23× faster than the average running time of the baselines, while achieving a 1.2× improvement when tested on different synthetic datasets. Moreover, tests on the LiDAR measurements showed an average error of ±0.04 m in real driving scenes. Also, deraining and object detection were jointly tested, and performing deraining ahead of object detection yielded a 1.45× improvement in object detection precision.
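
    The thesis' calibration and fusion pipeline is more involved than the abstract describes; a hedged sketch of the core pixel-to-angle mapping it mentions might look as follows, assuming a pinhole camera with a known horizontal field of view and a 2D LiDAR whose 0° beam is aligned with the camera's optical axis (both assumptions, along with all names and parameters):

```python
# Sketch: associate a camera bounding box with 2D LiDAR beams by converting
# pixel columns to azimuth angles, then take the median range inside the box.
import numpy as np

def bbox_to_angles(bbox, image_width, hfov_deg=90.0):
    """bbox: (x_min, y_min, x_max, y_max) in pixels.
    Returns (left, right) azimuth angles in degrees, camera-centered."""
    x_min, _, x_max, _ = bbox
    def pixel_to_angle(x):
        # Linear approximation: pixel column -> azimuth in [-hfov/2, +hfov/2].
        return (x / image_width - 0.5) * hfov_deg
    return pixel_to_angle(x_min), pixel_to_angle(x_max)

def object_depth(lidar_scan, bbox, image_width, hfov_deg=90.0):
    """lidar_scan: iterable of (angle_deg, range_m) pairs from a 2D LiDAR.
    Returns the median range of the beams falling inside the bounding box."""
    left, right = bbox_to_angles(bbox, image_width, hfov_deg)
    ranges = [r for a, r in lidar_scan if left <= a <= right]
    return float(np.median(ranges)) if ranges else None
```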

    Image Restoration in Fog and Rain Conditions (Applications to Driver Assistance)

    Get PDF
    Advanced Driver Assistance Systems (ADAS) are designed to assist the driver and, in particular, to improve road safety. For this purpose, various sensors are typically embedded in vehicles, for example to alert the driver of imminent danger on the road. Cameras are a cost-effective sensor choice, and many camera-based ADAS are being created. Unfortunately, the performance of such systems degrades drastically in adverse weather conditions, especially fog or rain, which may force the systems to be turned off temporarily to avoid erroneous results. Yet it is precisely in these difficult circumstances that the driver would most need assistance. Once the weather conditions have been detected and characterized by embedded vision, we propose in this thesis to restore the degraded image in order to provide a better signal to the ADAS and thus extend the operating range of these systems. In the state of the art, there are several approaches to image restoration, some dedicated to our fog and rain problems and others more general: denoising, contrast or color enhancement, inpainting, etc. We propose in this work to combine the two families of approaches. In the case of fog, our contribution is to take advantage of both physical and signal-based approaches to propose a new automatic method adapted to road images. We evaluated our method using ad hoc criteria (ROC curves, MSE, 5% visible contrast, evaluation on ADAS) applied to databases of synthetic and real images. In the case of rain, once the drops present on the windshield are detected, we reconstruct the hidden parts of the image using an inpainting method based on partial differential equations. The method's parameters were optimized on road images. Finally, we show that this approach supports three types of applications: preprocessing, processing, and assistance. In each family, we have proposed and evaluated a specific application: traffic sign detection in fog; detection of free space in fog; and display of the restored image to the driver.
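
    As an illustrative stand-in for the thesis' PDE-based inpainting step (its own method and tuned parameters are not reproduced here), OpenCV's Navier-Stokes inpainting is likewise PDE-driven and shows the shape of the operation:

```python
# Sketch: fill raindrop-occluded regions with PDE-based inpainting.
# The mask would come from a raindrop detector; the radius is an assumption.
import cv2

def restore_raindrop_regions(image_bgr, drop_mask, radius=5):
    """image_bgr: degraded frame (uint8, 3 channels); drop_mask: uint8 mask,
    255 where detected raindrops occlude the windshield."""
    return cv2.inpaint(image_bgr, drop_mask, radius, cv2.INPAINT_NS)
```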