16 research outputs found

    Detection of Unfocused Raindrops on a Windscreen using Low Level Image Processing

    No full text
    In a scene, rain produces a complex set of visual effects. Such effects can cause failures in outdoor vision-based systems, with potentially serious consequences for safety applications. For these applications, rain detection would be useful to adjust their reliability. In this paper, we introduce the largely unexplored problem of unfocused raindrops, and we present a first approach that detects them on a transparent screen in real time using a spatio-temporal method. We successfully tested our algorithm for Intelligent Transport Systems (ITS) using an on-board camera, detecting the raindrops on the windscreen. Our algorithm differs from others in that it does not require the focus to be set on the windscreen; it can therefore run on the same camera sensor as the other vision-based algorithms.
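The spatio-temporal idea can be illustrated with a toy sketch (not the paper's actual algorithm): unfocused raindrops on the windscreen stay roughly fixed while the driven scene moves, so pixels with unusually low temporal variance in an otherwise changing image stack are drop candidates. The function name and thresholds below are illustrative assumptions.

```python
import numpy as np

def static_blob_mask(frames, scene_motion_thresh=10.0, static_thresh=3.0):
    """Flag pixels that stay nearly constant while the scene moves.

    frames: (T, H, W) grayscale stack. Unfocused raindrops on the
    windscreen are roughly static, so in a moving scene their temporal
    standard deviation is low. (Illustrative thresholds, not tuned.)
    """
    stack = np.asarray(frames, dtype=np.float64)
    per_pixel_std = stack.std(axis=0)          # (H, W) temporal std
    if per_pixel_std.mean() < scene_motion_thresh:
        # Scene barely moves: static pixels are not informative.
        return np.zeros(per_pixel_std.shape, dtype=bool)
    return per_pixel_std < static_thresh
```

A real detector would additionally group the flagged pixels into blobs and check their size and shape; this sketch only shows the temporal cue.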

    DETECTION OF UNFOCUSED RAINDROPS ON CAR WINDSCREEN: COMPARATIVE ANALYSIS USING BACKGROUND SUBTRACTION AND WATERSHED ALGORITHMS

    Get PDF
    The use of ADAS in high-end cars has been prevalent over the past decade. Electronic control and assistance in cars has proven to be a major feature for passenger safety, saving lives and preventing fatalities. However, such systems can be trusted only in clear weather conditions, which so far has been the main limitation on the usefulness of ADAS. Current research focuses on strengthening ADAS in rainy climatic conditions. This paper puts forth an idea for detecting raindrops so that ADAS can extend its functionality in rainy conditions, for example to control the speed of over-speeding cars. The method runs Background Subtraction and the Watershed algorithm on an image database to produce numerical data and to measure the performance of both methods. This data can be used to improve ADAS performance in rainy conditions.
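As a rough illustration of the background-subtraction half of the comparison (the watershed half would need a segmentation library such as OpenCV or scikit-image), here is a median-background difference sketch; the function name and threshold are assumptions, not the paper's code:

```python
import numpy as np

def raindrop_candidates(frames, current, diff_thresh=25.0):
    """Median-background subtraction as a hypothetical raindrop detector.

    frames: (T, H, W) history stack; current: (H, W) frame.
    Returns a boolean mask of pixels that deviate from the temporal
    median background. A watershed step would additionally split the
    mask into individual drops.
    """
    background = np.median(np.asarray(frames, dtype=np.float64), axis=0)
    diff = np.abs(np.asarray(current, dtype=np.float64) - background)
    return diff > diff_thresh
```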

    Real-Time Raindrop Detection Based on Deep Learning Algorithm

    Get PDF
    The goal of this research is to develop an in-vehicle computerized system able to detect raindrops on the windshield, warn the driver, and start the windscreen wiper so that the computer vision pipeline does not acquire blurred images. This feature is important for developing Advanced Driver Assistance Systems based on computer vision: the system should be able to detect specific scenarios that prevent the ADAS computer vision features from working properly. Raindrop detection will allow a more reliable Advanced Driver Assistance System.

    Automatically generated interactive weather reports based on webcam images

    Get PDF
    Most weather reports are based on data from dedicated weather stations, satellite images, manual measurements, or forecasts. In this paper, a system that automatically generates weather reports from the contents of webcam images is proposed. Thousands of openly available webcams on the Internet provide images in real time, and a webcam image can reveal much about the weather conditions at a particular site. This study demonstrates a strategy for automatically classifying a webcam scene as cloudy, partially cloudy, sunny, foggy, or night. The system has been run for several months, collecting 60 GB of image data from webcams across the world. The reports are available through an interactive web-based interface. A selection of benchmark images was manually tagged to assess the accuracy of the weather classification, which reached a success rate of 67.3%.
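The abstract does not describe the classifier itself, so purely as a stand-in, a naive heuristic shows what global image statistics can separate for the easiest of these classes; the function, features, and thresholds are all illustrative assumptions:

```python
import numpy as np

def naive_scene_label(rgb):
    """Toy stand-in for a webcam weather classifier.

    rgb: (H, W, 3) float array in [0, 1]. This illustrative heuristic
    only separates the easiest classes by global brightness and by the
    color spread between channels (gray overcast scenes have little).
    """
    brightness = rgb.mean()
    spread = rgb.max(axis=2).mean() - rgb.min(axis=2).mean()
    if brightness < 0.15:
        return "night"
    if spread < 0.05:
        return "cloudy"
    return "sunny"
```

A real system would use far richer features (and the paper's 67.3% benchmark suggests the task is much harder than this sketch implies).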

    A complete system to determine the speed limit by fusing a GIS and a camera

    Get PDF
    Determining the speed limit on a road is a complex task, based both on the Highway Code and on the detection of temporary speed limits. In our system, these two aspects are managed by a GIS (Geographical Information System) and a camera, respectively. The vision-based system detects road signs, subsigns, and lane markings in order to filter the signs that apply. The two sources of information are finally fused using Belief Theory to select the correct speed limit. The performance of a navigation-based system is thereby increased by 19%.
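Belief Theory fusion of this kind typically relies on Dempster's rule of combination. A minimal sketch with hypothetical GIS and camera mass functions over candidate speed limits (the paper's actual masses and frame of discernment are not given here):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses (e.g. candidate speed
    limits) to belief masses summing to 1. Returns the combined mass
    function, renormalized by the conflict between the sources.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to incompatible sets
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical example: the GIS favors 90 km/h, the camera saw a 70 sign
# but is unsure; singleton sets are certain readings, pairs are doubt.
gis = {frozenset({90}): 0.8, frozenset({70, 90}): 0.2}
cam = {frozenset({70}): 0.6, frozenset({70, 90}): 0.4}
fused = dempster_combine(gis, cam)
```

After combination, the speed limit with the highest fused mass (or belief) would be selected.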

    Influence of Rain on Vision-Based Algorithms in the Automotive Domain

    Full text link
    The automotive domain is highly regulated, with stringent requirements that characterize automotive systems' performance and safety. Automotive applications are required to operate under all driving conditions and to meet high safety standards, so vision-based systems in this domain must operate in all weather conditions, favorable or adverse. Rain is one of the most common adverse weather conditions that reduce the quality of the images used by vision-based algorithms. Rain can be observed in an image in two forms, falling rain streaks or adherent raindrops; both corrupt the input images and degrade the performance of vision-based algorithms. This dissertation describes our work studying the effect of rain on image quality and on the target vision systems that use those images as their main input. To study falling rain, we developed a framework for simulating falling rain streaks, as well as a de-raining algorithm that detects and removes rain streaks from images. We studied the relation between image degradation due to adherent raindrops and the performance of the target vision algorithm, and provided quantitative metrics to describe that relation. We developed an adherent-raindrop simulator that generates synthetic rained images by adding generated raindrops to rain-free images. We used this simulator to generate rained image datasets, which we used to train some vision algorithms and to evaluate the feasibility of using transfer learning to improve the performance of DNN-based vision algorithms under rainy conditions. (Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/170924/1/Yazan Hamzeh final dissertation.pdf)
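The compositing step of such an adherent-raindrop simulator can be sketched very crudely: an unfocused drop mostly low-passes the scene behind it, so as a toy stand-in we alpha-blend each pixel inside a disc toward the disc's mean intensity. All names and parameters are illustrative; the dissertation's simulator is far more sophisticated.

```python
import numpy as np

def add_synthetic_raindrop(image, cy, cx, radius, alpha=0.6):
    """Composite one crude adherent-raindrop blob onto a grayscale image.

    Pixels inside the disc centered at (cy, cx) are blended toward the
    disc's mean intensity, imitating the local defocus a drop causes.
    """
    out = np.asarray(image, dtype=np.float64).copy()
    yy, xx = np.ogrid[:out.shape[0], :out.shape[1]]
    disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    out[disc] = (1 - alpha) * out[disc] + alpha * out[disc].mean()
    return out
```

Repeating this at random positions and radii over a rain-free dataset would yield synthetic "rained" training images of the kind the dissertation describes.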

    Vision for Scene Understanding

    Get PDF
    This manuscript covers my recent research on vision algorithms for scene understanding, articulated in three research axes: 3D vision, weakly supervised vision, and vision and physics. At the core of the most recent works are weakly supervised learning and physics-embodied vision, which address the shortcomings of supervised learning and its need for large amounts of data. The use of more physically grounded algorithms is evidently beneficial, as both robots and humans naturally evolve in a 3D physical world. Accounting for physics knowledge also reflects important cues about the lighting and weather conditions of the scene, which are central to my work. Physics-informed machine learning is beneficial not only for increased interpretability but also to compensate for label and data scarcity.

    A Deep Learning approach for monitoring severe rainfall in urban catchments using consumer cameras. Models development and deployment on a case study in Matera (Italy)

    Get PDF
    In the last 50 years, flooding has been the most frequent and widespread natural disaster globally. Extreme precipitation events stemming from climate change could alter the hydro-geological regime, resulting in increased flood risk. Near-real-time precipitation monitoring at the local scale is essential for flood risk mitigation in urban and suburban areas, due to their high vulnerability. Presently, most rainfall data is obtained from ground-based measurements or remote sensing, which provide limited information in terms of temporal or spatial resolution; further problems stem from high costs. Moreover, rain gauges are unevenly spread and usually placed away from urban centers. In this context, a large potential lies in the use of innovative techniques to develop low-cost monitoring systems. Despite the diversity of purposes, methods, and epistemological fields, the literature on the visual effects of rain supports the idea of camera-based rain sensors but tends to be device-specific. The present thesis investigates the use of easily available photographic devices as rain detectors and gauges, to develop a dense network of low-cost rainfall sensors that supports traditional methods with an expeditious solution embeddable into smart devices. As opposed to existing works, the study focuses on maximizing the number of image sources (smartphones, general-purpose surveillance cameras, dashboard cameras, webcams, digital cameras, etc.), including cases where it is not possible to adjust the camera parameters or to obtain shots in sequences or videos. Using a Deep Learning approach, rainfall characterization is achieved through the analysis of the perceptual aspects that determine whether and how a photograph represents a rainy condition.
The first scenario of interest for supervised learning was binary classification: the binary output (presence or absence of rain) allows detection of precipitation, with the cameras acting as rain detectors. The second scenario was multi-class classification: the multi-class output describes a range of quasi-instantaneous rainfall intensities, with the cameras acting as rain estimators. Using Transfer Learning with Convolutional Neural Networks, the developed models were compiled, trained, validated, and tested. Preparing the classifiers included assembling a suitable dataset covering unconstrained, verisimilar settings: open data; data owned by the National Research Institute for Earth Science and Disaster Prevention (NIED), namely dashboard cameras in Japan coupled with high-precision multi-parameter radar data; and experimental activities conducted in the NIED Large Scale Rainfall Simulator. The outcomes were applied to a real-world scenario through experimentation with a pre-existing surveillance camera using 5G connectivity provided by Telecom Italia S.p.A. in the city of Matera (Italy). The analysis unfolded on several levels, providing an overview of generic issues in the urban flood risk paradigm and of territorial questions specific to the case study, including the context, the role of rainfall from driving the millennial urban evolution to determining present criticalities, and the components of a Web prototype for flood risk communication at the local scale. The results and the model deployment suggest that low-cost technologies and local capacities can help retrieve rainfall information for flood early warning systems based on the identification of a significant meteorological state. The binary model reached accuracy and F1 score values of 85.28% and 0.86 on the test set, and 83.35% and 0.82 in deployment.
The multi-class model reached test average accuracy and macro-averaged F1 score values of 77.71% and 0.73 for the 6-class classifier, and 78.05% and 0.81 for the 5-class classifier. The best performances were obtained in heavy-rainfall and no-rain conditions, whereas mispredictions related to less severe precipitation. The proposed method has limited operational requirements and can be implemented easily and quickly in real use cases, exploiting pre-existing devices with a parsimonious use of economic and computational resources. The classification can be performed on single photographs taken in disparate conditions by commonly used acquisition devices, i.e. by static or moving cameras without adjusted parameters. This approach is especially useful in urban areas where measurement methods such as rain gauges encounter installation difficulties or operational limitations, or in contexts where no remote sensing data is available. The system does not suit scenes that are misleading even for human visual perception, and the approximations inherent in the output are acknowledged. Additional data may be gathered to address apparent gaps and to improve the accuracy of the precipitation intensity prediction. Future research might explore integration with further experiments and crowdsourced data, to promote communication, participation, and dialogue among stakeholders and to increase public awareness, emergency response, and civic engagement through the smart-community idea.
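Accuracy and F1 figures of the kind reported above follow directly from binary confusion-matrix counts; a minimal sketch with hypothetical counts (not the thesis's actual confusion matrix):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy and F1 score from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Hypothetical counts for a rain / no-rain test set of 1000 images.
acc, f1 = binary_metrics(tp=430, fp=70, fn=62, tn=438)
```

For the multi-class case, the macro-averaged F1 quoted in the abstract would instead average a per-class F1 computed this way, one class at a time.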

    Wavelet Transforms for Rain and Snow Classification with Commercial Microwave Links: Evaluation Using Real-World Data

    Get PDF
    The need for improved precipitation estimation has prompted the exploration of opportunistic alternatives such as commercial microwave links (CML), particularly in areas with poor coverage of weather radars and rain gauges. Rainfall-induced attenuation in the microwave signal is known to allow accurate determination of rainfall intensity; detecting other types of precipitation, such as dry snow, however, remains a challenge. This study evaluates the feasibility of using the wavelet transform combined with a random forest classifier to identify rain and snow events. Real-world signal attenuation data from telecommunication operators and precipitation data from nearby disdrometers in Norway were used to develop the proposed classification methods. The rain classifier was based on data from June 2022, while the snow classifier was evaluated on data from December 2021. The operating frequency of the CMLs used in this study was between 30 and 40 GHz. The rain detection algorithm performed similarly to other wet-dry classification methods, with a mean Matthews correlation coefficient (MCC) of 36% across 52 CMLs. The snow detection algorithm, however, showed no correlation between the signal attenuation of 41 CMLs and dry snowfall. In conclusion, wavelet transforms effectively extract useful information from signal attenuation for rain classification but are unsuitable for detecting snow. The study recommends testing commercial microwave links with higher operating frequencies than those used here, combined with temperature data, to improve the prospects of dry snow detection.
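The wavelet features driving such a classifier can be illustrated with a single level of the Haar transform, the simplest discrete wavelet; the function below is a generic sketch, not the study's feature pipeline:

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform of an even-length signal.

    Returns (approximation, detail) coefficient arrays. Sharp changes in
    a CML attenuation series (e.g. the onset of a rain event) show up as
    large detail coefficients, which a classifier can exploit.
    """
    x = np.asarray(signal, dtype=np.float64)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # pairwise averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # pairwise differences
    return a, d
```

In a full pipeline, statistics of the detail coefficients across several decomposition levels would be fed to the random forest as features.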

    Active Discriminative Dictionary Learning for Weather Recognition

    Get PDF
    Weather recognition based on outdoor images is a new and challenging subject, widely required in many fields. This paper presents a novel framework for recognizing different weather conditions. Compared with other algorithms, the proposed method possesses the following advantages. First, it extracts both visual appearance features of the sky region and physical characteristic features of the non-sky region, so the extracted features are more comprehensive than in existing methods that consider only the sky region. Second, unlike methods that use traditional classifiers (e.g., SVM and K-NN), we use discriminative dictionary learning as the classification model, which addresses the limitations of previous works. Moreover, an active learning procedure is introduced into the dictionary learning to avoid requiring a large number of labeled samples to train the classification model while still achieving good weather recognition performance. Experiments and comparisons on two datasets verify the effectiveness of the proposed method.
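The classification side of dictionary learning can be sketched with a minimum-residual decision rule: assign a sample to the class whose sub-dictionary reconstructs it best. Here a plain least-squares code stands in for the sparse code a real discriminative dictionary learner would produce; the data and labels are hypothetical.

```python
import numpy as np

def classify_by_residual(x, class_dicts):
    """Assign x to the class whose sub-dictionary reconstructs it best.

    class_dicts: {label: (n_features, n_atoms) array}. For each class,
    code x against its dictionary and measure the reconstruction
    residual; the label with the smallest residual wins.
    """
    residuals = {}
    for label, D in class_dicts.items():
        code, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = np.linalg.norm(x - D @ code)
    return min(residuals, key=residuals.get)

# Hypothetical two-class toy: each class dictionary has a single atom.
dicts = {"sunny": np.array([[1.0], [0.0], [0.0]]),
         "rainy": np.array([[0.0], [1.0], [0.0]])}
label = classify_by_residual(np.array([0.9, 0.1, 0.0]), dicts)
```

The paper's contribution lies in how the dictionaries are learned (discriminatively, with active sample selection); the decision rule above is only the final step.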