278 research outputs found

    Rain Rate Estimation with SAR using NEXRAD measurements with Convolutional Neural Networks

    Remote sensing of rainfall events is critical for both operational and scientific needs, including weather forecasting, extreme flood mitigation, and water cycle monitoring. Ground-based weather radars, such as NOAA's Next-Generation Radar (NEXRAD), provide reflectivity and precipitation measurements of rainfall events. However, the observation range of such radars is limited to a few hundred kilometers, prompting the exploration of other remote sensing methods, particularly over the open ocean, which represents a large area not covered by land-based radars. For decades, C-band SAR imagery, such as Sentinel-1 imagery, has been known to exhibit rainfall signatures over the sea surface. However, the development of SAR-derived rainfall products remains a challenge. Here we propose a deep learning approach to extract rainfall information from SAR imagery. We demonstrate that a convolutional neural network, such as U-Net, trained on a colocated and preprocessed Sentinel-1/NEXRAD dataset clearly outperforms state-of-the-art filtering schemes. Our results indicate high performance in segmenting precipitation regimes, delineated by thresholds at 1, 3, and 10 mm/h. Compared to current methods that rely on Koch filters to produce binary rainfall maps, these multi-threshold learning-based models can provide rainfall estimates at higher wind speeds and may therefore be of great interest for data assimilation in weather forecasting or for improving the qualification of SAR-derived wind field data. Comment: 25 pages, 10 figures
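    The three thresholds above partition rain rates into four precipitation classes. A minimal sketch of that labelling step (function names are illustrative, not taken from the authors' code):

```python
# Sketch: map rain-rate values (mm/h) to the four precipitation classes
# delimited by the 1, 3 and 10 mm/h thresholds described in the abstract.
# Names are illustrative, not from the authors' implementation.
import bisect

THRESHOLDS_MM_H = [1.0, 3.0, 10.0]

def rain_rate_class(rate_mm_h):
    """Return 0..3: 0 = <1 mm/h, 1 = 1-3 mm/h, 2 = 3-10 mm/h, 3 = >=10 mm/h."""
    return bisect.bisect_right(THRESHOLDS_MM_H, rate_mm_h)

def segment_map(rates):
    """Turn a 2-D grid of rain rates into a class-label map."""
    return [[rain_rate_class(r) for r in row] for row in rates]
```

    A network trained against such labels predicts one of the four regimes per pixel rather than a single binary rain/no-rain flag.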

    Application of machine learning techniques to weather forecasting

    Weather forecasting is still today a human-based activity. Although computer simulations play a major role in modelling the state and evolution of the atmosphere, there is a lack of methodologies to automate the interpretation of the information generated by these models. This doctoral thesis explores the use of machine learning methodologies to solve specific problems in meteorology, focusing in particular on methodologies to improve the accuracy of numerical weather prediction models. The work presented in this manuscript contains two different approaches using machine learning. In the first part, classical methodologies, such as multivariate non-parametric regression and binary trees, are explored to perform regression on meteorological data. Here we focus particularly on forecasting wind, where the circular nature of this variable opens interesting challenges for classic machine learning algorithms and techniques. The second part of this thesis explores the analysis of weather data as a generic structured prediction problem using deep neural networks. Neural networks, such as convolutional and recurrent networks, provide a method for capturing the spatial and temporal structure inherent in weather prediction models. This part explores the potential of deep convolutional neural networks in solving difficult problems in meteorology, such as modelling precipitation from basic numerical model fields. The research performed during the completion of this thesis demonstrates that collaboration between the machine learning and meteorology research communities is mutually beneficial and leads to advances in both disciplines. Weather forecasting models and observational data represent unique examples of the large (petabytes), structured and high-quality data sets that the machine learning community demands for developing the next generation of scalable algorithms.
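    The circularity challenge mentioned above is commonly handled by regressing the sine and cosine of the angle instead of the angle itself, then recovering the direction with atan2. A minimal sketch of this standard trick (illustrative only, not the thesis code):

```python
# Sketch: treating wind direction as a circular variable. Predicting the
# (sin, cos) components avoids the discontinuity at 0/360 degrees that
# breaks ordinary regression and naive averaging.
import math

def direction_to_components(deg):
    """Encode a direction in degrees as (sin, cos) targets for a regressor."""
    rad = math.radians(deg)
    return math.sin(rad), math.cos(rad)

def components_to_direction(s, c):
    """Decode predicted (sin, cos) back to a direction in [0, 360)."""
    return math.degrees(math.atan2(s, c)) % 360.0

def circular_mean(degrees):
    """Average directions correctly, e.g. mean of 350 and 10 is ~0, not 180."""
    s = sum(math.sin(math.radians(d)) for d in degrees)
    c = sum(math.cos(math.radians(d)) for d in degrees)
    return components_to_direction(s, c)
```

    The same encoding works for any periodic predictor or target, such as hour of day or day of year.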

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The area proposed for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    3-D Cloud Morphology and Evolution Derived from Hemispheric Stereo Cameras

    Clouds play a key role in the Earth-atmosphere system as they reflect incoming solar radiation back to space, while absorbing and emitting longwave radiation. Cumulus clouds pose a significant challenge for observation and modeling due to their relatively small size, which ranges from several hundred up to a few thousand meters, their often complex 3-D shapes and their highly dynamic life-cycle. Common instruments employed to study clouds include cloud radars, lidar-ceilometers and (microwave) radiometers, as well as satellite and airborne observations (in-situ and remote), all of which lack either sufficient sensitivity or the spatial or temporal resolution needed for comprehensive observation. This thesis investigates the feasibility of a ground-based network of hemispheric stereo cameras for retrieving detailed 3-D cloud geometries, which are needed for the validation of simulated cloud fields and for parametrization in numerical models. Such camera systems, which offer a hemispheric field of view and a temporal resolution in the range of seconds or less, have the potential to fill the remaining gap in cloud observations to a considerable degree and allow critical information to be derived about the size, morphology, spatial distribution and life-cycle of individual clouds and the local cloud field. The technical basis for the 3-D cloud morphology retrieval is stereo reconstruction: a cloud is synchronously recorded by a pair of cameras separated by a few hundred meters, so that mutually visible areas of the cloud can be reconstructed via triangulation. The location and orientation of each camera system were obtained from a satellite-navigation system, stars detected in night-sky images and mutually visible cloud features in the images. The image point correspondences required for 3-D triangulation were provided primarily by a dense stereo matching algorithm that reconstructs an object with a high degree of spatial completeness, which can improve subsequent analysis.
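    The per-point triangulation step can be sketched in a few lines of vector algebra: each camera defines a ray from its position along the viewing direction of the matched feature, and the 3-D point is taken as the midpoint of the shortest segment between the two (generally skew) rays. This is a generic textbook formulation, not the thesis implementation:

```python
# Sketch: midpoint-of-closest-approach triangulation for two viewing rays.
# Setup and names are illustrative; positions in metres, directions need
# not be unit vectors.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach of rays p1 + t*d1 and p2 + s*d2."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # ~0 for (near-)parallel rays
    t = (b * e - c * d) / denom    # parameter on ray 1
    s = (a * e - b * d) / denom    # parameter on ray 2
    q1 = add(p1, scale(d1, t))     # closest point on ray 1
    q2 = add(p2, scale(d2, s))     # closest point on ray 2
    return scale(add(q1, q2), 0.5)
```

    In a real pipeline the direction vectors come from the fisheye camera model and the calibrated orientation, and near-parallel rays (tiny denominators) are rejected.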
    The experimental setup in the vicinity of the Jülich Observatory for Cloud Evolution (JOYCE) included a pair of hemispheric sky cameras; it was later extended by a second pair, several kilometers away, to reconstruct clouds from different view perspectives. A comparison of the cloud base height (CBH) at zenith obtained from the stereo cameras and from a lidar-ceilometer showed a typical bias mostly below 2% of the lidar-derived CBH, with a few occasions between 3 and 5%. Typical standard deviations of the differences ranged from 50 m (1.5% of CBH) for altocumulus clouds to between 123 m (7%) and 165 m (10%) for cumulus and stratocumulus clouds. A comparison of the estimated 3-D cumulus boundary at near-zenith with the sensed 2-D reflectivity profiles from a 35-GHz cloud radar revealed typical differences between 35 and 81 m. For clouds at larger distances (> 2 km) both signals can deviate significantly, which can in part be explained by a lower reconstruction accuracy for the low-contrast areas of a cloud base, but also by the insufficient sensitivity of the cloud radar when the cloud condensate is dominated by very small droplets or diluted with environmental air. For sequences of stereo images, the 3-D cloud reconstructions from the stereo analysis can be combined with the motion and tracking information from an optical flow routine in order to derive 3-D motion and deformation vectors of clouds. This allowed atmospheric motion to be estimated for cloud layers with an accuracy of 1 m/s in velocity and 7° to 10° in direction. The fine-grained motion data were also used to detect and quantify cloud motion patterns of individual cumuli, such as deformations under vertical wind shear. The potential of the proposed method lies in an extended analysis of the life-cycle and morphology of cumulus clouds. This is illustrated in two case studies in which developing cumulus clouds were reconstructed from two different view perspectives.
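    The final step of the motion retrieval, turning a tracked feature's horizontal displacement between frames into a wind speed and a meteorological direction, can be sketched as follows (values and names are illustrative):

```python
# Sketch: convert the horizontal displacement of a tracked cloud feature
# between two frames into wind speed and direction. Meteorological
# convention: direction the wind blows FROM, clockwise from north.
import math

def wind_from_displacement(p0, p1, dt_s):
    """p0, p1: (east, north) positions in metres; dt_s: time step in seconds.
    Returns (speed in m/s, direction in degrees)."""
    ve = (p1[0] - p0[0]) / dt_s          # eastward velocity component
    vn = (p1[1] - p0[1]) / dt_s          # northward velocity component
    speed = math.hypot(ve, vn)
    direction = math.degrees(math.atan2(-ve, -vn)) % 360.0
    return speed, direction
```

    For example, a feature drifting eastward corresponds to a westerly wind (direction 270°).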
    In the first case study, a moving cloud subject to vertical wind shear was tracked and analyzed. The highly tilted cloud body was captured and its vertical profile was quantified to obtain measures such as vertically resolved diameter and tilting angle. The second case study presents a life-cycle analysis of a developing cumulus, including a time series of relevant geometric quantities, such as perimeter, vertically projected area, diameter and thickness, and further derived statistics such as cloud aspect ratio and perimeter scaling. The analysis confirms some aspects of cloud evolution, such as the pulse-like formation of cumulus, and indicates that cloud aspect ratio (size vs. height) can be described by a power-law functional relationship over an individual life-cycle.
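    A power-law relation of the kind reported above, y = a·x^k, is conventionally fitted by ordinary least squares in log-log space. A minimal sketch with synthetic data (not the thesis code):

```python
# Sketch: fit y = a * x**k by linear regression on (log x, log y),
# since log y = log a + k * log x. Pure-Python ordinary least squares.
import math

def fit_power_law(xs, ys):
    """Return (a, k) for the best-fit y = a * x**k in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    k = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - k * mx)
    return a, k
```

    On exact power-law data, e.g. y = 2·x^0.5, the fit recovers a = 2 and k = 0.5.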

    Realistic texture in simulated thermal infrared imagery

    Creating a visually realistic yet radiometrically accurate simulation of thermal infrared (TIR) imagery is a challenge that has plagued members of industry and academia alike. The goal of imagery simulation is to provide a practical alternative to the often staggering effort required to collect actual data. Previous attempts at simulating TIR imagery have suffered from a lack of texture: the simulated scenes generally failed to reproduce the natural variability seen in actual TIR images. Realistic synthetic TIR imagery requires modeling sources of variability including surface effects, such as solar insolation and convective heat exchange, as well as sub-surface effects, such as density and water content. This research effort utilized the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, developed at the Rochester Institute of Technology, to investigate how these additional sources of variability could be modeled to correctly and accurately produce simulated TIR imagery. Actual thermal data were collected, analyzed, and exploited to determine the underlying thermodynamic phenomena and ascertain how these phenomena are best modeled. The underlying task was to determine how to apply texture in the thermal region to attain radiometrically correct, visually appealing simulated imagery. Three natural desert scenes were used to test the methodologies developed for estimating per-pixel thermal parameters, which could then be used for TIR image simulation by DIRSIG. Additional metrics were devised and applied to the synthetic images to further quantify the success of this research. The resulting imagery demonstrated that these new methodologies for modeling TIR phenomena and the utilization of an improved DIRSIG tool improved the root mean-squared error (RMSE) of our synthetic TIR imagery by up to 88%.
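    The RMSE figure quoted above is the standard per-pixel error between a simulated and a collected image. A minimal sketch of the metric (an illustrative helper, not DIRSIG code):

```python
# Sketch: per-pixel root-mean-squared error between two equally sized
# 2-D images, represented here as plain lists of rows.
import math

def rmse(simulated, actual):
    """RMSE over all pixels of two images of identical shape."""
    sq_sum, n = 0.0, 0
    for row_s, row_a in zip(simulated, actual):
        for s, a in zip(row_s, row_a):
            sq_sum += (s - a) ** 2
            n += 1
    return math.sqrt(sq_sum / n)
```

    An "improvement of up to 88%" then means the RMSE of the new synthetic imagery dropped to as little as 12% of its previous value.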
