17 research outputs found

    An Advanced Deep Learning Framework For Short-Term Precipitation Forecasting From Satellite Information

    Short-term Quantitative Precipitation Forecasting is important for aviation and navigation safety, flood forecasting and early warning, and natural hazard management. Obtaining accurate and timely precipitation forecasts over short ranges (0-6 hours) is a challenging task, and improving short-term rainfall forecasts remains an open question and a major objective in hydrometeorology. This dissertation introduces a machine learning, specifically deep learning, framework to accurately forecast both high- and low-intensity precipitation events. In detail, it builds an effective precipitation forecasting framework by (1) developing an infrared cloud-top brightness temperature forecasting model, and (2) proposing an effective infrared-to-rainfall-intensity mapping model using satellite and radar data. The proposed framework is effective because it (1) solves a physical problem using continuous infrared data whose evolution is dominated by the continuity law of heat transfer, (2) provides forecasts for a wide range of rainfall intensities, and (3) has the potential to become a quasi-global-scale product. As an initial attempt, a forecasting model was developed by extrapolating infrared imagery with an advanced Deep Neural Network (DNN) and feeding the forecasted infrared into an effective rainfall retrieval algorithm to obtain short-term precipitation forecasts. For these two tasks, we used a Long Short-Term Memory (LSTM) network and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, respectively.
The precipitation forecasts obtained from the LSTM combined with PERSIANN were compared with a Recurrent Neural Network (RNN), a persistence method, and Farneback optical flow, each combined with the PERSIANN algorithm, as well as with numerical model results from the first version of Rapid Refresh (RAPv1.0), over three regions in the United States: Oregon, Oklahoma, and Florida. Furthermore, to improve the forecasting skill of the proposed method, a new infrared forecasting method was developed by improving the LSTM model (through efficient use of neighborhood pixel information, resolving the loss-of-resolution problem, and introducing objectives more effective than maximum likelihood estimation). The new infrared forecasting algorithm is a semi-conditional Generative Adversarial Network (GAN) consisting of convolutional, recurrent (LSTM), and convolutional-recurrent (ConvLSTM) layers, designed to forecast temporally and spatially coherent infrared images. Its results are compared with the non-adversarial version of the proposed model and demonstrate superior performance. In addition to this forecasting improvement, a new precipitation estimation algorithm is introduced to replace PERSIANN, increasing the infrared-to-rainfall translation accuracy and enabling the framework to become an end-to-end model. The new precipitation estimation model is a conditional GAN, termed PERSIANN-GAN, which translates 0.25°×0.25° infrared data into same-resolution precipitation estimates by defining a more flexible objective function. The PERSIANN-GAN results are compared with two Convolutional Neural Network (CNN) baseline models, one without the adversarial part but with bypass connections and the other without either, as well as with the well-known operational PERSIANN product.
The comparison results demonstrate the superior visual and statistical performance of PERSIANN-GAN.
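The recurrent backbone behind the infrared extrapolation step can be illustrated with a minimal sketch. The code below is not the dissertation's model; it is a single NumPy LSTM cell step with hypothetical weights `W`, `U`, `b`, showing the gating mechanism that lets the network carry cloud-top temperature history forward when extrapolating an infrared sequence.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x : input vector (e.g. a flattened infrared patch), shape (D,)
    h : previous hidden state, shape (H,)
    c : previous cell state, shape (H,)
    W : input weights, shape (4H, D); U : recurrent weights, shape (4H, H)
    b : bias, shape (4H,)
    """
    H = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:])        # candidate cell update
    c_new = f * c + i * g         # blend old memory with new evidence
    h_new = o * np.tanh(c_new)    # expose gated memory as output
    return h_new, c_new
```

Stacking such steps over a window of past infrared frames, with a readout layer mapping `h` to the next frame, is the general shape of sequence extrapolation; the dissertation's actual model adds convolutional and adversarial components on top of this.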

    Feature Selection of PERSIANN, based on Multiple Regression Analysis with Principal Component Analysis & Using Three-Cornered Hat method to evaluate Precipitation products

    My thesis addresses two aspects of satellite precipitation estimation. The first chapter discusses feature selection for the PERSIANN algorithm. In the second chapter, the Generalized Three-Cornered Hat method is used for intercomparison of the PERSIANN-CDR, TRMM, and CRU datasets over Iran; this part presents a portion of the author's collaboration with Professor Katiraie of Azad University, Tehran (corresponding author: Katiraie-Boroujerdy). Chapter three presents the summary and conclusions. The PERSIANN model is an Artificial Neural Network (ANN)-based model for precipitation estimation from satellite information, and the datasets it generates have gained popularity in both weather and climate studies. Research on the PERSIANN system is ongoing and focuses mainly on improving the accuracy required for various applications. One such improvement is input feature selection, which can help the neural network better learn precipitation patterns by adding more relevant information. Multiple Regression Analysis (MRA), taking advantage of Principal Component Analysis (PCA) to resolve collinearity, is employed as the framework for ranking the features or inputs most useful for the learning process. To evaluate how well the algorithm performs, a reliable in-situ observation set is required against which to test and compare the satellite-based observations. Often, however, adequate ground-based reference observations are unavailable. This motivated a reliable method for comparing datasets with respect to precipitation characteristics without a reference: the use of the Generalized Three-Cornered Hat (GTCH) method for assessing the reliability of each dataset is presented in chapter two. Using this method has enabled us to compare three or more datasets at a common spatial resolution.
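The core idea of the three-cornered hat approach can be sketched for the classical three-dataset case. This is a simplified illustration, not the generalized (GTCH) formulation used in the thesis: given three collocated series whose errors are assumed mutually uncorrelated, each series' error variance follows from the variances of the pairwise differences.

```python
import numpy as np

def three_cornered_hat(x1, x2, x3):
    """Estimate each series' error variance without a reference.

    Assumes the three error terms are mutually uncorrelated, so that
    Var(x_i - x_j) = s_i + s_j, a linear system solvable for s_1, s_2, s_3.
    """
    v12 = np.var(x1 - x2)
    v13 = np.var(x1 - x3)
    v23 = np.var(x2 - x3)
    s1 = 0.5 * (v12 + v13 - v23)
    s2 = 0.5 * (v12 + v23 - v13)
    s3 = 0.5 * (v13 + v23 - v12)
    return s1, s2, s3
```

When the uncorrelated-error assumption is violated, the estimates can come out slightly negative; the generalized method extends this idea to an arbitrary number of datasets.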

    Robust and Explainable Semi-Supervised Deep Learning Model for Anomaly Detection in Aviation

    Identifying safety anomalies and vulnerabilities in the aviation domain is an expensive and time-consuming task. Currently, it is accomplished via manual forensic reviews by subject matter experts (SMEs). However, with the increase in the amount of data produced in airspace operations, relying on such manual reviews is impractical. Automated approaches, such as exceedance detection, have been deployed to flag safety events that surpass a pre-defined safety threshold. These approaches, however, rely entirely on domain knowledge and the outcome of the SMEs' reviews, and can only identify threshold-crossing safety vulnerabilities. Unsupervised and supervised machine learning approaches have been developed to automate anomaly detection and vulnerability discovery in aviation data, with the availability of labeled data being their differentiator. Purely unsupervised approaches can be prone to high false alarm rates, while a completely supervised approach may not reach optimal performance or generalize well when the labeled data set is small. This is one of the fundamental challenges in the aviation domain, where obtaining safety labels requires significant time and effort from SMEs and cannot be crowd-sourced to citizen scientists. As a result, the amount of properly labeled and reviewed data is often very small in aviation safety, and supervised approaches fall short of optimal performance with such data. In this paper, we develop a Robust and Explainable Semi-supervised deep learning model for Anomaly Detection (RESAD) in aviation data. This approach takes advantage of both the majority unlabeled and the minority labeled data sets. We develop a case study of multi-class anomaly detection in the approach-to-landing phase of commercial aircraft to benchmark RESAD's performance against baseline methods.
Furthermore, we develop an optimization scheme in which the model is optimized not only for maximum accuracy but also for a desired level of interpretability and robustness to adversarial perturbations.
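The way semi-supervised training exploits both data pools can be made concrete with a minimal sketch. This is not RESAD itself; it is a hypothetical two-term objective of the generic kind many semi-supervised detectors use: supervised cross-entropy on the small labeled batch plus a reconstruction penalty on the large unlabeled batch.

```python
import numpy as np

def semi_supervised_loss(logits_l, y_l, x_u, x_u_recon, alpha=0.5):
    """Supervised cross-entropy + alpha * unsupervised reconstruction MSE.

    logits_l       : (n_l, k) class scores for the labeled batch
    y_l            : (n_l,) integer labels from SME review
    x_u, x_u_recon : (n_u, d) unlabeled inputs and their reconstructions
    alpha          : weight trading off the two terms
    """
    # numerically stable log-softmax for the cross-entropy term
    z = logits_l - logits_l.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(y_l)), y_l].mean()
    # unsupervised term: how well the model reconstructs unlabeled inputs
    mse = ((x_u - x_u_recon) ** 2).mean()
    return ce + alpha * mse
```

Minimizing such a combined objective lets the scarce labels shape the decision boundary while the abundant unlabeled flights regularize the learned representation.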

    The Year the West Was Burning: How the 2020 Wildfire Season Got So Extreme

    More than 4 million acres of California went up in flames in 2020 – about 4% of the state’s land area and more than double its previous wildfire record. Five of the state’s six largest fires on record were burning this year. In Colorado, the Pine Gulch fire broke the record for that state’s largest wildfire, only to be surpassed by two larger blazes, the Cameron Peak and East Troublesome fires. Oregon saw one of the most destructive fire seasons in its recorded history, with more than 4,000 homes destroyed. What caused the 2020 fire season to become so extreme?

    Wildfires Force Thousands to Evacuate Near Los Angeles: Here’s How the 2020 Western Fire Season Got So Extreme

    Two wildfires erupted on the outskirts of cities near Los Angeles, forcing more than 100,000 people to evacuate their homes Monday as powerful Santa Ana winds swept the flames through dry grasses and brush. With strong winds and extremely low humidity, large parts of California were under red flag warnings. High fire risk days have been common this year as the 2020 wildfire season shatters records across the West. More than 4 million acres have burned in California – 4% of the state’s land area and more than double the previous annual record. Five of the state’s six largest historical fires happened in 2020. In Colorado, the Pine Gulch fire that started in June broke the record for size, only to be topped in October by the Cameron Peak and East Troublesome fires. Oregon saw one of the most destructive fire seasons in its recorded history. What caused the 2020 fire season to become so extreme?

    Learning Instrument Invariant Characteristics for Generating High-resolution Global Coral Reef Maps

    Coral reefs are one of the most biologically complex and diverse ecosystems within the shallow marine environment. Unfortunately, these underwater ecosystems are threatened by a number of anthropogenic challenges, including ocean acidification and warming, overfishing, and the continued increase of marine debris in the oceans. This makes a comprehensive assessment of the world's coastal environments, including a quantitative analysis of the health and extent of coral reefs and other associated marine species, a vital Earth Science measurement. However, limitations in observational and technological capabilities inhibit sustained global imaging of the marine environment. Harmonizing multimodal data sets acquired with different remote sensing instruments presents additional challenges, thereby limiting the availability of good-quality labeled data for analysis. In this work, we develop a deep learning model for extracting domain-invariant features from multimodal remote sensing imagery and creating high-resolution global maps of coral reefs by combining various sources of imagery and the limited hand-labeled data available for certain regions. This framework allows us to generate, for the first time, coral reef segmentation maps at 2-meter resolution, a significant improvement over the kilometer-scale state-of-the-art maps. Additionally, this framework doubles the accuracy and IoU metrics relative to baselines that do not account for domain invariance.
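One common way to encourage instrument-invariant features, shown here only as a hedged stand-in for the paper's actual objective, is to penalize the statistical discrepancy between feature batches extracted from two instruments. The sketch below uses a simple mean-and-variance matching penalty; the function name and the choice of moments are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def feature_discrepancy(f_a, f_b):
    """Penalty that is small when features from two instruments have
    matching first and second moments per dimension.

    f_a, f_b : (n, d) feature batches from instrument A and instrument B.
    Adding this term to the training loss pushes the encoder toward
    representations that do not reveal which instrument produced them.
    """
    mean_gap = (f_a.mean(axis=0) - f_b.mean(axis=0)) ** 2
    var_gap = (f_a.var(axis=0) - f_b.var(axis=0)) ** 2
    return float(mean_gap.sum() + var_gap.sum())
```

Adversarial alternatives (a domain classifier trained against the encoder) pursue the same goal with a learned, rather than fixed, measure of discrepancy.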

    A Deep Learning Image Segmentation Model for Agricultural Irrigation System Classification

    Effective water management requires a large-scale understanding of agricultural irrigation systems and how they shift in response to various stressors. Here, we leveraged advances in Machine Learning and the availability of very high resolution remote sensing imagery to help resolve this long-standing issue. To this end, we developed a deep learning model to classify irrigation systems at a regional scale using remote sensing imagery. After testing different model architectures, hyperparameters, class weights, and image sizes, we selected a U-Net architecture with a ResNet-34 backbone for this purpose. We applied transfer learning to increase training efficiency and model performance. We considered four irrigation systems as well as urban and background areas as land use/cover classes, and applied the model to 8,600 very high resolution (1 m) images, labeled with ground-truth observations of irrigation types, in a case study in Idaho, USA. Images were obtained from the US Department of Agriculture’s National Agriculture Imagery Program. Our model achieved state-of-the-art performance for segmentation of the different classes on the training data (85% to 94%), validation data (72% to 86%), and test data (70% to 86%), which attests to the efficacy of the model for segmenting images based on spatial features. Aside from leveraging deep learning and remote sensing to address the long-standing real-world problem of multiple-irrigation-type segmentation, this study develops and publicly shares labeled data, as well as a trained deep learning model, for irrigation type segmentation that can be applied or transferred to other regions globally. Furthermore, this study offers novel information about the impacts of transfer learning, imbalanced training data, and the efficacy of various model structures for multiple-irrigation-type segmentation.

    Spatiotemporal Variations of Precipitation over Iran Using the High-Resolution and Nearly Four Decades Satellite-Based PERSIANN-CDR Dataset

    Spatiotemporal precipitation trend analysis provides valuable information for water management decision-making. Satellite-based precipitation products with high spatial and temporal resolution and long records, as opposed to temporally and spatially sparse rain gauge networks, are a suitable alternative for analyzing precipitation trends over Iran. This study analyzes the trends in annual, seasonal, and monthly precipitation, along with the contribution of each season and month to annual precipitation, over Iran for the 1983–2018 period. For the analyses, the Mann–Kendall test is applied to the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR) estimates. The results indicate that significant decreases in monthly precipitation in February over the western (and in March over the western and central-eastern) regions of Iran significantly affect winter (spring) and total annual precipitation. Moreover, increases in precipitation during November in the south and south-east regions lead to a remarkable increase in fall-season precipitation. The analysis of each season's and month's contribution to annual precipitation in wet and dry years shows that dry years have critical impacts on decreasing monthly precipitation over particular regions. For instance, a remarkable decrease in precipitation is detectable during dry years over the eastern, northeastern, and southwestern regions of Iran during March, April, and December, respectively. The results of this study show that PERSIANN-CDR is a valuable source of information in low-density gauge network areas, capturing the spatiotemporal variation of precipitation.
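The Mann–Kendall test applied above has a compact form. The sketch below (omitting the tie correction used in practice with real precipitation records) computes the S statistic and its normal approximation Z for a single pixel's annual series:

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test without tie correction.

    Returns (S, Z): S counts concordant minus discordant pairs over all
    i < j; Z is the continuity-corrected normal score. |Z| > 1.96
    indicates a significant monotonic trend at the 5% level.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under no trend
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

Applying such a test independently at every PERSIANN-CDR grid cell, for annual, seasonal, and monthly aggregates, yields trend maps of the kind discussed above.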