
    Detection of Hail Storms in Radar Imagery Using Deep Learning

    In 2016, hail was responsible for $3.5 billion in damage to property and $23 million in damage to crops, making it the second costliest weather phenomenon in the United States. In an effort to improve hail-prediction techniques and reduce the societal impacts associated with hail storms, we propose a deep learning technique that leverages radar imagery for automatic detection of hail storms. The technique was applied to radar imagery from 2011 to 2016 for the contiguous United States and achieved a precision of 0.848. Hail storms are primarily detected through the visual interpretation of radar imagery (Mroz et al., 2017). With radars providing data every two minutes, the detection of hail storms has become a big data task. As a result, scientists have turned to neural networks that employ computer vision to identify hail-bearing storms (Marzban et al., 2001). In this study, we propose a deep Convolutional Neural Network (ConvNet) that learns the spatial features and patterns of radar echoes for detecting hail storms.
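    A minimal sketch of the kind of ConvNet classifier described above, written here in PyTorch. The input size (64x64 single-channel reflectivity patches), layer widths, and binary hail/no-hail framing are illustrative assumptions; the abstract does not specify the actual architecture.

```python
# Sketch of a ConvNet hail/no-hail classifier for radar reflectivity patches.
# Assumptions (not from the abstract): single-channel 64x64 patches, binary labels.
import torch
import torch.nn as nn

class HailConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local echo patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                            # hail / no-hail logit
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: one batch of 8 synthetic reflectivity patches.
patches = torch.randn(8, 1, 64, 64)
logits = HailConvNet()(patches)
probs = torch.sigmoid(logits)  # probability that each patch contains hail
```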

    Machine Learning-Based Atmospheric Phenomena Detection Platform

    As the number of Earth-pointing satellites has increased over the last several decades, the data volume retrieved from instruments onboard these satellites has also increased. This trend is expected to continue as more data-intensive missions and small-satellite constellations are launched. Currently, feature detection - namely, the detection of atmospheric phenomena - in these datasets is performed manually and thus does not scale with the growing data archives. Recent advancements in computational efficiency allow the Earth science community to leverage machine learning to identify interesting atmospheric phenomena. Given the wide range of distinctive features across atmospheric phenomena, a specialized machine learning model is required to detect each phenomenon accurately. The Phenomena Portal, developed at NASA IMPACT, is designed to visualize the output of these machine learning models. In addition, detected events for each atmospheric phenomenon are stored in a database that makes it easier to subset larger spatiotemporal datasets. The user interface also incorporates additional features to enhance the user experience, including spatiotemporal analysis, multiple base-layer images, and a slider to filter out events with lower probabilities of positive detection. Each detection supports user feedback on whether it is a true or false positive; this feedback can be stored and used to improve the performance of the machine learning models.
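    As a rough illustration of how detected events might be stored and then filtered by the probability slider, the sketch below uses a simple Python record and a threshold filter. The field names, phenomena, and values are hypothetical and are not the Phenomena Portal's actual schema.

```python
# Hypothetical detection record plus a probability-threshold filter that
# mirrors the portal's slider. Not the Phenomena Portal's real data model.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class DetectedEvent:
    phenomenon: str                          # e.g. "smoke", "dust"
    timestamp: datetime                      # observation time
    bbox: Tuple[float, float, float, float]  # (min_lon, min_lat, max_lon, max_lat)
    probability: float                       # model confidence of a positive detection
    user_label: Optional[bool] = None        # feedback: True/False once reviewed

def filter_events(events: List[DetectedEvent],
                  min_probability: float) -> List[DetectedEvent]:
    """Drop detections below the slider threshold."""
    return [e for e in events if e.probability >= min_probability]

events = [
    DetectedEvent("smoke", datetime(2020, 8, 1, 18, 0), (-124.0, 38.0, -120.0, 42.0), 0.92),
    DetectedEvent("smoke", datetime(2020, 8, 1, 18, 0), (-110.0, 30.0, -108.0, 32.0), 0.41),
]
confident = filter_events(events, min_probability=0.5)  # keeps only the 0.92 event
```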

    Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks

    Estimating tropical cyclone intensity from satellite imagery alone is a challenging problem. The Dvorak technique has been applied successfully for more than 30 years and, with some modifications and improvements, is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived from the original Dvorak technique. However, these techniques suffer from subjective bias, as evident from the estimates made on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: the Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post-analysis when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery are strongly related to tropical cyclone intensity. This study aims to use deep learning, the current state of the art in pattern and image recognition, to address the need for automated and objective tropical cyclone intensity estimation. Deep learning is a multi-layer neural network consisting of several layers of simple computational units; it learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on the convolutional neural network (CNN), a deep learning algorithm, to develop an objective tropical cyclone intensity estimation. A CNN is a supervised learning algorithm requiring a large amount of training data. Since archives of intensity data and tropical-cyclone-centric satellite images are openly available, the training data are easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.
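    The sketch below shows one way such a CNN could be framed: a regression from a storm-centered IR image to an intensity in knots, with training pairs formed by matching images to archived intensities. It is written in PyTorch; the input size, layer widths, and loss function are assumptions for illustration, not the study's configuration.

```python
# Sketch of a CNN regressing tropical cyclone intensity (kt) from a
# storm-centered IR image. Architecture details are illustrative only.
import torch
import torch.nn as nn

class IntensityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted maximum sustained wind (kt)
        )

    def forward(self, ir_image):
        return self.net(ir_image)

# Training pairs: storm-centered IR images matched with archived intensities (kt).
model = IntensityCNN()
images = torch.randn(4, 1, 128, 128)                  # 4 synthetic IR patches
intensities = torch.tensor([[33.], [45.], [65.], [90.]])
loss = nn.MSELoss()(model(images), intensities)
loss.backward()                                       # one gradient step of the regression
```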

    Collaborative WorkBench for Researchers - Work Smarter, Not Harder

    It is important to define some commonly used terminology related to collaboration to facilitate clarity in later discussions. We define provisioning as infrastructure capabilities such as computation, storage, data, and tools provided by some agency or similarly trusted institution. Sharing is defined as the process of exchanging data, programs, and knowledge among individuals (often strangers) and groups. Collaboration is a specialized case of sharing: sharing with others (usually known colleagues) in pursuit of a common scientific goal or objective. Collaboration entails more dynamic and frequent interactions and can occur at different speeds. Synchronous collaboration occurs in real time - for example, editing a shared document on the fly, chatting, or video conferencing - and typically requires a peer-to-peer connection. Asynchronous collaboration is episodic in nature and based on a push-pull model; examples include email exchanges, blogging, and repositories. The purpose of a workbench is to provide a customizable framework for different applications. Since the workbench is common to all the customized tools, it promotes building modular functionality that can be used and reused by multiple tools. The objective of our Collaborative Workbench (CWB) is thus to create such an open and extensible framework for the Earth Science community via a set of plug-ins. Our CWB is based on the Eclipse [2] Integrated Development Environment (IDE), which is designed as a small kernel containing a plug-in loader for hundreds of plug-ins. The kernel itself is an implementation of a known specification that provides an environment for the plug-ins to execute. This design enables modularity, where discrete chunks of functionality can be reused to build new applications. The minimal set of plug-ins necessary to create a client application is called the Eclipse Rich Client Platform (RCP) [3]. The Eclipse RCP also supports thousands of community-contributed plug-ins, making it a popular development platform for many diverse applications, including the Science Activity Planner developed at JPL for the Mars rovers [4] and the scientific experiment tool Gumtree [5]. By leveraging the Eclipse RCP to provide an open, extensible framework, the CWB supports customizations via plug-ins to build rich user applications specific to Earth Science. More importantly, CWB plug-ins can be used by existing science tools built on Eclipse, such as IDL or PyDev, to provide seamless collaboration functionality.
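    To illustrate the kernel-plus-plug-in-loader pattern the abstract describes, the toy sketch below registers and starts plug-ins through a small core object. It is a conceptual Python sketch only; the actual CWB builds on the Eclipse RCP plug-in mechanism in Java, and the plug-in names here are hypothetical.

```python
# Toy illustration of a kernel whose only job is to load and run plug-ins,
# so new functionality is added by registering plug-ins, not by changing the core.
# This is NOT the Eclipse RCP implementation the CWB uses.
from typing import Callable, Dict

class Kernel:
    """Minimal 'kernel': it only knows how to register and start plug-ins."""
    def __init__(self):
        self._plugins: Dict[str, Callable] = {}

    def register(self, name: str):
        """Decorator that adds a plug-in entry point under the given name."""
        def decorator(run: Callable):
            self._plugins[name] = run
            return run
        return decorator

    def start(self):
        for name, run in self._plugins.items():
            print(f"starting plug-in: {name}")
            run(self)

kernel = Kernel()

@kernel.register("asynchronous-sharing")   # hypothetical collaboration plug-in
def sharing_plugin(k: Kernel):
    print("  push-pull sharing service ready")

@kernel.register("synchronous-editing")    # hypothetical real-time plug-in
def sync_editing_plugin(k: Kernel):
    print("  peer-to-peer editing session ready")

kernel.start()
```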