Deep Learning for Rapid Landslide Detection using Synthetic Aperture Radar (SAR) Datacubes
With climate change predicted to increase the likelihood of landslide events,
there is a growing need for rapid landslide detection technologies that help
inform emergency responses. Synthetic Aperture Radar (SAR) is a remote sensing
technique that can provide measurements of affected areas independent of
weather or lighting conditions. The use of SAR, however, is hindered by the
domain expertise needed for its pre-processing steps, and its interpretation
also requires expert knowledge. We provide simplified, pre-processed,
machine-learning ready SAR datacubes for four globally located landslide events
obtained from several Sentinel-1 satellite passes before and after a landslide
triggering event together with segmentation maps of the landslides. From this
dataset, using the Hokkaido, Japan datacube, we study the feasibility of
SAR-based landslide detection with supervised deep learning (DL). Our results
demonstrate that DL models can be used to detect landslides from SAR data,
achieving an Area under the Precision-Recall curve exceeding 0.7. We find that
additional satellite visits enhance detection performance, but that early
detection is possible when SAR data is combined with terrain information from a
digital elevation model. This can be especially useful for time-critical
emergency interventions. Code is made publicly available at
https://github.com/iprapas/landslide-sar-unet.
Comment: Accepted in the NeurIPS 2022 workshop on Tackling Climate Change with
Machine Learning. Authors Vanessa Boehm, Wei Ji Leong, Ragini Bal Mahesh, and
Ioannis Prapas contributed equally as researchers for the Frontier
Development Lab (FDL) 2022.
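
As a rough illustration of the setup the abstract describes (not the authors'
implementation, which lives in the linked repository), the sketch below stacks
pre- and post-event SAR channels with DEM-derived terrain layers, feeds them to
a small encoder-decoder segmentation network in PyTorch, and scores the
per-pixel output with the area under the precision-recall curve. The channel
count, tile size, and architecture are illustrative assumptions.

import torch
import torch.nn as nn
from sklearn.metrics import average_precision_score


class TinySegNet(nn.Module):
    """Small encoder-decoder producing per-pixel landslide logits.

    Input channels stack SAR backscatter from passes before/after the
    triggering event plus DEM-derived terrain layers (counts are assumed).
    """

    def __init__(self, in_channels=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # logits, shape (B, 1, H, W)


if __name__ == "__main__":
    # Dummy batch: 4 tiles, 10 channels (e.g. pre/post SAR + DEM), 128x128 px.
    x = torch.randn(4, 10, 128, 128)
    y = torch.randint(0, 2, (4, 1, 128, 128)).float()    # landslide mask

    model = TinySegNet()
    logits = model(x)
    loss = nn.BCEWithLogitsLoss()(logits, y)              # training objective

    # Area under the precision-recall curve, the metric quoted in the abstract.
    probs = torch.sigmoid(logits).detach().flatten().numpy()
    auprc = average_precision_score(y.flatten().numpy(), probs)
    print(f"loss={loss.item():.3f}  AUPRC={auprc:.3f}")

In practice the random tensors would be replaced by tiles sliced from the
published datacubes and their segmentation maps; everything else is a sketch.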
Technology Readiness Levels for Machine Learning Systems
The development and deployment of machine learning (ML) systems can be
executed easily with modern tools, but the process is typically rushed and
means-to-an-end. The lack of diligence can lead to technical debt, scope creep
and misaligned objectives, model misuse and failures, and expensive
consequences. Engineering systems, on the other hand, follow well-defined
processes and testing standards to streamline development for high-quality,
reliable results. An extreme case is spacecraft systems, where mission-critical
measures and robustness are ingrained in the development process. Drawing on
experience in both spacecraft engineering and ML (from research through product
across domain areas), we have developed a proven systems engineering approach
for machine learning development and deployment. Our "Machine Learning
Technology Readiness Levels" (MLTRL) framework defines a principled process to
ensure robust, reliable, and responsible systems while being streamlined for ML
workflows, including key distinctions from traditional software engineering.
Moreover, MLTRL defines a lingua franca for people across teams and
organizations to work collaboratively on artificial intelligence and machine
learning technologies. Here we describe the framework and elucidate it with
several real-world use cases of developing ML methods from basic research
through productization and deployment, in areas such as medical diagnostics,
consumer computer vision, satellite imagery, and particle physics.
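
The framework is a process rather than an algorithm, but a minimal sketch may
help convey the core idea of graded readiness levels with gated reviews acting
as a shared vocabulary. The 0-9 range mirrors the published framework; every
field, check, and name below is a hypothetical illustration, not MLTRL's actual
requirements.

from dataclasses import dataclass, field

MIN_LEVEL, MAX_LEVEL = 0, 9  # graded readiness levels, as in the paper


@dataclass
class MLProject:
    """Tracks a project's readiness level and the reviews that gated it."""
    name: str
    level: int = MIN_LEVEL
    review_log: list = field(default_factory=list)

    def request_promotion(self, evidence: dict) -> bool:
        """Advance one level only if the (hypothetical) gating review passes."""
        required = {"owner_signoff", "test_report"}   # illustrative gate only
        missing = required - evidence.keys()
        if missing or self.level >= MAX_LEVEL:
            self.review_log.append((self.level, "rejected", sorted(missing)))
            return False
        self.level += 1
        self.review_log.append((self.level, "promoted", []))
        return True


if __name__ == "__main__":
    proj = MLProject("landslide-segmentation-prototype")
    proj.request_promotion({"owner_signoff": "...", "test_report": "..."})
    print(proj.name, "is at level", proj.level)        # -> level 1

The point of such a tracker is the shared vocabulary: a level number plus a
review log that people across teams can read the same way.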
Global geomagnetic perturbation forecasting using Deep Learning
Geomagnetically Induced Currents (GICs) arise from spatio-temporal changes to
Earth's magnetic field caused by the interaction of the solar wind with
Earth's magnetosphere, and can inflict catastrophic damage on our
technologically dependent society. Hence, computational models that forecast
GICs globally with a long forecast horizon, high spatial resolution, and high
temporal cadence are increasingly important for enabling prompt mitigation.
Since GIC data are proprietary, the time variability of the horizontal
component of the magnetic field perturbation (dB/dt) is used as a proxy for
GICs. In this
work, we develop a fast, global dB/dt forecasting model, which forecasts 30
minutes into the future using only solar wind measurements as input. The model
summarizes 2 hours of solar wind measurements using a Gated Recurrent Unit, and
forecasts coefficients that are folded with a spherical harmonic basis to
produce global forecasts. When deployed, our model produces results in
under a second, and generates global forecasts for horizontal magnetic
perturbation components at 1-minute cadence. We evaluate our model against
models in the literature for two specific storms, on 5 August 2011 and 17 March
2015, alongside a self-consistent set of benchmark models. Our model matches or
outperforms state-of-the-practice high-time-cadence local and low-time-cadence
global models, and performs comparably to or better than the benchmark models.
Such
fast inference at high temporal cadence and arbitrary spatial resolution may
ultimately enable accurate forewarning of dB/dt for any place on Earth,
allowing precautionary measures to be taken in an informed manner.
Comment: 23 pages, 8 figures, 5 tables; accepted for publication in AGU:
Space Weather
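
A minimal sketch (not the authors' code) of the architecture described above:
a GRU summarizes a 2-hour window of solar-wind measurements at 1-minute
cadence, a linear head predicts spherical-harmonic coefficients, and those
coefficients are folded with a spherical-harmonic basis evaluated on a
latitude/longitude grid to yield a global dB/dt forecast. The feature count,
hidden size, harmonic degree, and grid are assumptions.

import numpy as np
import torch
import torch.nn as nn
from scipy.special import sph_harm


def real_sph_harm_basis(lat_deg, lon_deg, l_max):
    """Real spherical-harmonic basis at the given points.

    Returns an array of shape (n_points, (l_max + 1) ** 2).
    """
    theta = np.deg2rad(lon_deg)            # azimuthal angle
    phi = np.deg2rad(90.0 - lat_deg)       # colatitude
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.stack(cols, axis=-1)


class GlobalDbDtForecaster(nn.Module):
    """GRU encoder -> spherical-harmonic coefficients -> global dB/dt map."""

    def __init__(self, n_features=5, hidden=64, l_max=5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, (l_max + 1) ** 2)

    def forward(self, solar_wind, basis):
        # solar_wind: (batch, 120, n_features) -- 2 h at 1-minute cadence
        # basis:      (n_grid_points, (l_max + 1) ** 2)
        _, h = self.gru(solar_wind)          # summarize the input window
        coeffs = self.head(h[-1])            # (batch, n_coeffs)
        return coeffs @ basis.T              # (batch, n_grid_points)


if __name__ == "__main__":
    lat = np.linspace(-90, 90, 37)
    lon = np.linspace(-180, 175, 72)
    lat_grid, lon_grid = np.meshgrid(lat, lon, indexing="ij")
    basis = torch.tensor(
        real_sph_harm_basis(lat_grid.ravel(), lon_grid.ravel(), l_max=5),
        dtype=torch.float32,
    )
    model = GlobalDbDtForecaster()
    window = torch.randn(1, 120, 5)          # dummy solar-wind window
    forecast = model(window, basis)          # dB/dt proxy on the global grid
    print(forecast.shape)                    # torch.Size([1, 2664])

Because the basis is evaluated outside the network, the same trained
coefficients can be folded onto any grid, which is what makes arbitrary spatial
resolution cheap at inference time.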