15 research outputs found
Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks
Learning feature representations of geographical space is vital for any
machine learning model that integrates geolocated data, spanning application
domains such as remote sensing, ecology, or epidemiology. Recent work mostly
embeds coordinates using sine and cosine projections based on Double Fourier
Sphere (DFS) features -- these embeddings assume a rectangular data domain even
on global data, which can lead to artifacts, especially at the poles. At the
same time, relatively little attention has been paid to the exact design of the
neural network architectures these functional embeddings are combined with.
This work proposes a novel location encoder for globally distributed geographic
data that combines spherical harmonic basis functions, natively defined on
spherical surfaces, with sinusoidal representation networks (SirenNets) that
can be interpreted as a learned Double Fourier Sphere embedding. We
systematically evaluate the cross-product of positional embeddings and neural
network architectures across various classification and regression benchmarks
and synthetic evaluation datasets. In contrast to previous approaches that
require the combination of both positional encoding and neural networks to
learn meaningful representations, we show that both spherical harmonics and
sinusoidal representation networks are competitive on their own, but achieve
state-of-the-art performance across tasks when combined. We provide source
code at www.github.com/marccoru/locationencode.
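As an illustration of the abstract's two ingredients, here is a minimal NumPy sketch (not the authors' implementation): closed-form real spherical harmonics up to degree 2 serve as positional features natively defined on the sphere, and a small sine-activated (SIREN-style) MLP with random, untrained weights maps them to an embedding. All names and dimensions are hypothetical.

```python
import numpy as np

def spherical_harmonics(lon, lat):
    """Real spherical harmonic features up to degree 2 for lon/lat in degrees,
    using closed-form expressions in Cartesian coordinates on the unit sphere."""
    lam, phi = np.radians(lon), np.radians(lat)
    x = np.cos(phi) * np.cos(lam)
    y = np.cos(phi) * np.sin(lam)
    z = np.sin(phi)
    return np.stack([
        0.28209479 * np.ones_like(x),   # Y_0^0
        0.48860251 * y,                 # Y_1^{-1}
        0.48860251 * z,                 # Y_1^0
        0.48860251 * x,                 # Y_1^1
        1.09254843 * x * y,             # Y_2^{-2}
        1.09254843 * y * z,             # Y_2^{-1}
        0.31539157 * (3 * z**2 - 1),    # Y_2^0
        1.09254843 * x * z,             # Y_2^1
        0.54627422 * (x**2 - y**2),     # Y_2^2
    ], axis=-1)

class SirenMLP:
    """Tiny sine-activated MLP (SIREN-style), forward pass only, random weights."""
    def __init__(self, d_in, d_hidden, d_out, omega0=30.0, seed=0):
        rng = np.random.default_rng(seed)
        self.omega0 = omega0
        self.W1 = rng.uniform(-1 / d_in, 1 / d_in, (d_in, d_hidden))
        self.W2 = rng.uniform(-np.sqrt(6 / d_hidden) / omega0,
                              np.sqrt(6 / d_hidden) / omega0, (d_hidden, d_out))

    def __call__(self, feats):
        h = np.sin(self.omega0 * feats @ self.W1)  # sinusoidal activation
        return h @ self.W2

lonlat = np.array([[0.0, 0.0], [90.0, 45.0], [180.0, -60.0]])
feats = spherical_harmonics(lonlat[:, 0], lonlat[:, 1])
model = SirenMLP(d_in=9, d_hidden=32, d_out=4)
out = model(feats)
```

Because the harmonics are defined on the sphere itself, the features have no rectangular-domain artifacts at the poles; the sine activations then play the role of the learned Fourier-style expansion described in the abstract.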
Sequential Recurrent Encoders for Land Cover Mapping in the Brazilian Amazon using MODIS Imagery and Auxiliary Datasets
To test an existing sequential recurrent encoder model, based on convolutional variants of RNNs, on the task of LUC classification across the Brazilian Amazon, and to compare different arrangements of input features and their impact on classifier performance.
Model and Data Uncertainty for Satellite Time Series Forecasting with Deep Recurrent Models
Deep learning is often criticized as a black-box method that provides accurate predictions but limited explanation of the underlying processes and no indication of when not to trust those predictions. Equipping existing deep learning models with an (approximate) notion of uncertainty can help mitigate both issues, so its use should be known more broadly in the community. The Bayesian deep learning community has developed model-agnostic, easy-to-implement methodology to estimate both data and model uncertainty within deep learning models, but it is rarely applied in the remote sensing community. In this work, we adopt this methodology for deep recurrent satellite time series forecasting and test its assumptions on data and model uncertainty. We demonstrate its effectiveness on two applications, climate change and event change detection, and outline limitations.
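The model-agnostic recipe referred to above is commonly Monte Carlo dropout with a mean/log-variance output head: dropout stays active at prediction time, model (epistemic) uncertainty is the variance of the sampled means, and data (aleatoric) uncertainty is the average predicted variance. A minimal NumPy sketch with random, untrained weights (all names hypothetical, not the paper's code):

```python
import numpy as np

def mc_dropout_forecast(x, W1, W2, n_samples=100, p_drop=0.1, seed=0):
    """Monte Carlo dropout: T stochastic forward passes with dropout kept on.
    The head predicts [mean, log-variance], so the two uncertainties separate:
      model (epistemic) uncertainty = variance of the sampled means
      data (aleatoric)  uncertainty = average of the predicted variances"""
    rng = np.random.default_rng(seed)
    means, variances = [], []
    for _ in range(n_samples):
        mask = rng.random(W1.shape[1]) >= p_drop      # Bernoulli dropout mask
        h = np.tanh(x @ W1) * mask / (1 - p_drop)     # inverted-dropout scaling
        mu, log_var = h @ W2                          # two-output head
        means.append(mu)
        variances.append(np.exp(log_var))
    means = np.array(means)
    return means.mean(), np.var(means), np.mean(variances)

rng = np.random.default_rng(1)
x = rng.normal(size=8)                                # toy input features
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 2))              # [mean, log-variance] head
pred, model_unc, data_unc = mc_dropout_forecast(x, W1, W2)
```

In a recurrent forecaster the same idea applies per time step; the appeal, as the abstract notes, is that no architectural change beyond dropout and a variance output is required.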
Semi-Supervised Deep Learning Representations in Earth Observation Based Forest Management
In this study, we examine the potential of several self-supervised deep learning models in predicting forest attributes and detecting forest changes using ESA Sentinel-1 and Sentinel-2 images. The performance of the proposed deep learning models is compared to established conventional machine learning approaches. Studied use cases include mapping of forest disturbance (windthrown forests, snow-load damage) using deep change vector analysis, and forest height mapping using UNet+ based models, momentum contrast, and regression modeling. Study areas were represented by several boreal forest sites in Finland. Our results indicate that the developed methods achieve superior classification and prediction accuracies compared to traditional methodologies and minimize the amount of necessary in-situ forestry data.
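Deep change vector analysis, used above for disturbance mapping, compares learned per-pixel features at two dates and flags pixels whose feature difference is large. A minimal sketch under stated assumptions (a stand-in random projection plays the role of the pretrained encoder; names, sizes, and the percentile threshold are hypothetical):

```python
import numpy as np

def encode(img, W):
    """Stand-in pixel-wise feature extractor (a real model would be a
    pretrained CNN): maps each pixel's spectral bands to a feature vector."""
    return np.tanh(img @ W)

def deep_change_vector(img_t1, img_t2, W, percentile=95):
    """Deep change vector analysis: change magnitude is the per-pixel
    Euclidean norm of the feature difference; pixels above a percentile
    threshold are flagged as changed (e.g. windthrow candidates)."""
    diff = encode(img_t2, W) - encode(img_t1, W)
    magnitude = np.linalg.norm(diff, axis=-1)
    threshold = np.percentile(magnitude, percentile)
    return magnitude, magnitude > threshold

rng = np.random.default_rng(0)
H, W_, bands, feats = 32, 32, 4, 8
W = rng.normal(scale=0.5, size=(bands, feats))
t1 = rng.normal(size=(H, W_, bands))
t2 = t1.copy()
t2[10:14, 10:14] += 2.0           # simulate a localized disturbance
mag, changed = deep_change_vector(t1, t2, W)
```

Working in feature space rather than raw band space is what lets the self-supervised encoder suppress nuisance variation (illumination, season) while keeping structural change.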
End-to-end learned early classification of time series for in-season crop type mapping
Image_2_Short-term runoff forecasting in an alpine catchment with a long short-term memory neural network.png
The governing hydrological processes are expected to shift under climate change in the alpine regions of Switzerland. This raises the need for more adaptive and accurate methods to estimate river flow. In high-altitude catchments influenced by snow and glaciers, short-term flow forecasting is challenging, as the exact mechanisms of transient melting processes are difficult to model mathematically and are poorly understood to date. Machine learning methods, particularly temporally aware neural networks, have been shown to compare well with, and often outperform, process-based hydrological models on medium- and long-range forecasting. In this work, we evaluate a Long Short-Term Memory neural network (LSTM) for short-term prediction (up to three days) of hourly river flow in an alpine headwater catchment (Goms Valley, Switzerland). We compare the model with the regional standard, an existing process-based model (named MINERVE) that is used by local authorities and is calibrated on the study area. We found that the LSTM was more accurate than the process-based model on high flows and better represented the diurnal melting cycles of snow and glaciers in the area of interest. It was on par with MINERVE in estimating two flood events: the LSTM captured the dynamics of a precipitation-driven flood well, while underestimating the peak discharge during an event with varying conditions between rain and snow. Finally, we analyzed feature importances and tested the transferability of the trained LSTM on a neighboring catchment showing comparable topographic and hydrological features. The accurate results obtained highlight the applicability and competitiveness of data-driven temporal machine learning models with the existing process-based model in the study area.
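The LSTM's suitability for this task comes from its gated cell state, which can carry slow-moving storage (such as snowpack) across many hourly steps. A minimal NumPy sketch of the recurrence with random, untrained weights; the forcing inputs, sizes, and linear discharge head are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def lstm_step(x_t, h, c, Wx, Wh, b):
    """One LSTM step: gates computed from the current input (e.g. hourly
    precipitation/temperature) and the previous hidden state."""
    z = x_t @ Wx + h @ Wh + b
    i, f, g, o = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)    # cell state carries slow dynamics (e.g. snowpack)
    h = o * np.tanh(c)            # hidden state summarizes recent conditions
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden, horizon = 3, 16, 72        # 3 forcings, 72 hourly steps (3 days)
Wx = rng.normal(scale=0.3, size=(n_in, 4 * n_hidden))
Wh = rng.normal(scale=0.3, size=(n_hidden, 4 * n_hidden))
b = np.zeros(4 * n_hidden)
W_out = rng.normal(scale=0.3, size=(n_hidden,))

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
flows = []
for t in range(horizon):
    x_t = rng.normal(size=n_in)            # placeholder meteorological forcing
    h, c = lstm_step(x_t, h, c, Wx, Wh, b)
    flows.append(h @ W_out)                # linear head: predicted discharge
```

The forget gate `f` decides how much of the stored state persists each hour, which is one way such a model can reproduce the diurnal melt cycles noted in the abstract.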