Neural Fields for Interactive Visualization of Statistical Dependencies in 3D Simulation Ensembles
We present the first neural network that has learned to compactly represent
and can efficiently reconstruct the statistical dependencies between the values
of physical variables at different spatial locations in large 3D simulation
ensembles. Going beyond linear dependencies, we consider mutual information as
a measure of non-linear dependence. We demonstrate learning and reconstruction
with a large weather forecast ensemble comprising 1000 members, each storing
multiple physical variables at a 250 x 352 x 20 simulation grid. By
circumventing compute-intensive statistical estimators at runtime, we
demonstrate significantly reduced memory and computation requirements for
reconstructing the major dependence structures. This enables embedding the
estimator into a GPU-accelerated direct volume renderer and interactively
visualizing all mutual dependencies for a selected domain point.
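As an illustration of the kind of pointwise statistical estimator the neural field replaces, the following sketch computes a histogram-based mutual information estimate between the values of a variable at two locations across ensemble members. The data here are synthetic stand-ins; evaluating such an estimator for every pair of grid points at runtime is exactly the cost the paper's learned representation avoids.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate between two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint probability table
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0                         # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy "ensemble": 1000 members, one variable sampled at two locations.
rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = 0.8 * a + 0.2 * rng.normal(size=1000)   # strongly dependent pair
c = rng.normal(size=1000)                    # independent pair
print(mutual_information(a, b))  # dependent pair -> clearly larger value
print(mutual_information(a, c))  # independent pair -> near zero (binning bias)
```

Mutual information captures non-linear dependence that a correlation coefficient would miss, which is why the paper adopts it over linear measures.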
Sea level Projections with Machine Learning using Altimetry and Climate Model ensembles
Satellite altimeter observations retrieved since 1993 show that the global
mean sea level is rising at an unprecedented rate (3.4 mm/year). With almost
three decades of observations, we can now investigate the contributions of
anthropogenic climate-change signals such as greenhouse gases, aerosols, and
biomass burning in this rising sea level. We use machine learning (ML) to
investigate future patterns of sea level change. To understand the extent of
contributions from the climate-change signals, and to help in forecasting sea
level change in the future, we turn to climate model simulations. This work
presents a machine learning framework that exploits both satellite observations
and climate model simulations to generate sea level rise projections at a
2-degree resolution spatial grid, 30 years into the future. We train fully
connected neural networks (FCNNs) to predict altimeter values through a
non-linear fusion of the climate model hindcasts (for 1993-2019). The learned
FCNNs are then applied to future climate model projections to predict future
sea level patterns. We propose segmenting our spatial dataset into meaningful
clusters, and show that clustering improves the predictions of our ML model.
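A minimal sketch of the regression setup described above, using a fully connected network to fuse per-grid-cell climate model hindcasts into a prediction of the altimeter value. All data shapes and values here are synthetic placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical setup: for each 2-degree grid cell and year, the inputs are
# sea-level hindcasts from several climate models and the target is the
# altimeter observation. Shapes and values are illustrative only.
rng = np.random.default_rng(0)
n_samples, n_models = 2000, 8
X = rng.normal(size=(n_samples, n_models))          # model hindcasts (features)
w = rng.normal(size=n_models)
y = np.tanh(X @ w) + 0.1 * rng.normal(size=n_samples)  # "altimeter" target

# Non-linear fusion of the hindcasts with a small fully connected network.
fcnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
fcnn.fit(X[:1500], y[:1500])              # train on the hindcast period
score = fcnn.score(X[1500:], y[1500:])    # held-out fit before projecting
print(f"held-out R^2: {score:.2f}")
```

In the paper's pipeline, the trained network would then be applied to future climate model projections rather than held-out hindcast years.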
Detecting forced change within combined climate fields using explainable neural networks
Fall 2021. Includes bibliographical references. Assessing forced climate change requires the extraction of the forced signal from the background of climate noise. Traditionally, tools for extracting forced climate change signals have focused on one atmospheric variable at a time; however, using multiple variables can reduce noise and allow for easier detection of the forced response. Following previous work, we train artificial neural networks to predict the year of single- and multi-variable maps from forced climate model simulations. To perform this task, the neural networks learn patterns that allow them to discriminate between maps from different years; that is, the neural networks learn the patterns of the forced signal amidst the shroud of internal variability and climate model disagreement. When presented with combined input fields (multiple seasons, variables, or both), the neural networks are able to detect the signal of forced change earlier than when given single fields alone, by utilizing complex, nonlinear relationships between multiple variables and seasons. We use layer-wise relevance propagation, a neural network visualization tool, to identify the multivariate patterns learned by the neural networks that serve as reliable indicators of the forced response. These "indicator patterns" vary in time and between climate models, providing a template for investigating inter-model differences in the time evolution of the forced response. This work demonstrates how neural networks and their visualization tools can be harnessed to identify patterns of the forced signal within combined fields.
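The core idea, that combining fields makes the forced signal detectable earlier, can be illustrated with a toy year-prediction task. Each "map" below is a flattened field containing a weak forced trend buried in internal variability; a ridge regression stands in for the paper's artificial neural networks, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge  # simple stand-in for the paper's ANNs

rng = np.random.default_rng(0)
years = np.arange(1920, 2100)
npix, members = 50, 10
pattern = rng.normal(size=npix)
# Shared forced signal: a spatial pattern whose amplitude grows with time.
forced = np.outer(years - years.mean(), pattern) / 100.0

def make_variable(seed):
    """One variable's ensemble: forced signal plus per-member internal noise."""
    r = np.random.default_rng(seed)
    return forced + r.normal(scale=2.0, size=(members, len(years), npix))

v1, v2 = make_variable(1), make_variable(2)

def year_mae(*fields):
    """Fit on 8 members, report mean absolute year error on 2 held-out members."""
    X = np.concatenate(fields, axis=-1)              # stack variables per map
    Xtr, ytr = X[:8].reshape(-1, X.shape[-1]), np.tile(years, 8)
    Xte, yte = X[8:].reshape(-1, X.shape[-1]), np.tile(years, 2)
    model = Ridge(alpha=10.0).fit(Xtr, ytr)
    return float(np.abs(model.predict(Xte) - yte).mean())

print(year_mae(v1))      # single field
print(year_mae(v1, v2))  # combined fields, typically a smaller error
```

With independent internal variability in each variable, stacking fields averages down the noise around the shared forced pattern, which is the mechanism the abstract describes.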
Sample-based Uncertainty Quantification with a Single Deterministic Neural Network
Development of an accurate, flexible, and numerically efficient uncertainty
quantification (UQ) method is one of fundamental challenges in machine
learning. Previously, a UQ method called DISCO Nets has been proposed
(Bouchacourt et al., 2016), which trains a neural network by minimizing the
energy score. In this method, a random noise vector is concatenated with the
original input vector in order to produce a diverse ensemble forecast despite
using a single neural
network. While this method has shown promising performance on a hand pose
estimation task in computer vision, it remained unexplored whether this method
works as nicely for regression on tabular data, and how it competes with more
recent advanced UQ methods such as NGBoost. In this paper, we propose an
improved neural architecture of DISCO Nets that admits faster and more stable
training while only using a compact noise vector. We benchmark this approach on a variety of real-world tabular
datasets and confirm that it is competitive with or even superior to standard
UQ baselines. Moreover we observe that it exhibits better point forecast
performance than a neural network of the same size trained with the
conventional mean squared error. As another advantage of the proposed method,
we show that local feature importance computation methods such as SHAP can be
easily applied to any subregion of the predictive distribution. A new
elementary proof for the validity of using the energy score to learn predictive
distributions is also provided.
Comment: 16 pages, 17 figures, 2 tables. Accepted by the 14th International Conference on Neural Computation Theory and Applications (NCTA 2022), held as part of IJCCI 2022, October 24-26, 2022, Valletta, Malta.
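The energy score that DISCO Nets minimizes can be estimated directly from an ensemble of samples. The sketch below implements that estimator and checks that a well-located sample set scores better than a biased one; the sampling model here is a stand-in, not the paper's architecture.

```python
import numpy as np

def energy_score(samples, y):
    """Monte Carlo energy score of ensemble `samples` for outcome `y`.

    ES(F, y) = E||X - y|| - 0.5 * E||X - X'||. It is a strictly proper
    scoring rule: its expectation is minimized when the samples are drawn
    from the true predictive distribution.
    """
    samples = np.atleast_2d(samples)
    term1 = np.mean(np.linalg.norm(samples - y, axis=-1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=-1))
    return float(term1 - term2)

# A DISCO-Nets-style model maps (input, noise) -> sample, and drawing many
# noise vectors per input yields the ensemble scored here. We simply compare
# a well-calibrated ensemble with a mislocated one.
rng = np.random.default_rng(0)
truth = np.array([0.0, 0.0])
good = rng.normal(loc=0.0, size=(64, 2))  # samples centered on the truth
bad = rng.normal(loc=3.0, size=(64, 2))   # biased samples
print(energy_score(good, truth) < energy_score(bad, truth))  # True
```

Minimizing this score over training pairs is what drives the network's noise-conditioned outputs toward a full predictive distribution rather than a point forecast.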
Forecasting Operation Metrics for Virtualized Network Functions
Network Function Virtualization (NFV) is the key technology that allows modern network operators to provide flexible and efficient services by leveraging general-purpose private cloud infrastructures. In this work, we investigate the performance of a number of metric forecasting techniques based on machine learning and artificial intelligence, and provide insights on how they can support the decisions of NFV operation teams. Our analysis focuses on both infrastructure-level and service-level metrics. The former can be fetched directly from the monitoring system of an NFV infrastructure, whereas the latter are typically provided by the monitoring components of the individual virtualized network functions. The selected forecasting techniques are experimentally evaluated using real-life data exported from a production environment deployed within several Vodafone NFV data centers. The results show what the compared techniques can achieve in terms of forecasting accuracy and the computational cost required to train them on production data.
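A minimal sketch of the forecasting setup described above: predict an infrastructure-level metric (here, a synthetic CPU-load trace standing in for real NFV monitoring data) one step ahead from its lagged values, using a simple linear model as a baseline technique.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic hourly CPU-load trace with a daily cycle; real traces would come
# from the NFV infrastructure's monitoring system.
rng = np.random.default_rng(0)
t = np.arange(500)
cpu = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=2.0, size=t.size)

lags = 24  # one day of hourly history as features
X = np.stack([cpu[i:i + lags] for i in range(len(cpu) - lags)])
y = cpu[lags:]  # value one step after each history window

split = 400
model = LinearRegression().fit(X[:split], y[:split])
mae = float(np.abs(model.predict(X[split:]) - y[split:]).mean())
print(f"one-step MAE: {mae:.2f}")  # near the noise floor (noise std is 2.0)
```

The paper's comparison additionally weighs forecasting accuracy against the cost of training each technique on production data, which matters when models must be retrained frequently.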
Classification of Explainable Artificial Intelligence Methods through Their Output Formats
Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords "explainable artificial intelligence", "explainable machine learning", and "interpretable machine learning". A subsequent iterative search was carried out by checking the bibliography of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet the requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and by new regulations.