1,878 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
The consolidated European synthesis of CH₄ and N₂O emissions for the European Union and United Kingdom: 1990–2019
Knowledge of the spatial distribution of the fluxes of greenhouse gases (GHGs) and their temporal variability, as well as flux attribution to natural and anthropogenic processes, is essential to monitoring progress in mitigating anthropogenic emissions under the Paris Agreement and to informing its global stocktake. This study provides a consolidated synthesis of CH₄ and N₂O emissions using bottom-up (BU) and top-down (TD) approaches for the European Union and UK (EU27+UK) and updates earlier syntheses (Petrescu et al., 2020, 2021). The work integrates updated emission inventory data, process-based model results, data-driven sector model results and inverse modeling estimates, and it extends the previous period of 1990–2017 to 2019. BU and TD products are compared with European national greenhouse gas inventories (NGHGIs) reported by parties under the United Nations Framework Convention on Climate Change (UNFCCC) in 2021. Uncertainties in NGHGIs, as reported to the UNFCCC by the EU and its member states, are also included in the synthesis. Variations in estimates produced with other methods, such as atmospheric inversion models (TD) or spatially disaggregated inventory datasets (BU), arise from diverse sources, including within-model uncertainty related to parameterization as well as structural differences between models. Comparison of NGHGIs with the other approaches shows that the activities included are a key source of bias between estimates, e.g., anthropogenic versus natural fluxes, which in atmospheric inversions are sensitive to the prior geospatial distribution of emissions. For CH₄ emissions over the updated 2015–2019 period, which covers a sufficiently robust number of overlapping estimates and, most importantly, the NGHGIs, the anthropogenic BU approaches are directly comparable, accounting for mean emissions of 20.5 Tg CH₄ yr⁻¹ (EDGARv6.0, last year 2018) and 18.4 Tg CH₄ yr⁻¹ (GAINS, last year 2015), close to the NGHGI estimates of 17.5±2.1 Tg CH₄ yr⁻¹.
TD inversions give higher emission estimates, as they also detect natural emissions. Over the same period, high-resolution regional TD inversions report a mean emission of 34 Tg CH₄ yr⁻¹. Coarser-resolution global-scale TD inversions result in emission estimates of 23 and 24 Tg CH₄ yr⁻¹ inferred from GOSAT and surface (SURF) network atmospheric measurements, respectively. The natural peatland and mineral soil emissions from the JSBACH–HIMMELI model, natural river, lake and reservoir emissions, geological sources, and biomass burning together amount to 8 Tg CH₄ yr⁻¹ and could account for the gap between the NGHGIs and the inversions. For N₂O emissions over the 2015–2019 period, both BU products (EDGARv6.0 and GAINS) report a mean value of anthropogenic emissions of 0.9 Tg N₂O yr⁻¹, close to the NGHGI data (0.8 Tg N₂O yr⁻¹ ± 55 %). Over the same period, the mean of TD global and regional inversions was 1.4 Tg N₂O yr⁻¹ (excluding TOMCAT, which reported no data). The TD and BU comparison method defined in this study can be operationalized for future annual updates of the CH₄ and N₂O budgets at the national and EU27+UK scales. Future comparability will be enhanced by further steps involving analysis at finer temporal resolutions and estimation of emissions over intra-annual timescales, which is of great importance for CH₄ and N₂O and may help identify sector contributions to divergence between prior and posterior estimates at the annual and/or inter-annual scale. Even if comparison between CH₄ and N₂O inversion estimates and NGHGIs is currently highly uncertain because of the large spread in the inversion results, TD inversions inferred from atmospheric observations represent the most independent data against which inventory totals can be compared.
With anticipated improvements in atmospheric modeling and observations, as well as modeling of natural fluxes, TD inversions may arguably emerge as the most powerful tool for verifying emission inventories for CH₄, N₂O and other GHGs. The referenced datasets related to figures are visualized at https://doi.org/10.5281/zenodo.7553800 (Petrescu et al., 2023).
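As a quick sanity check, the budget-closure argument above can be reproduced arithmetically (figures transcribed from the abstract, in Tg CH₄ yr⁻¹; a rough illustration, not an official budget):

```python
# Values quoted in the abstract (Tg CH4 yr^-1), 2015-2019 means.
nghgi_anthropogenic = 17.5          # NGHGI estimate
natural_sources = 8.0               # peatlands/soils, inland waters, geology, fires
global_inversions = (23 + 24) / 2   # mean of GOSAT and SURF global TD estimates
regional_inversions = 34.0          # high-resolution regional TD mean

# Adding the natural sources to the anthropogenic inventory should land
# between the global and regional top-down totals.
expected_total = nghgi_anthropogenic + natural_sources
print(expected_total)               # -> 25.5
assert global_inversions < expected_total < regional_inversions
```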
The Geometry and Calculus of Losses
Statistical decision problems lie at the heart of statistical machine
learning. The simplest problems are binary and multiclass classification and
class probability estimation. Central to their definition is the choice of loss
function, which is the means by which the quality of a solution is evaluated.
In this paper we systematically develop the theory of loss functions for such
problems from a novel perspective whose basic ingredients are convex sets with
a particular structure. The loss function is defined as the subgradient of the
support function of the convex set. It is consequently automatically proper
(calibrated for probability estimation). This perspective provides three novel
opportunities. First, it enables the development of a fundamental relationship between
losses and (anti)-norms that appears not to have been noticed before. Second,
it enables the development of a calculus of losses induced by the calculus of
convex sets which allows the interpolation between different losses, and thus
is a potentially useful design tool for tailoring losses to particular problems.
In doing this we build upon, and considerably extend, existing results on
-sums of convex sets. Third, the perspective leads to a natural theory of
``polar'' loss functions, which are derived from the polar dual of the convex
set defining the loss, and which form a natural universal substitution function
for Vovk's aggregating algorithm.
Comment: 65 pages, 17 figures
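For orientation, the central construction can be sketched in our own notation (a summary of standard convex-analysis facts, not the paper's exact statement or sign conventions):

```latex
% Support function of a convex set C and the induced loss (sketch).
\[
  \sigma_C(p) \;=\; \sup_{x \in C} \langle p, x \rangle,
  \qquad
  \ell(p) \;\in\; \partial \sigma_C(p).
\]
% Since \sigma_C is positively homogeneous of degree one, Euler's identity
% for homogeneous functions gives
\[
  \langle p, \ell(p) \rangle \;=\; \sigma_C(p),
\]
% which is the identity underlying the automatic properness claimed above
% (modulo the paper's orientation conventions for the set C).
```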
Pairwise versus mutual independence: visualisation, actuarial applications and central limit theorems
Accurately capturing the dependence between risks, if it exists, is an increasingly relevant topic of actuarial research. In recent years, several authors have started to relax the traditional 'independence assumption', in a variety of actuarial settings. While it is known that 'mutual independence' between random variables is not equivalent to their 'pairwise independence', this thesis aims to provide a better understanding of the materiality of this difference. The distinction between mutual and pairwise independence matters because, in practice, dependence is often assessed via pairs only, e.g., through correlation matrices, rank-based measures of association, scatterplot matrices, heat-maps, etc. Using such pairwise methods, it is possible to miss some forms of dependence. In this thesis, we explore, from several angles, how material the difference between pairwise and mutual independence is.
We provide relevant background and motivation for this thesis in Chapter 1, then conduct a literature review in Chapter 2.
In Chapter 3, we focus on visualising the difference between pairwise and mutual independence. To do so, we propose a series of theoretical examples (some of them new) where random variables are pairwise independent but (mutually) dependent, in short, PIBD. We then develop new visualisation tools and use them to illustrate what PIBD variables can look like. We showcase that the dependence involved is possibly very strong. We also use our visualisation tools to identify subtle forms of dependence, which would otherwise be hard to detect.
In Chapter 4, we review common dependence models (such as elliptical distributions and Archimedean copulas) used in actuarial science and show that they do not allow for the possibility of PIBD data. We also investigate concrete consequences of the 'non-equivalence' between pairwise and mutual independence. We establish that many results which hold for mutually independent variables do not hold under pairwise independence alone. These include results about finite sums of random variables, extreme value theory and bootstrap methods. This part thus illustrates what can potentially 'go wrong' if one assumes mutual independence where only pairwise independence holds.
Lastly, in Chapters 5 and 6, we investigate the question of what happens for PIBD variables 'in the limit', i.e., when the sample size goes to infinity. We want to see if the 'problems' caused by dependence vanish for sufficiently large samples. This is a broad question, and we concentrate on the important classical Central Limit Theorem (CLT), for which we find that the answer is largely negative. In particular, we construct new sequences of PIBD variables (with arbitrary margins) for which a CLT does not hold. We derive explicitly the asymptotic distribution of the standardised mean of our sequences, which allows us to illustrate the extent of the 'failure' of a CLT for PIBD variables. We also propose a general methodology to construct dependent K-tuplewise independent (K an arbitrary integer) sequences of random variables with arbitrary margins. In the case K = 3, we use this methodology to derive explicit examples of triplewise independent sequences for which no CLT holds. These results illustrate that mutual independence is a crucial assumption within CLTs, and that having larger samples is not always a viable solution to the problem of non-independent data.
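The simplest textbook instance of a PIBD triple (a classic construction, not one of the thesis's new examples) can be checked exhaustively: two fair coin flips and their XOR are pairwise independent, yet the triple is mutually dependent.

```python
from itertools import product

# X, Y are fair coin flips, Z = X XOR Y (classic textbook construction).
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
p = 1 / len(outcomes)  # each (x, y) pair is equally likely

def prob(event):
    """Probability that the predicate `event` holds."""
    return sum(p for o in outcomes if event(o))

# Every pair is independent: P(A=a, B=b) = P(A=a) * P(B=b) = 1/4.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product((0, 1), repeat=2):
        joint = prob(lambda o: o[i] == a and o[j] == b)
        assert joint == prob(lambda o: o[i] == a) * prob(lambda o: o[j] == b)

# But the triple is mutually dependent: Z is a function of (X, Y), so
# e.g. the outcome (1, 1, 1) is impossible while independence predicts 1/8.
assert prob(lambda o: o == (1, 1, 1)) == 0
```

Any pairwise diagnostic (correlation matrices, scatterplot matrices) applied to this triple would see three perfectly independent pairs and miss a deterministic three-way dependence.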
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the appearance of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
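As a concrete illustration of the PCR rules discussed above, here is a minimal sketch of the standard two-source PCR5 combination on a two-hypothesis frame (toy masses of our own choosing; not code from the book, which provides Matlab implementations):

```python
# PCR5 combination of two basic belief assignments (bba) on the frame
# {A, B}, with focal elements A, B, and the disjunction A∪B ("AB").
def pcr5(m1, m2):
    A, B, AB = "A", "B", "AB"
    m = {}
    # Conjunctive consensus over the non-conflicting intersections.
    m[A] = m1[A]*m2[A] + m1[A]*m2[AB] + m1[AB]*m2[A]
    m[B] = m1[B]*m2[B] + m1[B]*m2[AB] + m1[AB]*m2[B]
    m[AB] = m1[AB]*m2[AB]
    # Redistribute each partial conflict m1(X)m2(Y), X∩Y = ∅, back to X and Y
    # proportionally to the masses that produced it (the PCR5 principle).
    for x, y, mx, my in [(A, B, m1[A], m2[B]), (B, A, m1[B], m2[A])]:
        if mx + my > 0:
            m[x] += mx * mx * my / (mx + my)
            m[y] += my * my * mx / (mx + my)
    return m

# Hypothetical toy masses for two conflicting sources of evidence.
m1 = {"A": 0.6, "B": 0.3, "AB": 0.1}
m2 = {"A": 0.2, "B": 0.7, "AB": 0.1}
m = pcr5(m1, m2)
assert abs(sum(m.values()) - 1.0) < 1e-12  # PCR5 returns a valid bba
```

Unlike Dempster's rule, which renormalizes the conflict away, PCR5 keeps the total mass at 1 by sending each elementary conflict back only to the hypotheses that generated it.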
Deciphering Radio Emission from Solar Coronal Mass Ejections using High-fidelity Spectropolarimetric Radio Imaging
Coronal mass ejections (CMEs) are large-scale expulsions of plasma and
magnetic fields from the Sun into the heliosphere and are the most important
driver of space weather. The geo-effectiveness of a CME is primarily determined
by its magnetic field strength and topology. Measurement of CME magnetic
fields, both in the corona and heliosphere, is essential for improving space
weather forecasting. Observations at radio wavelengths can provide several
remote measurement tools for estimating both strength and topology of the CME
magnetic fields. Among them, gyrosynchrotron (GS) emission produced by
mildly-relativistic electrons trapped in CME magnetic fields is one of the
promising methods to estimate magnetic field strength of CMEs at lower and
middle coronal heights. However, GS emissions from some parts of the CME are
much fainter than the quiet Sun emission and require high dynamic range (DR)
imaging for their detection. This thesis presents a state-of-the-art
calibration and imaging algorithm capable of routinely producing high DR
spectropolarimetric snapshot solar radio images using data from a new
technology radio telescope, the Murchison Widefield Array. This allows us to
detect much fainter GS emissions from CME plasma at much higher coronal
heights. For the first time, robust circular polarization measurements have
been jointly used with total intensity measurements to constrain the GS model
parameters, which has significantly improved the robustness of the estimated GS
model parameters. A piece of observational evidence is also found that
routinely used homogeneous and isotropic GS models may not always be sufficient
to model the observations. In the future, with upcoming sensitive telescopes
and physics-based forward models, it should be possible to relax some of these
assumptions and make this method more robust for estimating CME plasma
parameters at coronal heights.
Comment: 297 pages, 100 figures, 9 tables. Submitted at Tata Institute of Fundamental Research, Mumbai, India. Ph.D. Thesis
Multiscale hierarchical decomposition methods for images corrupted by multiplicative noise
Recovering images corrupted by multiplicative noise is a well-known
challenging task. Motivated by the success of multiscale hierarchical
decomposition methods (MHDM) in image processing, we adapt a variety of both
classical and new multiplicative noise removing models to the MHDM form. On the
basis of previous work, we further present a tight and a refined version of the
corresponding multiplicative MHDM. We discuss existence and uniqueness of
solutions for the proposed models, and additionally, provide convergence
properties. Moreover, we present a discrepancy principle stopping criterion
which prevents recovering excess noise in the multiscale reconstruction.
Through comprehensive numerical experiments and comparisons, we qualitatively
and quantitatively evaluate the validity of all proposed models for denoising
and deblurring images degraded by multiplicative noise. By construction, these
multiplicative multiscale hierarchical decomposition methods have the added
benefit of recovering many scales of an image, which can provide features of
interest beyond image denoising.
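To make the multiscale idea concrete, here is a minimal additive MHDM sketch with a quadratic regularizer solved in the Fourier domain (our own simplification for additive noise on a 1-D signal; the paper's models are multiplicative and use different regularizers):

```python
import numpy as np

# At each scale k we solve  min_u  lam_k * ||grad u||^2 + ||v_k - u||^2,
# whose minimizer has the closed form  u_hat = v_hat / (1 + lam_k * |xi|^2)
# in the Fourier domain. Halving lam at each step extracts finer scales
# from the residual, in the spirit of Tadmor-Nezzar-Vese decompositions.
def mhdm(f, lam0=8.0, n_scales=6):
    xi = 2 * np.pi * np.fft.fftfreq(f.size)      # discrete frequencies
    scales, v, lam = [], f.copy(), lam0
    for _ in range(n_scales):
        u = np.fft.ifft(np.fft.fft(v) / (1 + lam * xi**2)).real
        scales.append(u)                          # coarse-to-fine components
        v = v - u                                 # decompose the residual next
        lam /= 4                                  # weaker smoothing -> finer scale
    return scales

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
f = np.sign(np.sin(2 * np.pi * 3 * t)) + 0.1 * rng.standard_normal(t.size)
scales = mhdm(f)
partial = np.cumsum(scales, axis=0)               # partial sums u_0 + ... + u_k
errs = [np.linalg.norm(f - s) for s in partial]
# Each Fourier mode of the residual is damped by a factor in [0, 1), so the
# residual norm shrinks monotonically as scales accumulate.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(errs, errs[1:]))
```

The per-scale components `scales[k]` are exactly the "many scales of an image" mentioned above: early entries hold coarse structure, later ones hold texture and noise.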
The consolidated European synthesis of CH4 and N2O emissions for the European Union and United Kingdom : 1990-2019
Funding Information: We thank Aurélie Paquirissamy, Géraud Moulas and the ARTTIC team for the great managerial support offered during the project. FAOSTAT statistics are produced and disseminated with the support of its member countries to the FAO regular budget. Annual, gap-filled and harmonized NGHGI uncertainty estimates for the EU and its member states were provided by the EU GHG inventory team (European Environment Agency and its European Topic Centre on Climate change mitigation). Most top-down inverse simulations referred to in this paper rely for the derivation of optimized flux fields on observational data provided by surface stations that are part of networks like ICOS (datasets: 10.18160/P7E9-EKEA, Integrated Non-CO₂ Observing System, 2018a, and 10.18160/B3Q6-JKA0, Integrated Non-CO₂ Observing System, 2018b), AGAGE, NOAA (Obspack Globalview CH₄: 10.25925/20221001, Schuldt et al., 2017), CSIRO and/or WMO GAW. We thank all station PIs and their organizations for providing these valuable datasets. We acknowledge the work of other members of the EDGAR group (Edwin Schaaf, Jos Olivier) and the outstanding scientific contribution to the VERIFY project of Peter Bergamaschi. Timo Vesala thanks ICOS-Finland, University of Helsinki. The TM5-CAMS inversions are available from https://atmosphere.copernicus.eu (last access: June 2022); Arjo Segers acknowledges support from the Copernicus Atmosphere Monitoring Service, implemented by the European Centre for Medium-Range Weather Forecasts on behalf of the European Commission (grant no. CAMS2_55). This research has been supported by the European Commission, Horizon 2020 Framework Programme (VERIFY, grant no. 776810). Ronny Lauerwald received support from the CLand Convergence Institute. Prabir Patra received support from the Environment Research and Technology Development Fund (grant no. JPMEERF20182002) of the Environmental Restoration and Conservation Agency of Japan.
Pierre Regnier received financial support from the H2020 project ESM2025 – Earth System Models for the Future (grant no. 101003536). David Bastviken received support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (METLAKE, grant no. 725546). Greet Janssens-Maenhout received support from the European Union's Horizon 2020 research and innovation program (CoCO₂, grant no. 958927). Tuula Aalto received support from the Finnish Academy (grant nos. 351311 and 345531). Sönke Zaehle received support from the ERC consolidator grant QUINCY (grant no. 647204). Peer reviewed. Publisher PDF.
Natural Actor-Critic for Robust Reinforcement Learning with Function Approximation
We study robust reinforcement learning (RL) with the goal of determining a
well-performing policy that is robust against model mismatch between the
training simulator and the testing environment. Previous policy-based robust RL
algorithms mainly focus on the tabular setting under uncertainty sets that
facilitate robust policy evaluation, but are no longer tractable when the
number of states scales up. To this end, we propose two novel uncertainty set
formulations, one based on double sampling and the other on an integral
probability metric. Both make large-scale robust RL tractable even when one
only has access to a simulator. We propose a robust natural actor-critic (RNAC)
approach that incorporates the new uncertainty sets and employs function
approximation. We provide finite-time convergence guarantees for the proposed
RNAC algorithm to the optimal robust policy within the function approximation
error. Finally, we demonstrate the robust performance of the policy learned by
our proposed RNAC approach in multiple MuJoCo environments and a real-world
TurtleBot navigation task.
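To illustrate the underlying robust-MDP concept (this is not the paper's RNAC algorithm or its double-sampling/IPM uncertainty sets, which target exactly the large-scale settings where the tabular view breaks down), a tabular robust value iteration with a simple L1-ball uncertainty set might look like:

```python
import numpy as np

def worst_case_transition(p_nom, values, radius):
    """Adversarial kernel in the L1 ball ||p - p_nom||_1 <= radius: shift
    probability mass from the best-value successor to the worst-value one
    (exact for two successor states, greedy first step in general)."""
    p = p_nom.copy()
    lo, hi = np.argmin(values), np.argmax(values)
    shift = min(radius / 2, p[hi])
    p[hi] -= shift
    p[lo] += shift
    return p

def robust_value_iteration(P, R, radius, gamma=0.9, iters=200):
    """Robust Bellman iteration: back up against the worst kernel in the set."""
    n_s, n_a = R.shape
    V = np.zeros(n_s)
    for _ in range(iters):
        Q = np.empty((n_s, n_a))
        for s in range(n_s):
            for a in range(n_a):
                p = worst_case_transition(P[s, a], V, radius)
                Q[s, a] = R[s, a] + gamma * (p @ V)
        V = Q.max(axis=1)
    return V

# Hypothetical toy MDP: 2 states, 2 actions, nominal transitions P[s, a, s'].
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V_robust = robust_value_iteration(P, R, radius=0.2)
V_nominal = robust_value_iteration(P, R, radius=0.0)
assert np.all(V_robust <= V_nominal + 1e-9)  # pessimism can only lower value
```

The per-(s, a) inner minimization here is exactly what becomes intractable as the state space grows, motivating the paper's sampled and IPM-based uncertainty sets.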
Singularity Formation in the High-Dimensional Euler Equations and Sampling of High-Dimensional Distributions by Deep Generative Networks
High dimensionality brings both opportunities and challenges to the study of applied mathematics. This thesis consists of two parts. The first part explores the singularity formation of the axisymmetric incompressible Euler equations with no swirl in ℝⁿ, which is closely related to the Millennium Prize Problem on the global singularity of the Navier-Stokes equations. In this part, the high dimensionality contributes to the singularity formation in finite time by enhancing the strength of the vortex stretching term. The second part focuses on sampling from a high-dimensional distribution using deep generative networks, which has wide applications in the Bayesian inverse problem and the image synthesis task. The high dimensionality in this part becomes a significant challenge to the numerical algorithms, known as the curse of dimensionality.
In the first part of this thesis, we consider the singularity formation in two scenarios. In the first scenario, for the axisymmetric Euler equations with no swirl, we consider the case when the initial condition for the angular vorticity is C^α Hölder continuous. We provide convincing numerical examples where the solutions develop potential self-similar blow-up in finite time when the Hölder exponent α < α*, and this upper bound α* can asymptotically approach 1 - 2/n. This result supports a conjecture from Drivas and Elgindi [37], and generalizes it to the high-dimensional case. This potential blow-up is insensitive to the perturbation of initial data. Based on assumptions summarized from numerical experiments, we study a limiting case of the Euler equations, and obtain α* = 1 - 2/n, which agrees with the numerical result. For the general case, we propose a relatively simple one-dimensional model and numerically verify its approximation to the Euler equations. This one-dimensional model might suggest a possible way to show this finite-time blow-up scenario analytically. Compared to the first proved blow-up result of the 3D axisymmetric Euler equations with no swirl and Hölder continuous initial data by Elgindi in [40], our potential blow-up scenario has completely different scaling behavior and regularity of the initial condition. In the second scenario, we consider using smooth initial data, but modify the Euler equations by adding a factor ε as the coefficient of the convection terms to weaken the convection effect. The new model is called the weak convection model. We provide convincing numerical examples of the weak convection model where the solutions develop potential self-similar blow-up in finite time when the convection strength ε < ε*, and this upper bound ε* should be close to 1 - 2/n. This result is closely related to the infinite-dimensional case of an open question [37] stated by Drivas and Elgindi.
Our numerical observations also inspire us to approximate the weak convection model with a one-dimensional model. We give a rigorous proof that the one-dimensional model will develop finite-time blow-up if ε < 1 - 2/n, and study the approximation quality of the one-dimensional model to the weak convection model numerically, which could be beneficial to a rigorous proof of the potential finite-time blow-up.
In the second part of the thesis, we propose the Multiscale Invertible Generative Network (MsIGN) to sample from high-dimensional distributions by exploring the low-dimensional structure in the target distribution. The MsIGN models a transport map from a known reference distribution to the target distribution, and thus is very efficient in generating uncorrelated samples compared to MCMC-type methods. The MsIGN captures multiple modes in the target distribution by generating new samples hierarchically from a coarse scale to a fine scale with the help of a novel prior conditioning layer. The hierarchical structure of the MsIGN also allows training in a coarse-to-fine scale manner. The Jeffreys divergence is used as the objective function in training to avoid mode collapse. Importance sampling based on the prior conditioning layer is leveraged to estimate the Jeffreys divergence, which is intractable in previous deep generative networks. Numerically, when applied to two Bayesian inverse problems, the MsIGN clearly captures multiple modes in the high-dimensional posterior and approximates the posterior accurately, demonstrating its superior performance compared with previous methods. We also provide an ablation study to show the necessity of our proposed network architecture and training algorithm for good numerical performance. Moreover, we also apply the MsIGN to the image synthesis task, where it achieves superior performance in terms of bits-per-dimension value over other flow-based generative models and yields very good interpretability of its neurons in intermediate layers.