
    The Fifteenth Marcel Grossmann Meeting

    The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories including recent developments in string theory, to precision tests of general relativity including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics. Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma ray bursts, supernovas, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma ray astronomy, cosmic rays and the history of general relativity.

    Set-Valued Analysis

    This Special Issue contains eight original papers with a high impact in various domains of set-valued analysis. Set-valued analysis has made remarkable progress in the last 70 years, enriching itself continuously with new concepts, important results, and special applications. Different problems arising in the theory of control, economics, game theory, decision making, nonlinear programming, biomathematics, and statistics have strengthened the theoretical base and the specific techniques of set-valued analysis. The consistency of its theoretical approach and the multitude of its applications have transformed set-valued analysis into a reference field of modern mathematics, which attracts an impressive number of researchers

    Nonequilibrium Quantum Field Theory

    Bringing together the key ideas from nonequilibrium statistical mechanics and powerful methodology from quantum field theory, this 2008 book captures the essence of nonequilibrium quantum field theory. Beginning with the foundational aspects of the theory, the book presents important concepts and useful techniques, discusses issues of basic interest, and shows how thermal field, linear response, kinetic theories and hydrodynamics emerge. It also illustrates how these concepts are applied to research topics including nonequilibrium phase transitions, thermalization in relativistic heavy ion collisions, the nonequilibrium dynamics of Bose-Einstein condensation, and the generation of structures from quantum fluctuations in the early Universe. This self-contained book is a valuable reference for graduate students and researchers in particle physics, gravitation, cosmology, atomic-optical and condensed matter physics. It has been reissued as an Open Access publication

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any one individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data (e.g., measurements) arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, the daily advancement in engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, is raising both the demand for and the complexity of fusion. There is a great need to discover new theories to combine and analyze heterogeneous data arising from one or more sources. The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, this work focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches for data fusion have focused on combining inputs under the assumption of independence between them. However, often there are rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, we require a large amount of training data to avoid the problem of over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These regularization strategies alleviate the issue of over-fitting while training with limited data and also enable the user to consciously push the learned methods toward a predefined, or perhaps known, structure. Also, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based data fusion solutions has a high space complexity, which has limited the practical application of ChI-based data fusion methods to six or fewer input sources. To address the space complexity issue, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient descent approach that processes one observation at a time, enabling the applicability of ChI-based data fusion to higher dimensional data sets. In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include providing information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence and processes were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion are black boxes, e.g., deep learning. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
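    As a rough illustration of the aggregation at the core of this work, the sketch below computes the discrete Choquet integral of m source values with respect to a fuzzy measure stored over the 2^m subsets. The measure values and function names are our own toy example, not the dissertation's implementation.

```python
import numpy as np

def choquet_integral(inputs, measure):
    """Discrete Choquet integral of `inputs` w.r.t. the fuzzy measure `measure`.

    inputs  : 1-D array of m source values.
    measure : dict mapping frozenset of source indices -> measure value, with
              measure of the full set == 1 and monotone under set inclusion.
    """
    order = np.argsort(inputs)[::-1]          # visit sources by descending value
    result, prev_g, walked = 0.0, 0.0, frozenset()
    for idx in order:
        walked = walked | {idx}               # A_i = the top-i sources
        g = measure[walked]
        result += inputs[idx] * (g - prev_g)  # h_(i) * (g(A_i) - g(A_{i-1}))
        prev_g = g
    return result

# Toy fuzzy measure on 3 sources (hypothetical values, monotone by construction).
g = {frozenset(): 0.0,
     frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.5,
     frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.9, frozenset({1, 2}): 0.7,
     frozenset({0, 1, 2}): 1.0}

print(choquet_integral(np.array([0.8, 0.2, 0.6]), g))   # -> 0.64
```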

    Spationomy

    This open access book is based on "Spationomy – Spatial Exploration of Economic Data", an interdisciplinary and international ERASMUS+ project funded by the European Union. The project aims to exchange interdisciplinary knowledge in the fields of economics and geomatics. For the newly introduced courses, interdisciplinary learning materials were developed by a team of lecturers from four universities in three countries. In a first study block, students were taught methods from the two main research fields. Afterwards, the knowledge gained had to be applied in a project. For this international project, teams were formed, each consisting of one student from each participating university. The results achieved were presented at a summer school a few months later. At this event, more methodological knowledge was imparted to prepare students for a final simulation game about spatial and economic decision making. In a broader sense, the chapters present the methodological background of the project, give case studies, and show how the visualisation and the simulation game work.

    Optimal uncertainty quantification of a risk measurement from a computer code

    Uncertainty quantification in a safety analysis study can be conducted by considering the uncertain inputs of a physical system as a vector of random variables. The most widespread approach consists in running a computer model reproducing the physical phenomenon with different combinations of inputs drawn in accordance with their joint probability distribution. One can then study the resulting uncertainty in the output or estimate a specific quantity of interest (QoI). Because the computer model is assumed to be a deterministic black-box function, the QoI depends only on the choice of the input probability measure; it is formally represented as a scalar function defined on a measure space. We propose to gain robustness in the quantification of this QoI. Indeed, the probability distributions characterizing the uncertain inputs may themselves be uncertain: contradictory expert opinion may make it difficult to select a single probability distribution, and the lack of information on the input variables inevitably affects the choice of the distribution. As the uncertainty on the input distributions propagates to the QoI, an important consequence is that different choices of input distributions lead to different values of the QoI. The purpose of this thesis is to account for this second-level uncertainty. We propose to evaluate the maximum of the QoI over a space of probability measures, in an approach known as optimal uncertainty quantification (OUQ). Therefore, we do not specify a single precise input distribution, but rather a set of admissible probability measures defined through moment constraints, over which the QoI is then optimized. After presenting theoretical results showing that the optimization domain of the QoI can be reduced to the extreme points of the measure space, we present several quantities of interest satisfying the assumptions of the problem. This thesis illustrates the whole methodology on several application cases, one of them being a real nuclear engineering case studying the evolution of the peak cladding temperature of fuel rods during an intermediate-break loss-of-coolant accident.
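    To make the OUQ idea concrete, the sketch below numerically bounds a simple QoI, P(X >= T), over all input distributions with a prescribed mean and variance, exploiting the fact that the extreme points of such a moment class are discrete measures supported on a few points. It is our own toy example (penalty-based, using scipy), not the thesis code; the result should be close to the classical Cantelli bound.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Worst-case P(X >= T) over all distributions with mean MU and variance SIG2.
# Extreme points of this moment class are discrete measures on at most three
# points, so we optimise over three locations and three weights, enforcing the
# two moment constraints with a quadratic penalty.
MU, SIG2, T = 0.0, 1.0, 2.0

def qoi_and_moments(z):
    x, w = z[:3], z[3:] + 1e-12
    w = w / w.sum()                               # weights on the probability simplex
    mean = np.dot(w, x)
    var = np.dot(w, (x - mean) ** 2)
    return np.dot(w, (x >= T).astype(float)), mean, var

def objective(z):                                 # maximise the QoI under penalty
    qoi, mean, var = qoi_and_moments(z)
    return -qoi + 1e3 * ((mean - MU) ** 2 + (var - SIG2) ** 2)

bounds = [(-10.0, 10.0)] * 3 + [(0.0, 1.0)] * 3   # point locations, raw weights
result = differential_evolution(objective, bounds, seed=0, maxiter=2000, tol=1e-10)
print("numerical upper bound :", qoi_and_moments(result.x)[0])
print("Cantelli (exact) bound:", SIG2 / (SIG2 + (T - MU) ** 2))
```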

    The calibration of option pricing models


    Numerical scalar curvature deformation and a gluing construction

    In this work a new numerical technique to prepare Cauchy data for the initial value problem (IVP) formulation of Einstein's field equations (EFE) is presented. Our method is directly inspired by the exterior asymptotic gluing (EAG) result of Corvino (2000). The argument assumes a moment-in-time symmetry and allows a composite initial data set to be assembled from (a finite subdomain of) a known asymptotically Euclidean initial data set which is glued, in a controlled manner, over a compact spatial region to an exterior Schwarzschildean representative. We demonstrate how (Corvino, 2000) may be directly adapted to a numerical scheme and, under the assumption of axisymmetry, construct composite initial data satisfying the Hamiltonian constraint and featuring internal binary black holes (BBH) glued to exterior Schwarzschild initial data in isotropic form. The generality of the method is shown in a comparison of properties of EAG composite initial data sets featuring internal BBHs as modelled by Brill-Lindquist and Misner data. The geometric-analysis character of gluing methods requires work within suitably weighted function spaces, which, together with a technical impediment preventing (Corvino, 2000) from being fully constructive, is the principal difficulty in devising a numerical technique. Thus the single previous attempt, by Giulini and Holzegel (2005) (recently implemented by Doulis and Rinne (2016)), sought to avoid this by embedding the result within the well-known Lichnerowicz-York conformal framework, which required ad hoc assumptions on the solution form and a formal perturbative argument to show that EAG may proceed. In (Giulini and Holzegel, 2005) it was further claimed that judicious engineering of EAG can serve to reduce the presence of spurious gravitational radiation; unfortunately, in line with the general conclusion of (Doulis and Rinne, 2016), our numerical investigation does not appear to indicate that this is the case. Concretising the sought initial data to be specified with respect to a spatial manifold with underlying topology R×S², our method exploits a variety of pseudo-spectral (PS) techniques. A combination of the eth-formalism and spin-weighted spherical harmonics together with a novel complex-analytic based numerical approach is utilised. This is enabled by our Python 3 based numerical toolkit allowing for unified just-in-time compiled, distributed calculations with seamless extension to arbitrary precision for problems involving generic, geometric partial differential equations (PDE) as specified by tensorial expressions. Additional features include a layer of abstraction that allows for automatic reduction of indicial (i.e., tensorial) expressions together with grid remapping based on chart specification; hence straightforward implementation of IVP formulations of the EFE such as ADM-York or ADM-York-NOR is possible. Code-base verification is performed by evolving the polarised Gowdy T³ space-time with the above formulations, utilising high-order, explicit time integrators in the method-of-lines approach combined with PS techniques. As the initial data we prepare has a precise (Schwarzschild) exterior, this may be of interest to global evolution schemes that incorporate information from spatial infinity. Furthermore, our approach may shed light on how more general gluing techniques could potentially be adapted for numerical work. The code-base we have developed may also be of interest in application to other problems involving geometric PDEs.
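    As a small illustration of the "explicit time integrator plus pseudo-spectral spatial discretisation" pattern mentioned above, the toy sketch below evolves the periodic advection equation with a Fourier pseudo-spectral derivative and classical RK4 in the method-of-lines approach. It is our own minimal example, not part of the thesis code-base.

```python
import numpy as np

# Periodic advection u_t + c u_x = 0 on [0, 2*pi): Fourier pseudo-spectral
# derivative in space, classical RK4 in time (method of lines).
N, c = 128, 1.0
x = 2.0 * np.pi * np.arange(N) / N
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)       # i * integer wavenumbers

def rhs(u):
    u_x = np.real(np.fft.ifft(ik * np.fft.fft(u)))   # spectral derivative
    return -c * u_x

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u0 = np.exp(np.sin(x))                       # smooth periodic initial data
u, n_steps = u0.copy(), 4000
dt = 2.0 * np.pi / n_steps                   # one full period: solution returns to u0
for _ in range(n_steps):
    u = rk4_step(u, dt)

print("max error after one period:", np.max(np.abs(u - u0)))
```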

    Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks

    Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not observed the same revolution. Specifically, most neural fusion approaches are ad hoc, are not understood, are distributed versus localized, and/or explainability is low (if present at all). Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to a stochastic gradient descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP is applied to the fusion of a set of heterogeneous architecture deep models in remote sensing. We show an improvement in model accuracy and our previously established XAI indices shed light on the quality of our data, model, and its decisions. (Comment: IEEE Transactions on Fuzzy Systems)
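    A hedged sketch of the core iChIMP idea follows: a Choquet-integral layer whose fuzzy measure is learned online by stochastic gradient descent. To keep it short, the measure is represented through non-negative Moebius coefficients obtained from a softmax, so monotonicity holds by construction rather than via the paper's explicit inequality constraints; the names and synthetic data are ours, not the authors' implementation.

```python
from itertools import combinations
import numpy as np

# Choquet integral via its Moebius form: ChI(h) = sum_B m_B * min_{i in B} h_i,
# with m_B = softmax(theta)_B >= 0 summing to one (a belief measure), which
# guarantees a monotone, normalised fuzzy measure.
rng = np.random.default_rng(0)
n_sources = 3
subsets = [s for r in range(1, n_sources + 1)
           for s in combinations(range(n_sources), r)]

def subset_minima(h):
    """v_B(h) = min_{i in B} h_i for every non-empty subset B."""
    return np.array([min(h[i] for i in B) for B in subsets])

def forward(theta, h):
    m = np.exp(theta - theta.max())
    m /= m.sum()                             # Moebius coefficients on the simplex
    v = subset_minima(h)
    return float(m @ v), m, v

theta_true = rng.normal(size=len(subsets))   # hidden "true" measure to recover
theta, lr = np.zeros(len(subsets)), 0.5
for _ in range(5000):                        # online: one observation per step
    h = rng.uniform(size=n_sources)
    y_true, _, _ = forward(theta_true, h)
    y_hat, m, v = forward(theta, h)
    grad = 2.0 * (y_hat - y_true) * m * (v - y_hat)   # d(squared error)/d theta
    theta -= lr * grad

h_test = rng.uniform(size=n_sources)
print("target:", forward(theta_true, h_test)[0], "learned:", forward(theta, h_test)[0])
```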