Parameter Identification in a Probabilistic Setting
Parameter identification problems are formulated in a probabilistic language,
where the randomness reflects the uncertainty about the knowledge of the true
values. This setting makes it conceptually easy to incorporate new information,
e.g. from a measurement, via Bayes's theorem. The unknown quantity is modelled
as a (possibly high-dimensional) random variable. Such a
description has two constituents, the measurable function and the measure. One
group of methods is identified as updating the measure, the other group changes
the measurable function. We connect both groups with the relatively recent
methods of functional approximation of stochastic problems, and introduce,
especially in combination with the second group of methods, a new procedure
which does not need any sampling and hence works completely deterministically.
It also seems to be the fastest and most reliable when compared with other
methods. We show by example that it also works for highly nonlinear,
non-smooth problems with non-Gaussian measures.
Comment: 29 pages, 16 figures
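The second group of methods (updating the measurable function rather than the measure) can be sketched with a linear, "Kalman-like" ensemble update. This is only a minimal illustration: the cubic forward map, the prior, and the noise level below are assumptions, not the setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble for the unknown parameter q (hypothetical prior N(1, 0.5^2))
n = 10_000
q = rng.normal(1.0, 0.5, n)

# Hypothetical nonlinear forward map and a synthetic measurement of the truth
def G(q):
    return q**3

noise_std = 0.1
y_obs = G(1.4)  # synthetic observation; measurement noise omitted for clarity

# Linear update of the measurable function: q_post = q + K * (y_obs - y),
# with the gain K estimated from ensemble (cross-)covariances
y = G(q) + rng.normal(0.0, noise_std, n)
K = np.cov(q, y)[0, 1] / np.var(y)
q_post = q + K * (y_obs - y)

# The posterior mean moves from the prior mean (about 1.0) toward 1.4
print(np.mean(q), np.mean(q_post))
```

Because the update is a deterministic map applied to the prior random variable, no posterior sampling is required; the linear gain is only the simplest instance of the functional approximations discussed above.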
Coordinate Transformation and Polynomial Chaos for the Bayesian Inference of a Gaussian Process with Parametrized Prior Covariance Function
This paper addresses model dimensionality reduction for Bayesian inference
based on prior Gaussian fields with uncertainty in the covariance function
hyper-parameters. The dimensionality reduction is traditionally achieved using
the Karhunen-\Loeve expansion of a prior Gaussian process assuming covariance
function with fixed hyper-parameters, despite the fact that these are uncertain
in nature. The posterior distribution of the Karhunen-Lo\`{e}ve coordinates is
then inferred using available observations. The resulting inferred field is
therefore dependent on the assumed hyper-parameters. Here, we seek to
efficiently estimate both the field and covariance hyper-parameters using
Bayesian inference. To this end, a generalized Karhunen-Loève expansion is
derived using a coordinate transformation to account for the dependence with
respect to the covariance hyper-parameters. Polynomial Chaos expansions are
employed to accelerate the Bayesian inference using similar coordinate
transformations, which enable us to avoid explicitly expanding the dependence
of the solution on the uncertain hyper-parameters. We demonstrate the
feasibility of the proposed method on a transient diffusion equation by
inferring spatially-varying log-diffusivity fields from noisy data. The
inferred profiles were found to be closer to the true profiles when the
hyper-parameters' uncertainty was included in the inference formulation.
Comment: 34 pages, 17 figures
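As a purely illustrative sketch of the standard construction being generalized here, a truncated Karhunen-Loève basis can be computed from the eigendecomposition of a discretized covariance matrix for one fixed hyper-parameter value. The squared-exponential covariance and the correlation length `ell` below are assumptions for illustration.

```python
import numpy as np

# Discretized 1-D domain and a squared-exponential covariance with
# hyper-parameter ell (the correlation length treated as uncertain above)
x = np.linspace(0.0, 1.0, 200)

def covariance(ell, sigma=1.0):
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def kl_modes(ell, n_modes=10):
    """Truncated Karhunen-Loeve basis for one fixed hyper-parameter value."""
    C = covariance(ell)
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # largest eigenvalues first
    return vals[:n_modes], vecs[:, :n_modes]

# A field sample is a combination of modes with standard normal coordinates xi
rng = np.random.default_rng(1)
vals, vecs = kl_modes(ell=0.2)
xi = rng.standard_normal(vals.size)
field = vecs @ (np.sqrt(np.maximum(vals, 0.0)) * xi)
print(field.shape)  # (200,)
```

In the traditional approach the basis depends on the fixed `ell`; the generalized expansion described above instead uses a coordinate transformation so that one set of reference coordinates can represent the field across hyper-parameter values.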
Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins
In this work we propose an Uncertainty Quantification methodology for
sedimentary basins evolution under mechanical and geochemical compaction
processes, which we model as a coupled, time-dependent, non-linear,
one-dimensional (depth-only) system of PDEs with uncertain parameters. While in
previous works (Formaggia et al., 2013; Porta et al., 2014) we assumed a
simplified depositional history with only one material, in this work we
consider multi-layered basins, in which each layer is characterized by a
different material, and hence by different properties. This setting requires
several improvements with respect to our earlier works, both concerning the
deterministic solver and the stochastic discretization. On the deterministic
side, we replace the previous fixed-point iterative solver with a more
efficient Newton solver at each step of the time-discretization. On the
stochastic side, the multi-layered structure gives rise to discontinuities in
the dependence of the state variables on the uncertain parameters, which must
be treated appropriately for surrogate modeling techniques, such as sparse
grids, to be effective. To this end, we propose an innovative methodology which relies
on a change of coordinate system to align the discontinuities of the target
function within the random parameter space. The reference coordinate system is
built by exploiting physical features of the problem at hand. We employ the
locations of material interfaces, which display a smooth dependence on the
random parameters and are therefore amenable to sparse grid polynomial
approximations. We showcase the capabilities of our numerical methodologies
through two synthetic test cases. In particular, we show that our methodology
reproduces with high accuracy the multi-modal probability density functions
displayed by target state variables (e.g., porosity).
Comment: 25 pages, 30 figures
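The replacement of the fixed-point sweep by a Newton iteration on the deterministic side can be sketched on a toy scalar compaction-style relation. The specific porosity law, coefficients, and stress feedback below are hypothetical, not those of the basin model.

```python
import numpy as np

# Toy compaction-style relation at one time step: find phi with
#   f(phi) = phi - phi0 * exp(-b * s(phi)) = 0
# (phi0, b, and the stress feedback s are all hypothetical choices)
phi0, b = 0.6, 2.0
s = lambda phi: 1.0 - phi
f = lambda phi: phi - phi0 * np.exp(-b * s(phi))
df = lambda phi: 1.0 - phi0 * b * np.exp(-b * s(phi))  # derivative of f

# Newton iteration (in place of the fixed-point sweep
# phi <- phi0 * exp(-b * s(phi)))
phi = 0.5
for k in range(20):
    step = f(phi) / df(phi)
    phi -= step
    if abs(step) < 1e-12:
        break

print(k, phi)  # converges in a handful of iterations
```

Newton's quadratic local convergence typically cuts the per-time-step iteration count substantially compared with the linearly convergent fixed-point map, which is the efficiency gain described above.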
EMISSION ABATEMENT VERSUS DEVELOPMENT AS STRATEGIES TO REDUCE VULNERABILITY TO CLIMATE CHANGE: AN APPLICATION OF FUND
Poorer countries are generally believed to be more vulnerable to climate change than richer countries, because poorer countries are more exposed and have less adaptive capacity. This suggests that, in principle, there are two ways of reducing vulnerability to climate change: economic growth and greenhouse gas emission reduction. Using a complex climate change impact model, in which development is an important determinant of vulnerability, we test the hypothesis that development aid is more effective at reducing impacts than emission abatement is. The hypothesis is barely rejected for Asia but strongly accepted for Latin America and, particularly, Africa. The explanation for the difference is that development (aid) reduces vulnerabilities in some sectors (infectious diseases, water resources, agriculture) but increases vulnerabilities in others (cardiovascular diseases, energy consumption). However, climate change impacts are much higher in Latin America and Africa than in Asia, so that money spent on emission reduction for the sake of avoiding impacts in developing countries is better spent on vulnerability reduction in those countries.
Keywords: climate change, climate change impacts, vulnerability, adaptive capacity, development
The arrow of time and the nature of spacetime
This paper extends the work of a previous paper [arXiv:1208.2611] on the flow
of time, to consider the origin of the arrow of time. It proposes that a `past
condition' cascades down from cosmological to micro scales, being realized in
many microstructures and setting the arrow of time at the quantum level by
top-down causation. This physics arrow of time then propagates up, through
underlying emergence of higher level structures, to geology, astronomy,
engineering, and biology. The appropriate space-time picture to view all this
is an emergent block universe (`EBU'), that recognizes the way the present is
different from both the past and the future. This essential difference is the
ultimate reason the arrow of time has to be the way it is.
Comment: 56 pages, 7 figures
Numerical Methods for PDE Constrained Optimization with Uncertain Data
Optimization problems governed by partial differential equations (PDEs) arise in many applications in the form of optimal control, optimal design, or parameter identification problems. In most applications, parameters in the governing PDEs are not deterministic, but rather have to be modeled as random variables or, more generally, as random fields. It is crucial to capture and quantify the uncertainty in such problems rather than to simply replace the uncertain coefficients with their mean values. However, treating the uncertainty adequately and in a computationally tractable manner poses many mathematical challenges. The numerical solution of optimization problems governed by stochastic PDEs builds on mathematical subareas, which so far have been largely investigated in separate communities: Stochastic Programming, Numerical Solution of Stochastic PDEs, and PDE Constrained Optimization.
The workshop provided an impetus toward cross-fertilization of these disciplines, which was also the subject of several scientific discussions. Future exchanges of ideas between these areas can be expected to give rise to new insights and powerful new numerical methods.
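The point that uncertain coefficients should not simply be replaced by their mean values can be seen in a minimal, hypothetical example: a scalar "state equation" a*u = z with a lognormal coefficient, where the control z minimizes the expected misfit from a target state via a sample average. The distribution and target are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar "state equation" a * u = z with random coefficient a ~ lognormal.
# The control z should bring the state u = z / a close to the target u* = 1:
#   J(z) = E[(z / a - 1)^2],  with minimizer z* = E[1/a] / E[1/a^2]
a = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

z_saa = np.mean(1 / a) / np.mean(1 / a**2)  # sample-average (SAA) minimizer
z_naive = np.mean(a)                        # control designed for the mean coefficient

J = lambda z: np.mean((z / a - 1.0) ** 2)
print(J(z_saa), J(z_naive))  # the SAA control gives the lower expected misfit
```

The sample-average control differs markedly from the one designed for the mean coefficient; closing exactly this gap in the PDE-constrained setting is what requires the combined machinery of stochastic programming, stochastic PDE solvers, and PDE-constrained optimization discussed above.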