Ensemble Kalman filter for neural network based one-shot inversion
We study the use of novel techniques arising in machine learning for inverse
problems. Our approach replaces the complex forward model by a neural network,
which is trained simultaneously, in a one-shot sense, while the unknown
parameters are estimated from the data; i.e., the neural network is trained
only for the particular unknown parameter at hand. By establishing a link to
the Bayesian approach to inverse problems, we develop an algorithmic framework
that ensures the feasibility of the parameter estimate with respect to the
forward model. We propose an efficient, derivative-free optimization method
based on variants of ensemble Kalman inversion. Numerical experiments show
that the ensemble Kalman filter for neural network based one-shot inversion is
a promising direction for combining optimization and machine learning
techniques in inverse problems.
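As background, the derivative-free ensemble Kalman inversion update underlying this line of work can be sketched in a few lines. The forward map, ensemble size, noise level, and iteration count below are illustrative toy choices, not those of the paper:

```python
import numpy as np

def eki_step(U, G, y, gamma, rng):
    """One stochastic ensemble Kalman inversion step.
    U: (J, d) ensemble of parameter estimates; G: forward map; y: data."""
    W = np.array([G(u) for u in U])              # (J, m) forward evaluations
    u_bar, w_bar = U.mean(axis=0), W.mean(axis=0)
    Cuw = (U - u_bar).T @ (W - w_bar) / len(U)   # cross-covariance (d, m)
    Cww = (W - w_bar).T @ (W - w_bar) / len(U)   # output covariance (m, m)
    K = Cuw @ np.linalg.inv(Cww + gamma * np.eye(len(y)))
    # perturb the data for each member (stochastic EKI variant)
    Y = y + rng.normal(scale=np.sqrt(gamma), size=(len(U), len(y)))
    return U + (Y - W) @ K.T

# toy linear inverse problem: recover u_true from y = A @ u_true
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
y = A @ u_true
G = lambda u: A @ u
U = rng.normal(size=(50, 2))                     # initial ensemble
for _ in range(30):
    U = eki_step(U, G, y, gamma=1e-4, rng=rng)
print(np.round(U.mean(axis=0), 2))
```

Each update uses only ensemble covariances and forward-model evaluations, never gradients, which is what makes the method attractive when the forward map is a neural network trained on the fly.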
Ensemble Feedback Stabilization of Linear Systems
Stabilization of linear control systems with parameter-dependent system
matrices is investigated. A Riccati-based feedback mechanism is proposed and
analyzed. It is constructed by means of an ensemble of parameters from a
training set. This single feedback stabilizes all systems of the training set,
as well as systems in its vicinity. Moreover, its suboptimality with respect
to the optimal feedback for each individual parameter in the training set can
be quantified.
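As a minimal illustration of the idea (a simplified stand-in, not the paper's construction), one can compute a single Riccati-based gain from the averaged system over the training set and verify that it stabilizes every ensemble member. All matrices and the parameter range below are invented for the sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Parameter-dependent system matrix A(theta); shared B, Q, R.
def A(theta):
    return np.array([[0.0, 1.0], [theta, 0.1]])  # unstable for theta > 0

B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
thetas = np.linspace(0.5, 2.0, 5)                # training set of parameters

# Illustrative ensemble feedback: Riccati gain for the averaged system
A_bar = sum(A(t) for t in thetas) / len(thetas)
P = solve_continuous_are(A_bar, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                  # feedback u = -K x

# check: the single gain K stabilizes every system in the training set
for t in thetas:
    print(t, np.linalg.eigvals(A(t) - B @ K).real.max() < 0)
```

The printed checks confirm that one gain, computed once, places the closed-loop eigenvalues of every training system in the open left half-plane; robustness to parameters near the training set then follows from continuity of the eigenvalues.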
A quasi-Monte Carlo Method for an Optimal Control Problem Under Uncertainty
We study an optimal control problem under uncertainty, where the target
function is the solution of an elliptic partial differential equation with
random coefficients, steered by a control function. The robust formulation of
the optimization problem is stated as a high-dimensional integration problem
over the stochastic variables. It is well known that carrying out a
high-dimensional numerical integration of this kind using a Monte Carlo method
has a notoriously slow convergence rate; meanwhile, a faster rate of
convergence can potentially be obtained by using sparse grid quadratures, but
these lead to discretized systems that are non-convex due to the involvement of
negative quadrature weights. In this paper, we analyze instead the application
of a quasi-Monte Carlo method, which retains the desirable convexity structure
of the system and has a faster convergence rate compared to ordinary Monte
Carlo methods. In particular, we show that under moderate assumptions on the
decay of the input random field, the error rate obtained by using a specially
designed, randomly shifted rank-1 lattice quadrature rule is essentially
inversely proportional to the number of quadrature nodes. The overall
discretization error of the problem, consisting of the dimension truncation
error, finite element discretization error and quasi-Monte Carlo quadrature
error, is derived in detail. We assess the theoretical findings in numerical
experiments.
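The randomly shifted rank-1 lattice rule mentioned above can be sketched in a few lines. The generating vector below is illustrative (a production rule would come from a component-by-component construction), and the integrand is a toy multilinear function with known integral 1:

```python
import numpy as np

def shifted_lattice(n, z, shift):
    """Points of a rank-1 lattice rule with a random shift, on [0,1]^s."""
    k = np.arange(n)[:, None]
    return np.mod(k * z[None, :] / n + shift[None, :], 1.0)

rng = np.random.default_rng(1)
s, n = 4, 257                         # stochastic dimension, number of nodes
z = np.array([1, 76, 119, 63])        # illustrative generating vector
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)   # exact integral = 1

# averaging over q independent random shifts gives an unbiased estimate
# together with a practical error bar from the sample standard deviation
q = 8
vals = np.array([f(shifted_lattice(n, z, rng.random(s))).mean()
                 for _ in range(q)])
print(vals.mean(), vals.std(ddof=1) / np.sqrt(q))
```

All n points come from one generating vector z, so the rule is as cheap to evaluate as plain Monte Carlo while, for sufficiently smooth integrands and well-chosen z, converging essentially like 1/n rather than 1/sqrt(n).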
Parabolic PDE-constrained optimal control under uncertainty with entropic risk measure using quasi-Monte Carlo integration
We study the application of a tailored quasi-Monte Carlo (QMC) method to a
class of optimal control problems subject to parabolic partial differential
equation (PDE) constraints under uncertainty: the state in our setting is the
solution of a parabolic PDE with a random thermal diffusion coefficient,
steered by a control function. To account for the presence of uncertainty in
the optimal control problem, the objective function is composed with a risk
measure. We focus on two risk measures, both involving high-dimensional
integrals over the stochastic variables: the expected value and the (nonlinear)
entropic risk measure. The high-dimensional integrals are computed numerically
using specially designed QMC methods and, under moderate assumptions on the
input random field, the error rate is shown to be essentially linear,
independently of the stochastic dimension of the problem -- and thereby
superior to ordinary Monte Carlo methods. Numerical results demonstrate the
effectiveness of our method.
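The entropic risk measure used here is rho_t(Y) = t^{-1} log E[exp(t*Y)] for t > 0. The sketch below estimates it for a toy quantity of interest, using plain Monte Carlo sampling in place of the tailored QMC rule for brevity, and checks against the analytic value:

```python
import numpy as np

def entropic_risk(y, t):
    """Entropic risk rho_t(Y) = (1/t) * log E[exp(t*Y)] from samples y."""
    m = t * y
    c = m.max()                        # log-sum-exp for numerical stability
    return (c + np.log(np.mean(np.exp(m - c)))) / t

# toy quantity of interest: Y = sum of s independent Uniform(0,1) variables,
# for which E[exp(t*Y)] = ((e^t - 1)/t)**s is known in closed form
rng = np.random.default_rng(2)
s, t, n = 3, 1.0, 200_000
Y = rng.random((n, s)).sum(axis=1)     # plain MC in place of tailored QMC
est = entropic_risk(Y, t)
exact = s / t * np.log((np.exp(t) - 1.0) / t)
print(round(est, 3), round(exact, 3))
```

Unlike the expected value, the log makes this risk measure nonlinear in the integral, which is what motivates the specially designed QMC analysis in the paper.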
Elite Influence? Religion, Economics, and the Rise of the Nazis
Adolf Hitler's seizure of power was one of the most consequential events of the twentieth century. Yet, our understanding of which factors fueled the astonishing rise of the Nazis remains highly incomplete. This paper shows that religion played an important role in the Nazi party's electoral success -- dwarfing all available socioeconomic variables. To obtain the first causal estimates, we exploit plausibly exogenous variation in the geographic distribution of Catholics and Protestants due to a peace treaty in the sixteenth century. Even after allowing for sizeable violations of the exclusion restriction, the evidence indicates that Catholics were significantly less likely to vote for the Nazi Party than Protestants. Consistent with the historical record, our results are most naturally rationalized by a model in which the Catholic Church leaned on believers to vote for the democratic Zentrum Party, whereas the Protestant Church remained politically neutral.
A General Framework for Machine Learning based Optimization Under Uncertainty
We propose a general framework for machine learning based optimization under uncertainty. Our approach replaces the complex forward model by a surrogate, e.g., a neural network, which is learned simultaneously, in a one-shot sense, while the optimal control problem is being solved. Our approach relies on a reformulation of the problem as a penalized empirical risk minimization problem, for which we provide a consistency analysis in the regime of large data and increasing penalty parameter. To solve the resulting problem, we suggest a stochastic gradient method with adaptive control of the penalty parameter and prove convergence under suitable assumptions on the surrogate model. Numerical experiments illustrate the results for linear and nonlinear surrogate models.
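A hypothetical toy instance of this one-shot idea (linear surrogate, quadratic tracking objective; every matrix, step size, and penalty schedule below is invented for illustration): the control and the surrogate weights are updated jointly by stochastic gradient steps on a penalized empirical risk, with the penalty parameter increased adaptively over the iterations:

```python
import numpy as np

# True forward map G(u) = A @ u; linear surrogate S_W(u) = W @ u learned
# *while* optimizing the control u (one-shot). Objective: track y_d through
# the surrogate; penalty: empirical risk of the surrogate on sampled
# forward-model evaluations.
rng = np.random.default_rng(3)
A = np.array([[1.0, 0.3], [0.2, 1.5]])
y_d = np.array([1.0, 0.0])
U_data = rng.normal(size=(200, 2))     # sampled controls
Y_data = U_data @ A.T                  # forward-model evaluations at them

u, W = np.zeros(2), np.eye(2)
lam, lr = 1.0, 0.01
for _ in range(2000):
    i = rng.integers(len(U_data))      # one stochastic sample per step
    r_obj = W @ u - y_d                # objective residual (through surrogate)
    r_fit = W @ U_data[i] - Y_data[i]  # surrogate-fit residual (penalty term)
    # gradients of 0.5*||W u - y_d||^2 + 0.5*lam*||W u_i - y_i||^2
    u -= lr * (W.T @ r_obj)
    W -= lr * (np.outer(r_obj, u) + lam * np.outer(r_fit, U_data[i]))
    lam = min(1.005 * lam, 10.0)       # adaptive increase of the penalty
print(np.round(A @ u, 2))              # should approach the target y_d
```

As the penalty grows, the surrogate is forced toward consistency with the forward model, so the control computed through the surrogate also becomes feasible for the true model.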