Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
In this paper we propose a new class of coupling methods for the sensitivity
analysis of high dimensional stochastic systems and in particular for lattice
Kinetic Monte Carlo. Sensitivity analysis for stochastic systems is typically
based on approximating continuous derivatives with respect to model parameters
by the mean value of samples from a finite difference scheme. Instead of using
independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed processes, defined on a common
state space. The novelty of our construction is that the new coupled process
depends on the targeted observables, e.g. coverage, Hamiltonian, spatial
correlations, surface roughness, etc., hence we refer to the proposed method as
goal-oriented sensitivity analysis. In particular, the rates of the coupled
Continuous Time Markov Chain are obtained as solutions to a goal-oriented
optimization problem, depending on the observable of interest, by considering
the minimization functional of the corresponding variance. We show that this
functional can be used as a diagnostic tool for the design and evaluation of
different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm is straightforward to implement, following the philosophy of the Bortz-Kalos-Lebowitz algorithm, with events divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several
examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC, such as the Common Random Number approach.
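To make the variance-reduction idea concrete, the sketch below contrasts a finite-difference sensitivity estimator built from independent samples with one whose two trajectories are coupled through common random numbers (the baseline coupling mentioned in the abstract). The birth-death process, its rates, and the observable are illustrative assumptions; the paper's goal-oriented coupling for lattice KMC is not reproduced here.

```python
# Illustrative sketch only: finite-difference sensitivity of a hypothetical
# birth-death process, estimated with independent samples versus trajectories
# coupled through common random numbers (CRN).  It shows why correlating the
# perturbed and unperturbed processes shrinks the estimator's variance; the
# paper's goal-oriented coupling goes beyond this baseline.
import numpy as np

def simulate(birth_rate, death_rate, t_final, rng):
    """Gillespie simulation of a birth-death chain; returns X(t_final)."""
    x, t = 10, 0.0
    while True:
        total = birth_rate + death_rate * x
        t += rng.exponential(1.0 / total)
        if t > t_final:
            return x
        x += 1 if rng.random() < birth_rate / total else -1

def fd_sensitivity(birth_rate, h, n_samples, coupled):
    """Central finite-difference estimate of d E[X(T)] / d birth_rate."""
    diffs = np.empty(n_samples)
    master = np.random.default_rng(0)
    for i in range(n_samples):
        seed_plus = int(master.integers(2**32))
        # Coupled: reuse the seed so both trajectories share one random stream.
        seed_minus = seed_plus if coupled else int(master.integers(2**32))
        x_plus = simulate(birth_rate + h, 1.0, 5.0, np.random.default_rng(seed_plus))
        x_minus = simulate(birth_rate - h, 1.0, 5.0, np.random.default_rng(seed_minus))
        diffs[i] = (x_plus - x_minus) / (2.0 * h)
    return diffs.mean(), diffs.var(ddof=1)

for coupled in (False, True):
    mean, var = fd_sensitivity(birth_rate=2.0, h=0.1, n_samples=2000, coupled=coupled)
    print(f"coupled={coupled}: estimate {mean:.2f}, sample variance {var:.1f}")
```

Because both trajectories see the same random stream, much of their common noise cancels in the difference; the goal-oriented construction described above tailors the coupling further so that this cancellation is strongest for the specific observable being differentiated.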
ES Is More Than Just a Traditional Finite-Difference Approximator
An evolution strategy (ES) variant based on a simplification of a natural
evolution strategy recently attracted attention because it performs
surprisingly well in challenging deep reinforcement learning domains. It
searches for neural network parameters by generating perturbations to the
current set of parameters, checking their performance, and moving in the
aggregate direction of higher reward. Because it resembles a traditional
finite-difference approximation of the reward gradient, it can naturally be
confused with one. However, this ES optimizes for a different gradient than just the reward: it optimizes for the average reward of the entire population, thereby seeking parameters that are robust to perturbation. This difference can channel ES into distinct areas of the search space relative to gradient descent, and consequently to networks with distinct properties. This
unique robustness-seeking property, and its consequences for optimization, are
demonstrated in several domains. They include humanoid locomotion, where
networks from policy gradient-based reinforcement learning are significantly
less robust to parameter perturbation than ES-based policies solving the same
task. While the implications of such robustness and robustness-seeking remain
open to further study, this work's main contribution is to highlight such
differences and their potential importance.
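A minimal sketch of the simplified evolution-strategy update described above is given below, applied to a hypothetical two-peak toy objective rather than a deep reinforcement learning task; the objective, hyperparameters, and population size are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch (illustrative assumptions) of the simplified evolution
# strategy described above.  The update follows the gradient of the expected
# reward under Gaussian perturbations of the parameters, i.e. the average
# reward of the perturbed population, not the pointwise reward gradient.
import numpy as np

def reward(theta):
    # Hypothetical stand-in for an episode return: a very narrow peak at +2
    # and a broad, perturbation-robust peak at -2.
    narrow = 1.2 * np.exp(-np.sum((theta - 2.0) ** 2) / 0.01)
    broad = 1.0 * np.exp(-np.sum((theta + 2.0) ** 2) / 4.0)
    return narrow + broad

def es_step(theta, sigma=0.5, alpha=0.1, population=200, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal((population, theta.size))            # perturbations
    returns = np.array([reward(theta + sigma * e) for e in eps])   # score each one
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # standardize
    grad = (returns[:, None] * eps).sum(axis=0) / (population * sigma)
    return theta + alpha * grad   # reward-weighted aggregate direction

rng = np.random.default_rng(1)
theta = rng.standard_normal(2) * 0.1   # start near the origin
for _ in range(200):
    theta = es_step(theta, rng=rng)
print("final parameters:", theta, " reward:", reward(theta))
```

With a perturbation spread of this size, the smoothed objective contributes almost nothing from the narrow peak, so the search tends to settle in the broad, perturbation-robust region; this is the sense in which the ES described above prefers robust parameter regions over a sharp optimum that a pointwise finite-difference gradient estimate would readily climb.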