Adversarial Variational Optimization of Non-Differentiable Simulators
Complex computer simulators are increasingly used across fields of science as
generative models tying parameters of an underlying theory to experimental
observations. Inference in this setup is often difficult, as simulators rarely
admit a tractable density or likelihood function. We introduce Adversarial
Variational Optimization (AVO), a likelihood-free inference algorithm for
fitting a non-differentiable generative model that incorporates ideas from
generative adversarial networks, variational optimization, and empirical Bayes.
We adapt the training procedure of generative adversarial networks by replacing
the differentiable generative network with a domain-specific simulator. We
solve the resulting non-differentiable minimax problem by minimizing
variational upper bounds of the two adversarial objectives. Effectively, the
procedure results in learning a proposal distribution over simulator
parameters, such that the Jensen-Shannon (JS) divergence between the marginal distribution of
the synthetic data and the empirical distribution of observed data is
minimized. We evaluate and compare the method with simulators producing both
discrete and continuous data.
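The core loop can be sketched compactly: a Gaussian proposal q(theta | psi)
over the simulator parameter is updated with a score-function (REINFORCE)
gradient of the adversarial generator loss, so no gradients ever flow through
the simulator itself. The following minimal numpy sketch is illustrative
only, not the authors' implementation; the toy Gaussian simulator, the linear
discriminator, and all hyperparameters are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=256):
    # Stand-in for a black-box, non-differentiable simulator.
    return rng.normal(theta, 1.0, size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Observed" data generated at an unknown true parameter.
x_obs = simulator(3.0, n=1024)

mu, log_sigma = 0.0, 0.0   # proposal q(theta | psi) = N(mu, sigma^2)
a, b = 0.0, 0.0            # linear discriminator d(x) = sigmoid(a*x + b)
lr_d, lr_q = 0.05, 0.05

for step in range(2000):
    # Discriminator step: real data (label 1) vs. simulated data (label 0).
    theta = rng.normal(mu, np.exp(log_sigma))
    for x, y in ((x_obs, 1.0), (simulator(theta), 0.0)):
        p = sigmoid(a * x + b)
        a -= lr_d * np.mean((p - y) * x)   # d(BCE)/da
        b -= lr_d * np.mean(p - y)         # d(BCE)/db

    # Proposal step: score-function gradient of the expected
    # non-saturating generator loss E_q[-log d(x_sim)].
    thetas = rng.normal(mu, np.exp(log_sigma), size=16)
    losses = np.array([-np.log(sigmoid(a * simulator(t) + b) + 1e-8).mean()
                       for t in thetas])
    adv = losses - losses.mean()           # baseline reduces variance
    var = np.exp(2 * log_sigma)
    mu -= lr_q * np.mean(adv * (thetas - mu) / var)
    log_sigma -= lr_q * np.mean(adv * ((thetas - mu) ** 2 / var - 1.0))

print(f"learned proposal mean: {mu:.2f} (true theta = 3.0)")
```

Because the parameter update only needs loss values at sampled thetas, the
simulator is treated purely as a black box, which is the point of replacing
the differentiable generator network with a domain-specific simulator.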
Variational optimization with infinite projected entangled-pair states
We present a scheme to perform an iterative variational optimization with
infinite projected entangled-pair states (iPEPS), a tensor network ansatz for a
two-dimensional wave function in the thermodynamic limit, to compute the ground
state of a local Hamiltonian. The method is based on a systematic summation of
Hamiltonian contributions using the corner transfer-matrix method. Benchmark
results for challenging problems are presented, including the 2D Heisenberg
model, the Shastry-Sutherland model, and the t-J model, which show that the
variational scheme yields considerably more accurate results than the best
previously available imaginary-time evolution algorithm, with similar
computational cost and faster convergence towards the ground state.
Study of implosion in an attractive Bose-Einstein condensate
By solving the Gross-Pitaevskii equation analytically and numerically, we
reexamine the implosion phenomena that occur beyond the critical value of the
number of atoms of an attractive Bose-Einstein condensate (BEC) with
cigar-shaped trapping geometry. We theoretically calculate the critical number
of atoms in the condensate by using Ritz's variational optimization technique
and investigate the stability and collapse dynamics of the attractive BEC by
numerically solving the time-dependent Gross-Pitaevskii equation.
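A compact way to see the Ritz idea is to insert a Gaussian trial wave
function of variational width w into the Gross-Pitaevskii energy functional
and ask when the local minimum (the metastable condensate) disappears. The
sketch below is a rough illustration under simplifying assumptions, an
isotropic trap in oscillator units rather than the paper's cigar-shaped
geometry, and numerically locates the critical interaction strength
k_c = N|a|/a_ho.

```python
import numpy as np

def energy(w, k):
    """Gaussian-ansatz GP energy per particle (isotropic trap, osc. units):
    E(w) = (3/4)(w**-2 + w**2) - k / (sqrt(2*pi) * w**3), k = N|a|/a_ho."""
    return 0.75 * (w**-2 + w**2) - k / (np.sqrt(2 * np.pi) * w**3)

def has_metastable_minimum(k, ws=np.linspace(0.05, 3.0, 4000)):
    """A condensate exists while E(w) retains a local minimum at w > 0."""
    de = np.gradient(energy(ws, k), ws)
    # A local minimum shows up as a - to + sign change in dE/dw.
    return np.any((de[:-1] < 0) & (de[1:] > 0))

# Bisect for the critical interaction strength k_c.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if has_metastable_minimum(mid) else (lo, mid)
print(f"variational k_c = N|a|/a_ho ~ {lo:.3f}")
```

For the isotropic trap the Gaussian ansatz gives k_c of about 0.671, while
the exact numerical GP value is lower, around 0.57; a cigar-shaped trap
shifts the critical atom number further, which is what the paper examines.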
Probabilistic Adaptive Computation Time
We present a probabilistic model with discrete latent variables that control
the computation time in deep learning models such as ResNets and LSTMs. A prior
on the latent variables expresses the preference for faster computation. The
amount of computation for an input is determined via amortized maximum a
posteriori (MAP) inference. MAP inference is performed using a novel stochastic
variational optimization method. The recently proposed Adaptive Computation
Time mechanism can be seen as an ad hoc relaxation of this model. We
demonstrate training using the general-purpose Concrete relaxation of discrete
variables. Evaluation on ResNet shows that our method matches the
speed-accuracy trade-off of Adaptive Computation Time, while allowing for
evaluation with a simple deterministic procedure that has a lower memory
footprint.
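The Concrete (binary Gumbel-Softmax) relaxation mentioned above can be shown
in a few lines: a relaxed Bernoulli sample is a sigmoid of logistic noise
shifted by the logit and scaled by a temperature, and it hardens towards
{0, 1} as the temperature is annealed. The numpy sketch below illustrates
only the relaxation itself, not the paper's full halting model.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_concrete(log_alpha, temperature, size=1):
    """Relaxed Bernoulli sample: sigmoid((log_alpha + logistic noise) / temp).
    As temperature -> 0 the samples approach hard {0, 1} values."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=size)
    logistic_noise = np.log(u) - np.log(1 - u)
    return 1 / (1 + np.exp(-(log_alpha + logistic_noise) / temperature))

log_alpha = np.log(0.7 / 0.3)   # a Bernoulli(0.7) halting variable, in logits
for temp in (2.0, 0.5, 0.1):
    z = binary_concrete(log_alpha, temp, size=5)
    print(f"temperature {temp}: {np.round(z, 3)}")
```

Because the relaxed sample is a differentiable function of log_alpha,
gradients can flow to the parameters controlling computation time during
training, while evaluation can fall back to the hard deterministic decision.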