Numerical computation of rare events via large deviation theory
An overview of rare events algorithms based on large deviation theory (LDT)
is presented. It covers a range of numerical schemes to compute the large
deviation minimizer in various setups, and discusses best practices, common
pitfalls, and implementation trade-offs. Generalizations, extensions, and
improvements of the minimum action methods are proposed. These algorithms are
tested on example problems that illustrate several common difficulties that
arise, e.g., when the forcing is degenerate or multiplicative, or the systems are
infinite-dimensional. Generalizations to processes driven by non-Gaussian
noises or random initial data and parameters are also discussed, along with the
connection between the LDT-based approach reviewed here and other methods, such
as stochastic field theory and optimal control. Finally, the integration of
this approach in importance sampling methods using e.g. genealogical algorithms
is explored.
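As a concrete illustration of the minimum action methods surveyed above, the sketch below minimizes a discretized Freidlin-Wentzell action for a 1-D double-well SDE. The drift b(x) = x - x^3, the plain gradient descent, and all parameter values are illustrative choices for this sketch, not a scheme taken from the review itself.

```python
import numpy as np

# Minimal sketch: for dX = b(X) dt + sqrt(eps) dW with b(x) = x - x^3,
# the Freidlin-Wentzell action of a path phi on [0, T] is
#   S[phi] = 0.5 * int_0^T |phi'(t) - b(phi(t))|^2 dt,
# and the large deviation minimizer is the least-action path between
# the wells x = -1 and x = +1.

def b(x):
    return x - x**3

def action(phi, dt):
    # Discretized action with the drift evaluated at interval midpoints.
    dphi = np.diff(phi) / dt
    mid = 0.5 * (phi[:-1] + phi[1:])
    return 0.5 * np.sum((dphi - b(mid)) ** 2) * dt

def num_grad(phi, dt, h=1e-6):
    # Central-difference gradient of the action w.r.t. interior nodes;
    # the endpoint entries stay zero, so the boundary conditions hold.
    g = np.zeros_like(phi)
    for i in range(1, len(phi) - 1):
        p, m = phi.copy(), phi.copy()
        p[i] += h
        m[i] -= h
        g[i] = (action(p, dt) - action(m, dt)) / (2 * h)
    return g

def minimize_action(n=40, T=10.0, steps=500, lr=1e-3):
    # Gradient descent on the discretized action, endpoints held fixed.
    dt = T / n
    phi = np.linspace(-1.0, 1.0, n + 1)  # straight-line initial guess
    for _ in range(steps):
        phi -= lr * num_grad(phi, dt)
    return phi, action(phi, dt)
```

Production minimum action methods replace the finite-difference gradient with an analytic variational derivative and use quasi-Newton or relaxation schemes, but the structure (discretize the action, descend with endpoints pinned) is the same.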
Generalized Shortest Path Kernel on Graphs
We consider the problem of classifying graphs using graph kernels. We define
a new graph kernel, called the generalized shortest path kernel, based on the
number and length of shortest paths between nodes. For our example
classification problem, we consider the task of classifying random graphs from
two well-known families, by the number of clusters they contain. We verify
empirically that the generalized shortest path kernel outperforms the original
shortest path kernel on a number of datasets. We give a theoretical analysis
for explaining our experimental results. In particular, we estimate
distributions of the expected feature vectors for the shortest path kernel and
the generalized shortest path kernel, and we show some evidence explaining why
our graph kernel outperforms the shortest path kernel for our graph
classification problem.
Comment: Short version presented at Discovery Science 2015 in Banff.
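The feature map described above (number and length of shortest paths between nodes) can be sketched as follows; the histogram-of-pairs representation and the plain linear kernel on it are a simplified stand-in for the paper's construction.

```python
from collections import Counter, deque

def shortest_path_counts(adj):
    """For each unordered node pair, record (distance, #shortest paths).
    adj: dict node -> set of neighbors (undirected, unweighted graph)."""
    feats = Counter()
    nodes = sorted(adj)
    for s in nodes:
        # BFS from s, counting shortest paths as in Brandes' algorithm.
        dist = {s: 0}
        nsp = {s: 1}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nsp[v] = 0
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    nsp[v] += nsp[u]
        for v in nodes:
            if v in dist and v > s:   # each unordered pair counted once
                feats[(dist[v], nsp[v])] += 1
    return feats

def gsp_kernel(adj1, adj2):
    # Linear kernel on the (length, multiplicity) histograms -- the
    # original shortest path kernel would use lengths alone.
    f1, f2 = shortest_path_counts(adj1), shortest_path_counts(adj2)
    return sum(f1[k] * f2[k] for k in f1)
```

On a 4-cycle, for example, the two diagonal pairs get feature (2, 2): distance 2 with two shortest paths, which a length-only kernel cannot distinguish from a single path of length 2.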
Non-meanfield deterministic limits in chemical reaction kinetics far from equilibrium
A general mechanism is proposed by which small intrinsic fluctuations in a
system far from equilibrium can result in nearly deterministic dynamical
behaviors which are markedly distinct from those realized in the meanfield
limit. The mechanism is demonstrated for the kinetic Monte-Carlo version of the
Schnakenberg reaction where we identified a scaling limit in which the global
deterministic bifurcation picture is fundamentally altered by fluctuations.
Numerical simulations of the model are found to be in quantitative agreement
with theoretical predictions.
Comment: 4 pages, 4 figures (submitted to Phys. Rev. Lett.)
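A kinetic Monte-Carlo (Gillespie) run of a Schnakenberg-type scheme, as referenced above, can be sketched as below. The specific rate constants, initial counts, and stopping time are illustrative choices, not the values used in the paper.

```python
import random

def gillespie_schnakenberg(x, y, k1=1.0, k2=1.0, k3=1.0, k4=1.0,
                           t_max=1.0, seed=0):
    """One kinetic Monte-Carlo trajectory of the Schnakenberg scheme:
        X -> 0        (propensity k1 * x)
        0 -> X        (propensity k2)
        2X + Y -> 3X  (propensity k3 * x * (x - 1) * y)
        0 -> Y        (propensity k4)
    Returns the molecule counts (x, y) at time t_max."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_max:
        rates = [k1 * x, k2, k3 * x * (x - 1) * y, k4]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)        # exponential waiting time
        r = rng.uniform(0.0, total)        # pick a reaction channel
        if r < rates[0]:
            x -= 1
        elif r < rates[0] + rates[1]:
            x += 1
        elif r < rates[0] + rates[1] + rates[2]:
            x += 1
            y -= 1
        else:
            y += 1
    return x, y
```

Note that the combinatorial propensities vanish whenever a reactant is exhausted (k1*x = 0 at x = 0; the autocatalytic step needs x >= 2 and y >= 1), so counts stay nonnegative by construction.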
Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates
In this paper, we provide a novel construction of the linear-sized spectral
sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous
constructions required running time Omega(n^4) [BSS14, Zou12], our
sparsification routine can be implemented in almost-quadratic running time
~O(n^2).
The fundamental conceptual novelty of our work is the leveraging of a strong
connection between sparsification and a regret minimization problem over
density matrices. This connection was known to provide an interpretation of the
randomized sparsifiers of Spielman and Srivastava [SS11] via the application of
matrix multiplicative weight updates (MWU) [CHS11, Vis14]. In this paper, we
explain how matrix MWU naturally arises as an instance of the
Follow-the-Regularized-Leader framework and generalize this approach to yield a
larger class of updates. This new class allows us to accelerate the
construction of linear-sized spectral sparsifiers, and give novel insights on
the motivation behind Batson, Spielman and Srivastava [BSS14].
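The matrix multiplicative weights update over density matrices mentioned above has a compact form: play X_t proportional to exp(-eta * sum of past loss matrices), normalized to unit trace. The sketch below implements that update for symmetric losses; it shows the update rule itself, not the paper's accelerated sparsification routine.

```python
import numpy as np

def matrix_exp_sym(A):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def matrix_mwu(losses, eta=0.1):
    """Matrix multiplicative weights over density matrices:
    X_t = exp(-eta * sum_{s < t} L_s) / Tr(exp(...)).
    Returns the density matrix played at each round for a sequence
    of symmetric loss matrices."""
    n = losses[0].shape[0]
    cum = np.zeros((n, n))
    plays = []
    for L in losses:
        E = matrix_exp_sym(-eta * cum)
        X = E / np.trace(E)      # unit-trace, PSD: a density matrix
        plays.append(X)
        cum += L
    return plays
```

Viewed through Follow-the-Regularized-Leader, this update is exactly the minimizer of (cumulative linear loss + (1/eta) * von Neumann entropy regularizer), which is the connection the paper generalizes to a larger class of regularizers.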
New results in rho^0 meson physics
We compare the predictions of a range of existing models based on the Vector
Meson Dominance hypothesis with data on e^+ e^- -> pi^+ pi^- and e^+ e^- ->
mu^+ mu^- cross-sections and the phase and near-threshold behavior of the
timelike pion form factor, with the aim of determining which (if any) of these
models is capable of providing an accurate representation of the full range of
experimental data. We find that, of the models considered, only that proposed
by Bando et al. is able to consistently account for all information, provided
one allows its parameter "a" to vary from the usual value of 2 to 2.4. Our fit
with this model gives a point-like coupling (gamma pi^+ \pi^-) of magnitude ~
-e/6, while the common formulation of VMD excludes such a term. The resulting
values for the rho mass and pi^+ pi^- and e^+e^- partial widths as well as the
branching ratio for the decay omega -> pi^+ pi^- obtained within the context of
this model are consistent with previous results.
Comment: 34 pages with 7 figures. Published version also available at
http://link.springer.de/link/service/journals/10052/tocs/t8002002.ht
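In the hidden-local-symmetry framework of Bando et al. referred to above, the timelike pion form factor takes, schematically (narrow-resonance limit, fixed width), the form

```latex
F_\pi(s) \;=\; \Bigl(1 - \frac{a}{2}\Bigr)
  \;+\; \frac{a}{2}\,\frac{m_\rho^2}{m_\rho^2 - s - i\, m_\rho \Gamma_\rho}
```

so that a = 2 recovers strict vector meson dominance with no point-like gamma pi^+ pi^- coupling, while a != 2 leaves a residual direct coupling proportional to (1 - a/2) e, in line with the small point-like coupling reported in the fit above.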
The Computational Power of Optimization in Online Learning
We consider the fundamental problem of prediction with expert advice where
the experts are "optimizable": there is a black-box optimization oracle that
can be used to compute, in constant time, the leading expert in retrospect at
any point in time. In this setting, we give a novel online algorithm that
attains vanishing regret with respect to N experts in ~O(sqrt(N)) total
computation time. We also give a lower bound showing
that this running time cannot be improved (up to log factors) in the oracle
model, thereby exhibiting a quadratic speedup as compared to the standard,
oracle-free setting where the required time for vanishing regret is
~Theta(N). These results demonstrate an exponential gap between
the power of optimization in online learning and its power in statistical
learning: in the latter, an optimization oracle---i.e., an efficient empirical
risk minimizer---allows one to learn a finite hypothesis class of size N in
time O(log N). We also study the implications of our results for learning in
repeated zero-sum games, in a setting where the players have access to oracles
that compute, in constant time, their best-response to any mixed strategy of
their opponent. We show that the runtime required for approximating the minimax
value of the game in this setting is ~Theta(sqrt(N)), yielding
again a quadratic improvement upon the oracle-free setting, where ~Theta(N)
is known to be tight.
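The oracle model described above can be illustrated with a toy Follow-the-Perturbed-Leader learner: each round, perturb the cumulative losses and hand them to the best-expert oracle. This sketch only shows the interface; the oracle here is an explicit argmin, so it does not realize the running-time speedup that motivates the paper.

```python
import random

def best_expert_oracle(cum_losses):
    # Stand-in for the black-box optimization oracle: returns the
    # leading expert in hindsight (here by brute-force argmin).
    return min(range(len(cum_losses)), key=cum_losses.__getitem__)

def follow_the_perturbed_leader(loss_rounds, eta=1.0, seed=0):
    """Toy FPL: each round, subtract fresh exponential perturbations
    from the cumulative losses and play the oracle's answer on the
    perturbed sums. Returns (algorithm's total loss, final cum losses)."""
    rng = random.Random(seed)
    n = len(loss_rounds[0])
    cum = [0.0] * n
    total_loss = 0.0
    for losses in loss_rounds:
        perturbed = [cum[i] - eta * rng.expovariate(1.0) for i in range(n)]
        i = best_expert_oracle(perturbed)
        total_loss += losses[i]
        cum = [c + l for c, l in zip(cum, losses)]
    return total_loss, cum
```

The fresh randomness per round is what keeps FPL-style algorithms from being exploited by adaptive loss sequences while still making only one oracle call per round.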
Premise Selection for Mathematics by Corpus Analysis and Kernel Methods
Smart premise selection is essential when using automated reasoning as a tool
for large-theory formal proof development. A good method for premise selection
in complex mathematical libraries is the application of machine learning to
large corpora of proofs. This work develops learning-based premise selection in
two ways. First, a newly available minimal dependency analysis of existing
high-level formal mathematical proofs is used to build a large knowledge base
of proof dependencies, providing precise data for ATP-based re-verification and
for training premise selection algorithms. Second, a new machine learning
algorithm for premise selection based on kernel methods is proposed and
implemented. To evaluate the impact of both techniques, a benchmark consisting
of 2078 large-theory mathematical problems is constructed, extending the older
MPTP Challenge benchmark. The combined effect of the techniques results in a
50% improvement on the benchmark over the Vampire/SInE state-of-the-art system
for automated reasoning in large theories.
Comment: 26 pages.
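The ranking step of learning-based premise selection can be sketched as below: score each library fact against the conjecture with a kernel on formula features and return the top-ranked premises. The symbol-overlap kernel and the toy database are hypothetical illustrations, not the paper's kernel or the MPTP data.

```python
def symbol_kernel(a, b):
    # Toy kernel: overlap of two symbol sets. The paper's method uses
    # richer features and a learned kernel-based ranker.
    return len(a & b)

def rank_premises(conjecture_syms, premise_db, k=3):
    """Rank library facts by kernel similarity to the conjecture and
    return the top-k premise names.
    premise_db: dict premise name -> set of symbols occurring in it."""
    scored = sorted(premise_db,
                    key=lambda name: -symbol_kernel(conjecture_syms,
                                                    premise_db[name]))
    return scored[:k]
```

The selected premises would then be passed, together with the conjecture, to an ATP such as Vampire; the benchmark above measures how much a better selector improves the prover's success rate.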