Towards computational Morse-Floer homology: forcing results for connecting orbits by computing relative indices of critical points
To make progress towards better computability of Morse-Floer homology, and
thus enhance the applicability of Floer theory, it is essential to have tools
to determine the relative index of equilibria. Since even establishing the existence of
nontrivial stationary points is often difficult, extracting their
index information is usually out of reach. In this paper we establish a
computer-assisted proof approach to determining relative indices of stationary
states. We introduce the general framework and then focus on three example
problems described by partial differential equations to show how these ideas
work in practice. Based on a rigorous implementation, with accompanying code
made available, we determine the relative indices of many stationary points.
Moreover, we show how forcing results can then be used to prove theorems about
connecting orbits and traveling waves in partial differential equations.
Comment: 30 pages, 4 figures. Revised accepted version.
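In finite dimensions, the relative index of two critical points is simply the difference of their Morse indices, i.e. of the counts of negative Hessian eigenvalues. Purely as an illustrative sketch (a toy double-well energy, not one of the paper's PDE examples, and without the rigorous interval arithmetic a computer-assisted proof would require), the idea looks like:

```python
def hessian_index(hess):
    """Morse index of a critical point: the number of negative
    eigenvalues of the symmetric 2x2 Hessian [[a, b], [b, c]]."""
    a, b, c = hess
    tr, det = a + c, a * c - b * b
    disc = (tr * tr - 4 * det) ** 0.5
    eigs = [(tr - disc) / 2, (tr + disc) / 2]
    return sum(1 for e in eigs if e < 0)

# Toy energy f(x, y) = (x^2 - 1)^2 + y^2 (illustrative, not from the paper).
# Its Hessian is [[12x^2 - 4, 0], [0, 2]].
idx_saddle = hessian_index((-4.0, 0.0, 2.0))  # at the critical point (0, 0)
idx_min = hessian_index((8.0, 0.0, 2.0))      # at the critical points (+-1, 0)
relative_index = idx_saddle - idx_min
```

A relative index of 1 is exactly the situation in which Morse-theoretic forcing arguments can guarantee a connecting orbit of the gradient flow between the two stationary states.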
On the McKean-Vlasov dynamics with or without common noise
McKean-Vlasov stochastic differential equations may arise from a probabilistic interpretation of certain non-linear PDEs or as the limiting behaviour of mean field particle
systems (those whose interactions are through the empirical measure) as the population size increases to infinity. Interest in this topic has grown enormously in recent
times following the introduction of the related mean field games. These are models
derived from the infinite population limit of games with finitely many players and
mean field structure, i.e. the dynamics and rewards of one player depend on the other
players through the empirical measure. Naturally, it is imperative that the dynamics
of the models are well-posed. This question occupies the majority of this text, in two
stochastic contexts: with or without a common noise.
In the more often studied case, where the particles are driven by independent Brownian motions, results are provided pertaining to weak existence and pathwise
continuous dependence on the initial condition. These results adapt a method of
Gyöngy and Krylov for Itô's stochastic differential equations to the McKean-Vlasov
setting. Should the coefficients and initial distribution satisfy a certain Lyapunov condition, well-posedness of the dynamics may be established along with the existence
of an invariant measure for an associated semi-group. These conditions allow for
potentially unbounded coefficients, with growth intrinsically linked to the Lyapunov
condition.
In the second context, particle systems driven by correlated noises are considered.
In particular, the particles are each driven by two Brownian motions: one common to
all particles and a private Brownian motion independent of all others. The connection
between these particle systems and related McKean-Vlasov models through the conditional propagation of chaos is discussed. Existence and uniqueness of weak solutions
to the corresponding McKean-Vlasov dynamics are proved in a particular framework
that allows for a discontinuous drift coefficient, at the price of non-degenerate noise.
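As an illustrative sketch of a mean field particle system of the kind described above, the following Euler-Maruyama simulation uses an assumed linear attraction-to-the-mean drift and independent driving noises; these specific choices are for illustration only and are not taken from the text:

```python
import random
import statistics

def simulate_mean_field(n_particles=200, n_steps=200, dt=0.01, sigma=0.5, seed=0):
    """Euler-Maruyama scheme for a toy mean field particle system.

    Each particle follows dX_i = -(X_i - mean(X)) dt + sigma dW_i,
    so the drift depends on the empirical measure only through its mean
    (an illustrative choice of coefficient, not one from the text)."""
    rng = random.Random(seed)
    xs = [rng.gauss(2.0, 1.0) for _ in range(n_particles)]
    for _ in range(n_steps):
        m = statistics.fmean(xs)  # interaction through the empirical measure
        xs = [x - (x - m) * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs

positions = simulate_mean_field()
```

As the population size grows, each particle's law approaches that of the corresponding McKean-Vlasov equation, in which the empirical mean is replaced by the expectation of the solution itself (propagation of chaos).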
Convex Optimization for Machine Learning
This book provides an introduction to convex optimization, a class of tractable optimization problems that can be solved efficiently on a computer. The goal of the book is to
help the reader develop a sense of what convex optimization is and how it can be used in a widening array of practical contexts, with a particular emphasis on machine learning.
The first part of the book covers core concepts of convex sets, convex functions, and related basic definitions that underpin convex optimization and its corresponding models. The second part deals with one very useful theory, called duality, which enables us to: (1) gain algorithmic insights; and (2) obtain approximate solutions to non-convex optimization problems, which are often difficult to solve. The last part focuses on modern applications in machine learning and deep learning.
A defining feature of this book is that it succinctly relates the “story” of how convex optimization plays a role, via historical examples and trending machine learning applications. Another key feature is that it includes programming implementations of a variety of machine learning algorithms inspired by optimization fundamentals, together with a brief tutorial on the programming tools used. The implementations are based on Python, CVXPY, and TensorFlow.
This book does not follow a traditional textbook-style organization, but is instead streamlined as a series of intimately related lecture notes, centered around coherent themes and concepts. It serves as a textbook mainly for a senior-level undergraduate course, yet is also suitable for a first-year graduate course. Readers benefit from a good background in linear algebra, some exposure to probability, and basic familiarity with Python.
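The book's own implementations use CVXPY; purely as a minimal sketch of the underlying idea, the following solves a one-dimensional constrained convex problem by projected gradient descent (a toy example of my own, not one from the book):

```python
def projected_gradient_descent(grad, project, x0, lr=0.1, n_iters=500):
    """Minimise a differentiable convex function over a convex set.

    grad:    gradient of the objective
    project: Euclidean projection onto the feasible set
    """
    x = x0
    for _ in range(n_iters):
        # Take a gradient step, then project back onto the feasible set.
        x = project(x - lr * grad(x))
    return x

# Minimise (x - 3)^2 subject to x <= 1; the optimum is x = 1.
grad = lambda x: 2.0 * (x - 3.0)
project = lambda x: min(x, 1.0)
x_star = projected_gradient_descent(grad, project, x0=-5.0)
```

Because both the objective and the feasible set are convex, every such local method converges to the global optimum; this tractability is precisely what makes convex optimization so useful in practice.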
Foundations of Software Science and Computation Structures
This open access book constitutes the proceedings of the 24th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2021, which was held from March 27 to April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 28 regular papers presented in this volume were carefully reviewed and selected from 88 submissions. They deal with research on theories and methods to support the analysis, integration, synthesis, transformation, and verification of programs and software systems.
Coloured Noise in Langevin Simulations of Superparamagnetic Nanoparticles
The coloured noise formalism has long formed an important generalisation of the white
noise limit assumed in many Langevin equations. The Langevin equation most typically
applied to magnetic systems, namely the Landau-Lifshitz-Gilbert (LLG) equation, makes
use of the white noise approximation. The correct extension of the LLG model to
coloured noise is the Landau-Lifshitz-Miyazaki-Seki (LLMS) pair of Langevin equations. This pair
of Langevin equations correctly incorporates a correlated damping term into the equation
of motion, constituting a realisation of the fluctuation-dissipation theorem for
coloured noise in the magnetic system.
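Exponentially correlated ("coloured") noise of this kind is commonly generated as an Ornstein-Uhlenbeck process. The following minimal sketch works under that assumption, with toy parameters; it is not the LLMS equations themselves:

```python
import random

def ou_noise(n_steps, dt, tau, amplitude=1.0, seed=0):
    """Exponentially correlated noise via an Ornstein-Uhlenbeck process:
        d(eta) = -(eta / tau) dt + sqrt(2 amplitude^2 / tau) dW,
    so that <eta(t) eta(s)> = amplitude^2 * exp(-|t - s| / tau).
    In the tau -> 0 limit this recovers the white noise of the LLG picture."""
    rng = random.Random(seed)
    eta = 0.0
    out = []
    for _ in range(n_steps):
        eta += (-eta / tau) * dt \
            + (2 * amplitude ** 2 / tau) ** 0.5 * dt ** 0.5 * rng.gauss(0.0, 1.0)
        out.append(eta)
    return out

path = ou_noise(n_steps=50000, dt=0.01, tau=1.0)
```

The bath correlation time tau is the quantity the system time is compared against below: non-Markovian effects appear once the relevant system timescale becomes comparable to tau.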
We undertake a numerical investigation of the properties of systems of noninteracting
magnetic moments evolving under the LLMS model, applying it in particular to
superparamagnetic spins. We investigate the escape rate for such spins and find that
departure from uncorrelated behaviour occurs as the system time approaches the bath
correlation time. The relevant system time for the superparamagnetic particles
is the Larmor precession time at the bottom of the well, leading us to conclude that
materials with higher magnetic anisotropy are better candidates for exhibiting
non-Markovian properties.
We also model non-Markovian spin dynamics by modifying the commonly used discrete
orientation approximation from a Markovian rate equation to a Generalised Master
Equation (GME), in which the interwell transition rates are promoted to memory kernels.
This model makes the qualitative prediction of a frequency-dependent diamagnetic
susceptibility, as well as a biexponential decay profile of the magnetisation. The predictions
of the GME are compared to the results of LLMS simulations, where we find a similar
diamagnetic phase transition and biexponential behaviour.
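As a scalar sketch of how promoting a rate to a memory kernel produces biexponential relaxation, consider a single relaxation variable with an exponential kernel (the kernel shape and all parameters here are illustrative assumptions, not values from the thesis):

```python
def gme_decay(k=0.1, tau=1.0, dt=1e-3, t_max=20.0):
    """Integrate a scalar generalised master equation
        dm/dt = -integral_0^t K(t - s) m(s) ds,  K(t) = (2k/tau) exp(-t/tau),
    using the standard auxiliary-variable trick for exponential kernels:
        dm/dt = -y,   dy/dt = (2k/tau) m - y/tau.
    For 8*k*tau < 1 the characteristic roots are real and distinct, so m(t)
    decays as a sum of two exponentials -- a biexponential profile."""
    m, y = 1.0, 0.0
    history = [m]
    for _ in range(int(t_max / dt)):
        m, y = m + dt * (-y), y + dt * ((2 * k / tau) * m - y / tau)
        history.append(m)
    return history

trace = gme_decay()
```

In the Markovian limit tau -> 0 the kernel collapses to a delta function and the same equation reduces to simple exponential decay at rate 2k, which is the rate-equation behaviour the GME generalises.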