Uncertainty quantification for kinetic models in socio-economic and life sciences
Kinetic equations play a major role in modeling large systems of interacting
particles. Recently the legacy of classical kinetic theory found novel
applications in socio-economic and life sciences, where processes characterized
by large groups of agents exhibit spontaneous emergence of social structures.
Well-known examples are the formation of clusters in opinion dynamics, the
appearance of inequalities in wealth distributions, flocking and milling
behaviors in swarming models, synchronization phenomena in biological systems
and lane formation in pedestrian traffic. The construction of kinetic models
describing the above processes, however, has to face the difficulty of the lack
of fundamental principles since physical forces are replaced by empirical
social forces. These empirical forces are typically constructed with the aim of
reproducing qualitatively the observed system behavior, such as the emergence of
social structures, and are at best known in terms of statistical information
about the modeling parameters. For this reason, the presence of random inputs
characterizing the uncertainty in these parameters should be considered an essential
feature in the modeling process. In this survey we introduce several examples
of such kinetic models, that are mathematically described by nonlinear Vlasov
and Fokker--Planck equations, and present different numerical approaches for
uncertainty quantification which preserve the main features of the kinetic
solution.
Comment: To appear in "Uncertainty Quantification for Hyperbolic and Kinetic Equations".
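A schematic equation (our notation, not taken verbatim from the survey) helps fix ideas: the models described are nonlinear Vlasov and Fokker--Planck equations for an agent density, with a random variable encoding the parameter uncertainty.

```latex
% Sketch of a nonlinear Vlasov--Fokker--Planck equation for the agent
% density f = f(x, v, z, t), where z is a random variable encoding the
% uncertainty in the modeling parameters (notation ours):
\partial_t f + v \cdot \nabla_x f
  = \nabla_v \cdot \Big( \mathcal{B}[f](x, v, z)\, f
      + \sigma(z)\, \nabla_v f \Big)
```

Here $\mathcal{B}[f]$ stands for a nonlinear interaction operator (e.g. alignment or attraction between agents) and $\sigma(z)$ for an uncertain diffusion coefficient; uncertainty quantification then amounts to computing statistics of $f$ with respect to the law of $z$.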
Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities
This monograph presents a unified treatment of single- and multi-user
problems in Shannon's information theory where we depart from the requirement
that the error probability decays asymptotically in the blocklength. Instead,
the error probabilities for various problems are bounded above by a
non-vanishing constant and the spotlight is shone on achievable coding rates as
functions of the growing blocklengths. This represents the study of asymptotic
estimates with non-vanishing error probabilities.
In Part I, after reviewing the fundamentals of information theory, we discuss
Strassen's seminal result for binary hypothesis testing where the type-I error
probability is non-vanishing and the rate of decay of the type-II error
probability with growing number of independent observations is characterized.
In Part II, we use this basic hypothesis testing result to develop second- and
sometimes, even third-order asymptotic expansions for point-to-point
communication. Finally in Part III, we consider network information theory
problems for which the second-order asymptotics are known. These problems
include some classes of channels with random state, the multiple-encoder
distributed lossless source coding (Slepian-Wolf) problem and special cases of
the Gaussian interference and multiple-access channels. We conclude by
discussing avenues for further research.
Comment: Further comments welcome.
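The second-order expansions developed in Part II take, for a discrete memoryless channel, the well-known normal-approximation form (notation ours):

```latex
% M*(n, epsilon) is the maximum code size at blocklength n with error
% probability bounded by epsilon; C is the channel capacity, V the
% channel dispersion, and Q^{-1} the inverse of the complementary
% Gaussian CDF:
\log M^*(n, \epsilon) = nC - \sqrt{nV}\, Q^{-1}(\epsilon) + O(\log n)
```

For $\epsilon < 1/2$ the second-order term is negative, quantifying the backoff from capacity incurred at finite blocklength when a non-vanishing error probability $\epsilon$ is tolerated.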
The large deviation approach to statistical mechanics
The theory of large deviations is concerned with the exponential decay of
probabilities of large fluctuations in random systems. These probabilities are
important in many fields of study, including statistics, finance, and
engineering, as they often yield valuable information about the large
fluctuations of a random system around its most probable state or trajectory.
In the context of equilibrium statistical mechanics, the theory of large
deviations provides exponential-order estimates of probabilities that refine
and generalize Einstein's theory of fluctuations. This review explores this and
other connections between large deviation theory and statistical mechanics, in
an effort to show that the mathematical language of statistical mechanics is
the language of large deviation theory. The first part of the review presents
the basics of large deviation theory, and works out many of its classical
applications related to sums of random variables and Markov processes. The
second part goes through many problems and results of statistical mechanics,
and shows how these can be formulated and derived within the context of large
deviation theory. The problems and results treated cover a wide range of
physical systems, including equilibrium many-particle systems, noise-perturbed
dynamics, nonequilibrium systems, as well as multifractals, disordered systems,
and chaotic systems. This review also covers many fundamental aspects of
statistical mechanics, such as the derivation of variational principles
characterizing equilibrium and nonequilibrium states, the breaking of the
Legendre transform for nonconcave entropies, and the characterization of
nonequilibrium fluctuations through fluctuation relations.
Comment: v1: 89 pages, 18 figures, pdflatex. v2: 95 pages, 20 figures, text, figures and appendices added, many references cut, close to published version.
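The exponential decay of fluctuation probabilities mentioned above is captured, in the classical setting of sums of random variables, by Cramér's theorem (notation ours):

```latex
% For the sample mean S_n of n i.i.d. copies of X, large fluctuations
% decay exponentially,  P(S_n \approx a) \asymp e^{-n I(a)},  with rate
% function I given by the Legendre--Fenchel transform of the cumulant
% generating function:
I(a) = \sup_{k \in \mathbb{R}} \bigl\{ ka - \lambda(k) \bigr\},
\qquad \lambda(k) = \ln \mathbb{E}\!\left[ e^{kX} \right]
```

The review's connection to statistical mechanics runs through this structure: $\lambda$ plays the role of a free energy, $I$ that of an entropy, and the Legendre transform linking them is exactly the one familiar from thermodynamics.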
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand
for rich models and quantification of uncertainty. Bayesian methods are an
excellent fit for this demand, but scaling Bayesian inference is a challenge.
In response to this challenge, there has been considerable recent work based on
varying assumptions about model structure, underlying computational resources,
and the importance of asymptotic correctness. As a result, there is a zoo of
ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and
intuitions for scaling Bayesian inference. We review existing work on utilizing
modern computing resources with both MCMC and variational approximation
techniques. From this taxonomy of ideas, we characterize the general principles
that have proven successful for designing scalable inference procedures and
comment on the path forward.
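As a concrete illustration of one scalable-MCMC pattern in the class this paper surveys, the following is a minimal sketch of stochastic gradient Langevin dynamics (SGLD), which replaces the full-data gradient with an unbiased minibatch estimate plus injected Gaussian noise. The model, step sizes, and all names below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy target: posterior of a Gaussian mean theta with known unit variance,
# prior N(0, 10), and N observations. SGLD never touches the full dataset
# in a single step; each update sees only a small minibatch.
rng = np.random.default_rng(0)
N = 10_000
data = rng.normal(2.0, 1.0, size=N)  # synthetic data, true mean 2.0

def grad_log_post(theta, batch):
    """Unbiased minibatch estimate of the gradient of the log posterior:
    prior term plus the likelihood term rescaled by N / batch size."""
    prior = -theta / 10.0
    lik = (N / len(batch)) * np.sum(batch - theta)
    return prior + lik

theta = 0.0
eps = 1e-4          # fixed step size, for simplicity
samples = []
for t in range(5_000):
    batch = rng.choice(data, size=100, replace=False)
    noise = rng.normal(0.0, np.sqrt(eps))  # injected Langevin noise
    theta = theta + 0.5 * eps * grad_log_post(theta, batch) + noise
    samples.append(theta)

# Discard burn-in; the remaining samples approximate the posterior,
# which concentrates near the data mean (about 2.0 here).
posterior_mean = np.mean(samples[1_000:])
```

The design trade-off this sketch exposes is the one the paper organizes its taxonomy around: the minibatch gradient trades asymptotic correctness (a fixed step size leaves a discretization bias) for per-iteration cost that is independent of the dataset size.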
Large-deviation principles for connectable receivers in wireless networks
We study large-deviation principles for a model of wireless networks
consisting of Poisson point processes of transmitters and receivers,
respectively. To each transmitter we associate a family of connectable
receivers whose signal-to-interference-and-noise ratio is larger than a certain
connectivity threshold. First, we show a large-deviation principle for the
empirical measure of connectable receivers associated with transmitters in
large boxes. Second, making use of the observation that the receivers
connectable to the origin form a Cox point process, we derive a large-deviation
principle for the rescaled process of these receivers as the connection
threshold tends to zero. Finally, we show how these results can be used to
develop importance-sampling algorithms that substantially reduce the variance
for the estimation of probabilities of certain rare events such as users being
unable to connect.
Comment: 29 pages, 2 figures.
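A common form of the connectivity condition (our notation; the paper's exact normalization may differ) is that a receiver at location $x$ is connectable to a transmitter $X_i$ when its signal-to-interference-and-noise ratio clears the threshold:

```latex
% ell is a path-loss function, N_0 the noise power, gamma an
% interference-cancellation factor, and tau the connectivity threshold:
\mathrm{SINR}(x, X_i)
  = \frac{\ell(|x - X_i|)}{N_0 + \gamma \sum_{j \neq i} \ell(|x - X_j|)}
  \;\geq\; \tau
```

The denominator couples every receiver to the whole transmitter process, which is what makes the connectable receivers a Cox (doubly stochastic Poisson) point process when viewed conditionally on the transmitters.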