A survey of statistical network models
Networks are ubiquitous in science and have become a focal point for
discussion in everyday life. Formal statistical models for the analysis of
network data have emerged as a major topic of interest in diverse areas of
study, and most of these involve a form of graphical representation.
Probability models on graphs date back to 1959. Along with empirical studies in
social psychology and sociology from the 1960s, these early works generated an
active network community and a substantial literature in the 1970s. This effort
moved into the statistical literature in the late 1970s and 1980s, and the past
decade has seen a burgeoning network literature in statistical physics and
computer science. The growth of the World Wide Web and the emergence of online
networking communities such as Facebook, MySpace, and LinkedIn, along with a
host of more specialized professional network communities, have intensified
interest in the study of networks and network data. Our goal in this review is
to provide
the reader with an entry point to this burgeoning literature. We begin with an
overview of the historical development of statistical network modeling and then
we introduce a number of examples that have been studied in the network
literature. Our subsequent discussion focuses on a number of prominent static
and dynamic network models and their interconnections. We emphasize formal
model descriptions, and pay special attention to the interpretation of
parameters and their estimation. We end with a description of some open
problems and challenges for machine learning and statistics.

Comment: 96 pages, 14 figures, 333 references
Physics-informed Neural Networks for Solving Inverse Problems of Nonlinear Biot's Equations: Batch Training
In biomedical engineering, earthquake prediction, and underground energy
harvesting, it is crucial to estimate the physical properties of porous media
indirectly, since direct measurement of these properties is usually
impractical or prohibitive. Here we apply physics-informed neural networks to
solve the inverse problem for the nonlinear Biot's equations.
Specifically, we consider batch training and explore the effect of different
batch sizes. The results show that training with small batch sizes, i.e., a few
examples per batch, provides better approximations (lower percentage error) of
the physical parameters than using large batches or the full batch. The
increased accuracy of the physical parameters comes at the cost of longer
training time. The batch size should not be too small, however, since a very
small batch size requires a very long training time without a corresponding
improvement in estimation accuracy. We find that a batch size of 8 or 32 is a
good compromise that is also robust to additive noise in the data. The
learning rate also plays an important role and should be tuned as a
hyperparameter.

Comment: arXiv admin note: text overlap with arXiv:2002.0823
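The batch-size trade-off the abstract describes can be illustrated with a toy inverse problem (this is a minimal NumPy sketch, not the authors' PINN code: the linear model, data, learning rate, and epoch count are all illustrative assumptions). Both a small batch and the full batch recover the unknown coefficient; the small batch performs many more, noisier updates per epoch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "inverse problem": recover the coefficient a_true from noisy data.
a_true = 2.5
x = rng.uniform(-1.0, 1.0, size=256)
y = a_true * x + 0.01 * rng.normal(size=x.size)

def fit(batch_size, epochs=200, lr=0.1):
    """Mini-batch gradient descent on mean squared error; returns estimate of a."""
    a = 0.0
    n = x.size
    for _ in range(epochs):
        idx = rng.permutation(n)               # reshuffle examples each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            resid = a * x[b] - y[b]            # prediction error on this batch
            grad = 2.0 * np.mean(resid * x[b]) # d/da of batch mean squared error
            a -= lr * grad
    return a

a_small = fit(batch_size=8)    # small batches: many noisy updates per epoch
a_full = fit(batch_size=256)   # full batch: one smooth update per epoch
print(a_small, a_full)
```

In this smooth one-parameter setting both variants converge; the abstract's point is that for the much harder PINN inverse problem, the extra stochasticity of small batches improved parameter estimates at the price of wall-clock training time.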