    Subdirect products of groups and the n-(n+1)-(n+2) Conjecture

    We analyse the subgroup structure of direct products of groups. Earlier work on this topic has revealed that higher finiteness properties play a crucial role in determining which groups appear as subgroups of direct products of free groups or limit groups. Here, we seek to relate the finiteness properties of a subgroup to the way it is embedded in the ambient product. To this end we formulate a conjecture on the finiteness properties of fibre products of groups. We present different approaches to this conjecture, proving a general result on the finite generation of homology groups of fibre products and, for certain special cases, results on the stronger finiteness properties F_n and FP_n.
    Comment: 32 pages
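
    For orientation, a sketch of the central construction: the definition below is standard, while the statement of the conjecture is paraphrased from the usual formulation in the literature and should be checked against the paper itself.

        Given surjective homomorphisms $p \colon G \twoheadrightarrow Q$ and
        $q \colon H \twoheadrightarrow Q$, the associated fibre product is the subgroup
        \[
            P \;=\; \{\, (g,h) \in G \times H \;:\; p(g) = q(h) \,\} \;\le\; G \times H .
        \]
        In this notation, the $n$-$(n{+}1)$-$(n{+}2)$ Conjecture asserts: if $\ker p$ is
        of type $F_n$, if $G$ and $H$ are of type $F_{n+1}$, and if $Q$ is of type
        $F_{n+2}$, then $P$ is of type $F_{n+1}$.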

    Nuclei embedded in an electron gas

    The properties of nuclei embedded in an electron gas are studied within the relativistic mean-field approach. These studies are relevant for nuclear properties in astrophysical environments such as neutron-star crusts and supernova explosions. The electron gas is treated as a constant background in the Wigner-Seitz cell approximation. We investigate the stability of nuclei with respect to alpha and beta decay. Furthermore, the influence of the electronic background on the spontaneous fission of heavy and superheavy nuclei is analyzed. We find that the presence of the electrons has a stabilizing effect on both alpha decay and spontaneous fission at high electron densities. Furthermore, the screening effect shifts the proton dripline to more proton-rich nuclei, and the line of stability with respect to beta decay is shifted to more neutron-rich nuclei. Implications for the creation and survival of very heavy nuclear systems are discussed.
    Comment: 35 pages, latex+ep
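
    As a toy illustration of the screening mechanism described above, here is a minimal sketch assuming the textbook uniform-background Wigner-Seitz formula for the Coulomb energy; it is not the paper's relativistic mean-field calculation, and the function name and parameter values are my own hypothetical choices.

        # Coulomb energy of a uniformly charged nucleus (charge Z, radius R_N)
        # centred in a charge-neutral Wigner-Seitz cell of radius R_C filled
        # with a uniform electron background; energies in MeV, lengths in fm.
        E2_MEV_FM = 1.44  # e^2 in MeV*fm (Gaussian units, approximate)

        def ws_coulomb_energy(Z, R_N, R_C):
            x = R_N / R_C
            bare = 0.6 * Z**2 * E2_MEV_FM / R_N       # isolated nucleus
            screening = 1.0 - 1.5 * x + 0.5 * x**3    # electron-background factor
            return bare * screening

        # A smaller cell means a denser electron gas: the Coulomb energy drops,
        # which is the stabilizing tendency the abstract refers to.
        for R_C in (50.0, 20.0, 10.0):
            print(R_C, ws_coulomb_energy(82, 7.1, R_C))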

    Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory

    This book aims to provide an introduction to the topic of deep learning algorithms. We review essential components of deep learning algorithms in full mathematical detail, including different artificial neural network (ANN) architectures (such as fully-connected feedforward ANNs, convolutional ANNs, recurrent ANNs, residual ANNs, and ANNs with batch normalization) and different optimization algorithms (such as the basic stochastic gradient descent (SGD) method, accelerated methods, and adaptive methods). We also cover several theoretical aspects of deep learning algorithms, such as the approximation capacities of ANNs (including a calculus for ANNs), optimization theory (including Kurdyka-Łojasiewicz inequalities), and generalization errors. In the last part of the book some deep learning approximation methods for PDEs are reviewed, including physics-informed neural networks (PINNs) and deep Galerkin methods. We hope that this book will be useful both for students and scientists with no prior background in deep learning who would like to gain a solid foundation, and for practitioners who would like to obtain a firmer mathematical understanding of the objects and methods considered in deep learning.
    Comment: 601 pages, 36 figures, 45 source code
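
    To make two of the reviewed ingredients concrete, here is a minimal sketch (my own illustration, not code from the book) of a fully-connected feedforward ANN with one hidden layer, trained by the basic SGD method on a toy regression task:

        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(0.0, 0.5, (16, 1)), np.zeros((16, 1))
        W2, b2 = rng.normal(0.0, 0.5, (1, 16)), np.zeros((1, 1))
        lr, batch = 1e-2, 32

        for step in range(20000):
            x = rng.uniform(-1.0, 1.0, (1, batch))  # fresh minibatch each step
            y = np.sin(np.pi * x)                   # target function to learn
            h = np.maximum(W1 @ x + b1, 0.0)        # hidden layer, ReLU activation
            y_hat = W2 @ h + b2                     # affine output layer
            g = (y_hat - y) / batch                 # gradient of mean 0.5*(y_hat-y)^2
            gW2, gb2 = g @ h.T, g.sum(axis=1, keepdims=True)
            gh = (W2.T @ g) * (h > 0)               # backpropagate through ReLU
            gW1, gb1 = gh @ x.T, gh.sum(axis=1, keepdims=True)
            W1 -= lr * gW1; b1 -= lr * gb1          # plain SGD update
            W2 -= lr * gW2; b2 -= lr * gb2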

    An overview on deep learning-based approximation methods for partial differential equations

    It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs). Recently, several deep learning-based approximation algorithms for attacking this problem have been proposed and tested numerically on a number of examples of high-dimensional PDEs. This has given rise to a lively field of research in which deep learning-based methods and related Monte Carlo methods are applied to the approximation of high-dimensional PDEs. In this article we offer an introduction to this field of research: we review some of the main ideas of deep learning-based approximation methods for PDEs, we revisit one of the central mathematical results for deep neural network approximations for PDEs, and we provide an overview of the recent literature in this area of research.
    Comment: 23 pages
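
    As one concrete instance of the methods surveyed, here is a hedged sketch of the physics-informed neural network (PINN) idea for the 1D heat equation u_t = u_xx. This is my own toy setup assuming PyTorch; boundary terms are omitted for brevity and the architecture is an arbitrary choice.

        import torch

        net = torch.nn.Sequential(
            torch.nn.Linear(2, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 1),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        def pde_residual(tx):                # tx holds (t, x) collocation points
            tx.requires_grad_(True)
            u = net(tx)
            du = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
            u_t, u_x = du[:, :1], du[:, 1:]
            u_xx = torch.autograd.grad(u_x, tx, torch.ones_like(u_x),
                                       create_graph=True)[0][:, 1:]
            return u_t - u_xx                # residual of u_t = u_xx

        for step in range(2000):
            tx = torch.rand(256, 2)                          # interior points
            x0 = torch.rand(64, 1)                           # t = 0 slice
            ic = net(torch.cat([torch.zeros_like(x0), x0], 1)) \
                 - torch.sin(torch.pi * x0)                  # initial condition
            loss = (pde_residual(tx) ** 2).mean() + (ic ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()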

    Counterexamples to local Lipschitz and local Hölder continuity with respect to the initial values for additive noise driven SDEs with smooth drift coefficient functions with at most polynomially growing derivatives

    In the recent article [A. Jentzen, B. Kuckuck, T. Müller-Gronbach, and L. Yaroslavtseva, arXiv:1904.05963 (2019)] it has been proved that the solutions of every additive noise driven stochastic differential equation (SDE) which has a drift coefficient function with at most polynomially growing first-order partial derivatives and which admits a Lyapunov-type condition (ensuring the existence of a unique solution to the SDE) depend in a logarithmically Hölder continuous way on their initial values. One might then wonder whether this result can be sharpened: do SDEs from this class in fact have solutions which depend locally Lipschitz continuously on their initial values? The key contribution of this article is to establish that this is not the case. More precisely, we supply a family of examples of additive noise driven SDEs which have smooth drift coefficient functions with at most polynomially growing derivatives and whose solutions depend on their initial values neither in a locally Lipschitz continuous way nor even in a locally Hölder continuous way.
    Comment: 27 pages
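
    For readers who want to experiment with the setting numerically, here is a small Euler-Maruyama sketch of the general framework dX_t = mu(X_t) dt + dW_t; the drift below is a generic smooth choice with polynomially growing derivatives, not one of the paper's counterexample drifts.

        import numpy as np

        def mu(x):
            return x - x**3       # smooth drift, polynomially growing derivatives

        def euler_maruyama(x0, dW, dt):
            x = x0
            for dw in dW:
                x = x + mu(x) * dt + dw
            return x

        rng = np.random.default_rng(1)
        T, n = 1.0, 10_000
        dt = T / n
        dW = rng.normal(0.0, np.sqrt(dt), n)  # one shared Brownian path

        # Terminal gap between solutions started at eps and at 0, same noise:
        for eps in (1e-2, 1e-4, 1e-6):
            gap = abs(euler_maruyama(eps, dW, dt) - euler_maruyama(0.0, dW, dt))
            print(eps, gap)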