
    Artificial Neural Network in Cosmic Landscape

    In this paper we propose that artificial neural networks, the basis of machine learning, are useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically require exponential complexity when the number of fields is large. However, a basic application of artificial neural networks can solve this problem, based on the universal approximation theorem for the multilayer perceptron. A toy model of inflation with multiple light fields is investigated numerically as an example of such an application.
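The universal approximation property the abstract appeals to can be sketched in a few lines. The snippet below is illustrative only, not the paper's method: it fits a one-hidden-layer tanh network to a hypothetical smooth 1-D "landscape" function (a stand-in for one direction of a multi-field potential), using random hidden weights with a least-squares readout rather than full gradient training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth 1-D "landscape" slice standing in for one direction
# of a multi-field inflationary potential (not the paper's actual model).
xs = np.linspace(-1, 1, 200).reshape(-1, 1)
ys = np.sin(3 * xs).ravel()

# One hidden layer of tanh units with random weights plus a linear
# least-squares readout: by the universal approximation theorem, a wide
# enough layer of sigmoidal units can approximate any continuous function
# on a compact set.
H = 64
W1 = rng.normal(0, 3, (1, H))
b1 = rng.normal(0, 3, H)
hidden = np.tanh(xs @ W1 + b1)                  # (200, H) feature matrix
w2, *_ = np.linalg.lstsq(hidden, ys, rcond=None)

pred = hidden @ w2
max_err = float(np.max(np.abs(pred - ys)))
print(f"max approximation error: {max_err:.2e}")
```

The point of the sketch is the scaling claim in the abstract: evaluating a trained network is cheap per query, whereas tabulating a landscape over many fields grows exponentially with the number of fields.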

    Computational Complexity of Smooth Differential Equations

    The computational complexity of the solutions h to the ordinary differential equation h(0)=0, h'(t) = g(t, h(t)) under various assumptions on the function g has been investigated. Kawamura showed in 2010 that the solution h can be PSPACE-hard even if g is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of g and obtain the following results: the solution h can still be PSPACE-hard if g is assumed to be of class C^1; for each k ≥ 2, the solution h can be hard for the counting hierarchy even if g is of class C^k.

    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime

    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances which are `hard' for the classical complexity classes NP, #P, and CH; but, restricted to analytic functions, they map polynomial-time computable ones to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on some given f suffice to obtain similar data on Max(f) and int(f), and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions climbing from analytic to smooth corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
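A concrete reading of the maximization operator Max(f) and its precision parameter 2^(-n): given a Lipschitz bound L on f (a simple stand-in for the smoothness parameters the abstract discusses), sampling f on a grid of spacing 2^(-n)/L guarantees the sampled maximum is within 2^(-n) of the true one. The sketch below is an assumption-laden toy, not the paper's construction; note the grid size, and hence the cost, grows like L·x·2^n in the precision parameter n.

```python
import math

def max_op(f, x, n, L):
    """Approximate Max(f)(x) = max over 0 <= t <= x of f(t) to precision 2^-n,
    assuming f is L-Lipschitz on [0, x]."""
    step = 2.0**-n / L              # grid fine enough for the 2^-n guarantee
    m = int(math.ceil(x / step))    # cost grows like L * x * 2^n
    return max(f(x * i / m) for i in range(m + 1))

# Hypothetical example: f(t) = sin(5t) on [0, 2], Lipschitz with L = 5.
f = lambda t: math.sin(5 * t)
val = max_op(f, 2.0, 10, 5.0)
print(val)  # close to 1.0, the true maximum of sin(5t) on [0, 2]
```
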

    Sparse Deterministic Approximation of Bayesian Inverse Problems

    We present a parametric deterministic formulation of Bayesian inverse problems with input parameter from infinite dimensional, separable Banach spaces. In this formulation, the forward problems are parametric, deterministic elliptic partial differential equations, and the inverse problem is to determine the unknown, parametric deterministic coefficients from noisy observations comprising linear functionals of the solution. We prove a generalized polynomial chaos representation of the posterior density with respect to the prior measure, given noisy observational data. We analyze the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence. To this end, we estimate the fluctuations in the prior. We exhibit sufficient conditions on the prior model in order for approximations of the posterior density to converge at a given algebraic rate, in terms of the number N of unknowns appearing in the parametric representation of the prior measure. Similar sparsity and approximation results are also exhibited for the solution and covariance of the elliptic partial differential equation under the posterior. These results then form the basis for efficient uncertainty quantification, in the presence of data with noise.
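The core object in this formulation, the posterior density with respect to the prior, is proportional to exp(-|y - G(u)|^2 / (2 sigma^2)) for a forward functional G, observation y, and noise level sigma. The toy sketch below reduces the infinite-dimensional parametric setting to a single coefficient u with uniform prior on [-1, 1] and a hypothetical scalar forward map, purely to make the density-reweighting step concrete; the paper's elliptic PDE forward problem and chaos expansion are far beyond this sketch.

```python
import math

# Hypothetical scalar forward functional G(u) standing in for a linear
# functional of the parametric PDE solution (not the paper's forward map).
G = lambda u: u**3 + u
y, sigma = 0.5, 0.1             # noisy observation and noise level

# Uniform prior on [-1, 1], discretized; the posterior density w.r.t. the
# prior is proportional to exp(-(y - G(u))^2 / (2 sigma^2)).
us = [-1 + 2 * i / 400 for i in range(401)]
w = [math.exp(-(y - G(u))**2 / (2 * sigma**2)) for u in us]
Z = sum(w)                      # discrete approximation of the normalizer
post_mean = sum(u * wi for u, wi in zip(us, w)) / Z
print(post_mean)
```

The deterministic quadrature over the prior's support here is the crude analogue of what the paper does with sparse polynomial approximation: replacing sampling by a deterministic, convergent approximation of posterior quantities.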