
    Computational complexity of real functions

    Recursive analysis, the theory of computation of functions on real numbers, has been studied from various aspects. We investigate the computational complexity of real functions using the methods of recursive function theory. Partial recursive real functions are defined and their domains are characterized as the recursively open sets. We define the time complexity of recursive real continuous functions and show that the time complexity and the modulus of uniform continuity of a function are closely related. We study the complexity of the roots and the differentiability of polynomial-time computable real functions. In particular, a polynomial-time computable real function may have a root of arbitrarily high complexity and may be nowhere differentiable. The concepts of space complexity and nondeterministic computation are used to study the complexity of the integrals and the maximum values of real functions. These problems are shown to be related to the “P=?NP” and “P=?PSPACE” questions.
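    To make the underlying model concrete: in recursive analysis, a real function is computed by a procedure that, given a precision parameter n, returns a dyadic rational within 2^(-n) of the true value, and time complexity is measured as a function of n. The following Python sketch illustrates this regime for the square root; the function name and the bisection strategy are illustrative, not taken from the paper.

    from fractions import Fraction

    def sqrt_approx(x: Fraction, n: int) -> Fraction:
        # Compute sqrt(x) to within 2**-n by interval bisection.
        # The loop runs about n + log2(max(1, x)) times, so the time
        # complexity is read off as a function of the output precision n,
        # tied (as the abstract notes) to the modulus of uniform
        # continuity of sqrt on the interval considered.
        lo, hi = Fraction(0), max(Fraction(1), x)
        while hi - lo > Fraction(1, 2**n):
            mid = (lo + hi) / 2
            if mid * mid <= x:
                lo = mid  # sqrt(x) lies in the upper half
            else:
                hi = mid  # sqrt(x) lies in the lower half
        return (lo + hi) / 2

    # Example: sqrt_approx(Fraction(2), 20) approximates sqrt(2) to within 2**-20.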

    Approximation Error Bounds via Rademacher's Complexity

    Approximation properties of some connectionistic models, commonly used to construct approximation schemes for optimization problems with multivariable functions as admissible solutions, are investigated. Such models are made up of linear combinations of computational units with adjustable parameters. The relationship between model complexity (the number of computational units) and approximation error is investigated using tools from Statistical Learning Theory, such as Talagrand's inequality, the fat-shattering dimension, and Rademacher's complexity. For some families of multivariable functions, estimates of the approximation accuracy of models with certain computational units are derived in terms of the Rademacher complexities of the families. The estimates improve previously available ones, which were expressed in terms of the VC dimension and derived by exploiting union-bound techniques. The results are applied to approximation schemes with certain radial basis functions as computational units, for which it is shown that the estimates do not exhibit the curse of dimensionality with respect to the number of variables.
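    For orientation, the classical Rademacher-complexity generalization bound that estimates of this kind refine (stated here in textbook form for a class \mathcal{F} of [0,1]-valued functions and an i.i.d. sample of size n; the paper's model-specific estimates are sharper and not reproduced here) is: with probability at least 1 - \delta,

    \sup_{f \in \mathcal{F}} \Big( \mathbb{E}[f] - \frac{1}{n}\sum_{i=1}^{n} f(x_i) \Big) \le 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\ln(1/\delta)}{2n}}, \qquad \mathfrak{R}_n(\mathcal{F}) = \mathbb{E}_{x,\sigma}\Big[ \sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(x_i) \Big],

    where the \sigma_i are independent uniform \pm 1 (Rademacher) signs. Unlike VC-based bounds, \mathfrak{R}_n(\mathcal{F}) is distribution-dependent, which is what allows the union-bound slack mentioned above to be avoided.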

    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime

    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances that are 'hard' for the classical complexity classes NP, #P, and CH; but, restricted to analytic functions, they map polynomial-time computable ones to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which information and parameters on a given f (both continuous and discrete, first and second order) are sufficient to obtain similar data on Max(f) and int(f), and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions, climbing from analytic to smooth, corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
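    The Gevrey hierarchy invoked above interpolates between analytic and merely smooth functions. In its usual form (the paper's exact parameter conventions may differ), f belongs to the Gevrey class of level s \ge 1 on a compact domain if there are constants C, R > 0 such that

    |f^{(k)}(x)| \le C \, R^{k} \, (k!)^{s} \quad \text{for all } k \in \mathbb{N} \text{ and all } x.

    Level s = 1 recovers exactly the analytic functions, and larger s admits progressively less rigid functions, matching the abstract's picture of the complexity of maximization climbing from polytime towards NP-hardness as s grows.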

    Polynomial Path Orders

    This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramifications in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP*, that is applicable in two related but distinct contexts. On the one hand, POP* induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand, POP* provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*.
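    For a flavour of the systems in scope, below is a textbook orthogonal constructor TRS computing addition and multiplication on unary numerals built from the constructors 0 and s. It is the kind of system POP* is designed to orient, although actual POP*-compatibility depends on the precedence and statuses chosen, which we do not verify here.

    add(0, y)    -> y
    add(s(x), y) -> s(add(x, y))
    mul(0, y)    -> 0
    mul(s(x), y) -> add(y, mul(x, y))

    An innermost derivation from mul(s^m(0), s^k(0)) takes on the order of m*k rewrite steps, illustrating the polynomial innermost runtime complexity that POP*-orientability certifies.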

    Scalable Hash-Based Estimation of Divergence Measures

    We propose a scalable divergence estimation method based on hashing. Consider two continuous random variables X and Y whose densities have bounded support. We consider a particular locality-sensitive random hashing, and consider the ratio of the numbers of samples in each hash bin containing a non-zero number of Y samples. We prove that the weighted average of these ratios over all of the hash bins converges to the f-divergence between the two sample sets. We show that the proposed estimator is optimal in terms of both MSE rate and computational complexity. We derive the MSE rates for two families of smooth functions: the Hölder smoothness class and differentiable functions. In particular, it is proved that if the density functions have bounded derivatives up to order d/2, where d is the dimension of the samples, the optimal parametric MSE rate of O(1/N) can be achieved. The computational complexity is shown to be O(N), which is optimal. To the best of our knowledge, this is the first empirical divergence estimator that has optimal computational complexity and achieves the optimal parametric MSE estimation rate.
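    Below is a minimal Python sketch of the bin-ratio idea, using a plain eps-grid in place of the paper's locality-sensitive random hashing; the function names and the choice of grid are illustrative only, and the paper's estimator involves weighting and bias-control details not shown here.

    import numpy as np
    from collections import defaultdict

    def f_divergence_hash(x, y, eps, f):
        # Estimate D_f(P||Q) from samples x ~ P and y ~ Q, arrays of
        # shape (N, d) and (M, d).  Points are hashed to cells of an
        # eps-grid; within each cell the density ratio p/q is estimated
        # by the ratio of empirical frequencies, and the f-values are
        # averaged with the Q-side weights, mirroring D_f = E_Q[f(p/q)].
        def cell(p):
            return tuple(np.floor(p / eps).astype(int))
        nx, ny = defaultdict(int), defaultdict(int)
        for p in x:
            nx[cell(p)] += 1
        for p in y:
            ny[cell(p)] += 1
        N, M = len(x), len(y)
        total = 0.0
        for k, m in ny.items():            # cells with no Y samples are skipped
            ratio = (nx[k] / N) / (m / M)  # empirical p/q in this cell
            total += (m / M) * f(ratio)
        return total

    # Example: KL(P||Q) via f(t) = t*log(t), taking f(0) = 0.
    # kl = f_divergence_hash(x, y, eps=0.1,
    #                        f=lambda t: t * np.log(t) if t > 0 else 0.0)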