
    The parameterized space complexity of model-checking bounded variable first-order logic

    The parameterized model-checking problem for a class of first-order sentences (queries) asks whether a given sentence from the class holds true in a given relational structure (database); the parameter is the length of the sentence. We study the parameterized space complexity of the model-checking problem for queries with a bounded number of variables. For each bound on the quantifier alternation rank, the problem becomes complete for the corresponding level of what we call the tree hierarchy, a hierarchy of parameterized complexity classes defined via space-bounded alternating machines between parameterized logarithmic space and fixed-parameter tractable time. We observe that a parameterized logarithmic space model-checker for existential bounded variable queries would make it possible to improve Savitch's classical simulation of nondeterministic logarithmic space in deterministic space O(\log^2 n). Further, we define a highly space-efficient model-checker for queries with a bounded number of variables and bounded quantifier alternation rank, and we study its optimality under the assumption that Savitch's Theorem is optimal.
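    To make the problem concrete, here is a minimal sketch (an illustration, not the paper's algorithm): brute-force model-checking of an existential conjunctive query with k variables over a relational structure, trying all |A|^k assignments. The helper names (holds, universe, relations, atoms) are made up for this example; the point is that the structure is large while the query, the parameter, is small.

```python
from itertools import product

def holds(universe, relations, variables, atoms):
    """Does the structure satisfy  exists x1 .. xk (conjunction of atoms)?

    universe  -- iterable of domain elements
    relations -- dict: relation name -> set of tuples over the universe
    variables -- list of variable names x1 .. xk
    atoms     -- list of (relation name, tuple of variable names)
    """
    for assignment in product(universe, repeat=len(variables)):
        env = dict(zip(variables, assignment))
        if all(tuple(env[v] for v in args) in relations[rel]
               for rel, args in atoms):
            return True
    return False

# Example query with 3 variables: is there a directed path of length 2 in E?
E = {("a", "b"), ("b", "c")}
print(holds({"a", "b", "c"}, {"E": E},
            ["x", "y", "z"], [("E", ("x", "y")), ("E", ("y", "z"))]))  # True
```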

    Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks

    Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work has shown theoretically that, with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing generalization bounds is that they are based on uniform convergence and are independent of the training algorithm. In this work, we derive an algorithm-dependent generalization error bound for deep ReLU networks, and show that under certain assumptions on the data distribution, gradient descent (GD) with proper random initialization is able to train a sufficiently over-parameterized DNN to achieve arbitrarily small generalization error. Our work sheds light on the good generalization performance of over-parameterized deep neural networks.
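    The empirical phenomenon the bound addresses is easy to reproduce. Below is a minimal NumPy sketch (an illustration, not the paper's setting or analysis): a wide two-layer ReLU network trained by plain gradient descent from random initialization on a simple linearly labeled task; the width m, learning rate, and data model are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lr, steps = 200, 10, 2000, 0.5, 1000    # m >> n: over-parameterized

w_true = rng.normal(size=d)                      # planted linear labels
X = rng.normal(size=(n, d));    y = np.sign(X @ w_true)
X_te = rng.normal(size=(n, d)); y_te = np.sign(X_te @ w_true)

W = rng.normal(size=(m, d)) / np.sqrt(d)         # first layer (trained)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m) # output layer (kept fixed)

def forward(Z):
    return np.maximum(Z @ W.T, 0.0) @ a          # two-layer ReLU network

for _ in range(steps):
    H = X @ W.T
    resid = (np.maximum(H, 0.0) @ a - y) / n     # gradient of mean squared loss
    W -= lr * ((resid[:, None] * (H > 0) * a).T @ X)

print("train error", np.mean(np.sign(forward(X)) != y))
print("test error ", np.mean(np.sign(forward(X_te)) != y_te))
```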

    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime

    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances which are 'hard' for the classical complexity classes NP, #P, and CH; but, restricted to analytic functions, they map polynomial-time computable ones to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on some given f are sufficient to obtain similar data on Max(f) and int(f), and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions climbing from analytic to smooth corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
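    As a toy version of the question "which parameters on f suffice, and at what cost", here is a minimal sketch (an assumption for illustration, not the paper's machinery): if, besides a black-box evaluator for f on [0,1], we are given a Lipschitz bound L as an explicit parameter, then Max(f) can be approximated to guaranteed precision 2^(-n) by grid sampling, with running time depending on both n and L.

```python
import math

def approx_max(f, L, n):
    """Approximate max of f on [0,1] with error at most 2**-n,
    assuming |f(x) - f(y)| <= L * |x - y| (Lipschitz bound L)."""
    eps = 2.0 ** (-n)
    steps = max(1, math.ceil(L / eps))   # grid spacing <= eps / L
    return max(f(i / steps) for i in range(steps + 1))

# Example: f(x) = x * (1 - x) has Lipschitz constant 1 on [0,1]; max is 0.25.
print(approx_max(lambda x: x * (1 - x), L=1.0, n=10))
```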

    How the structure of precedence constraints may change the complexity class of scheduling problems

    This survey aims at demonstrating that the structure of precedence constraints plays a tremendous role in the complexity of scheduling problems. Indeed, many problems can be NP-hard under general precedence constraints while becoming polynomially solvable for particular classes of precedence constraints. We also show that there are still many exciting challenges in this research area.
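    One classical instance of this phenomenon (an illustration chosen for this summary, not taken from the survey): scheduling unit-time tasks on m identical machines to minimize makespan is NP-hard under arbitrary precedence constraints, yet Hu's level algorithm solves it optimally in polynomial time when the precedences form an in-tree. A minimal sketch:

```python
def hu_schedule(succ, m):
    """Hu's algorithm: unit-time tasks, in-tree precedences, m machines.
    succ maps each task to its unique successor (None for the root)."""
    level = {}
    def lvl(t):                              # path length from t to the root
        if t not in level:
            level[t] = 1 if succ[t] is None else 1 + lvl(succ[t])
        return level[t]
    for t in succ:
        lvl(t)
    pending = {t: 0 for t in succ}           # unfinished predecessors per task
    for t, s in succ.items():
        if s is not None:
            pending[s] += 1
    done, slots = set(), []
    while len(done) < len(succ):
        ready = sorted((t for t in succ if t not in done and pending[t] == 0),
                       key=lambda t: -level[t])[:m]   # highest level first
        for t in ready:
            done.add(t)
            if succ[t] is not None:
                pending[succ[t]] -= 1
        slots.append(ready)                  # tasks run in this unit time slot
    return slots

# In-tree: a, b precede c; c, d precede e (the root); two machines.
print(hu_schedule({"a": "c", "b": "c", "c": "e", "d": "e", "e": None}, m=2))
```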

    An Exponential Lower Bound on the Complexity of Regularization Paths

    For a variety of regularized optimization problems in machine learning, algorithms computing the entire solution path have been developed recently. Most of these problems are quadratic programs parameterized by a single regularization parameter, as is the case, for example, for the Support Vector Machine (SVM). Solution path algorithms compute not only the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. It has been assumed that these piecewise linear solution paths have only linear complexity, i.e. linearly many bends. We prove that for the support vector machine this complexity can be exponential in the number of training points in the worst case. More strongly, we construct a single instance of n input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) = \Theta(2^d) many distinct subsets of support vectors occur as the regularization parameter changes.
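    The object being counted, the distinct support-vector sets along the path, can be observed directly. Below is a minimal sketch using scikit-learn (an illustration, not the paper's worst-case construction): sweep the regularization parameter C of a linear SVM and record each change of the support-vector set, i.e. each bend of the path visible at this grid resolution.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=20) > 0, 1, -1)

sets = []
for C in np.logspace(-3, 3, 60):              # sweep the regularization path
    sv = frozenset(SVC(kernel="linear", C=C).fit(X, y).support_)
    if not sets or sv != sets[-1]:
        sets.append(sv)                       # support-vector set changed

print(len(sets), "distinct support-vector sets along the sampled path")
```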

    Parameterized bounded-depth Frege is not optimal

    A general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider [9]. There the authors concentrate on tree-like Parameterized Resolution (a parameterized version of classical Resolution), and their gap complexity theorem implies lower bounds for that system. The main result of the present paper significantly improves upon this by showing optimal lower bounds for a parameterized version of bounded-depth Frege. More precisely, we prove that the pigeonhole principle requires proofs of size n^{\Omega(k)} in parameterized bounded-depth Frege, and, as a special case, in dag-like Parameterized Resolution. This answers an open question posed in [9]. In the opposite direction, we interpret a well-known technique for FPT algorithms as a DPLL procedure for Parameterized Resolution. Its generalization leads to a proof search algorithm for Parameterized Resolution which in particular shows that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNFs.
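    The hard formulas in question are easy to write down. Here is a minimal sketch (an illustration, not code from the paper) generating the clauses of the pigeonhole principle PHP_n, which asserts that n+1 pigeons fit injectively into n holes; the propositional variable encoded by var(i, j) says "pigeon i sits in hole j", and the formula is unsatisfiable.

```python
from itertools import combinations

def php_clauses(n):
    """Clauses of PHP_n over variables numbered DIMACS-style."""
    var = lambda i, j: i * n + j + 1            # pigeon i in hole j
    clauses = []
    for i in range(n + 1):                      # every pigeon gets some hole
        clauses.append([var(i, j) for j in range(n)])
    for j in range(n):                          # no hole holds two pigeons
        for i1, i2 in combinations(range(n + 1), 2):
            clauses.append([-var(i1, j), -var(i2, j)])
    return clauses

print(len(php_clauses(4)))   # 5 pigeon clauses + 4 * C(5,2) = 45 clauses
```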