60,057 research outputs found

    Inductive inference of recursive functions: complexity bounds

    This survey covers the principal results on the complexity of inductive inference for recursively enumerable classes of total recursive functions. Inductive inference is the process of finding an algorithm from sample computations. When the given class of functions is recursively enumerable, a natural complexity measure is easy to define, namely the worst-case number of mind changes for the first n functions in the given class. Naturally, the complexity depends not only on the class but also on the numbering, i.e. which function is first, which is second, etc. It turns out that if the result of inference is a Gödel number, the complexity of inference may vary between log₂ n + o(log₂ n) and an arbitrarily slowly growing recursive function. If the result of inference is an index in the numbering of the recursively enumerable class, the complexity may go up to const · n. Additionally, effects previously found in Kolmogorov complexity theory appear in the complexity of inductive inference as well.
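
    To make the measure concrete, here is a minimal Python sketch (ours, not the survey's) of Gold-style identification by enumeration, the strategy whose mind changes the measure counts; all names are illustrative.

    # Minimal sketch (ours, not the survey's): identification by enumeration
    # over an enumerated class of total functions, counting mind changes.
    # With hypotheses taken as indices into the enumeration, the worst case
    # over the first n functions is on the order of const * n mind changes,
    # matching the bound quoted above for class indices.
    def identify_by_enumeration(cls, target, sample_points):
        """cls: the enumeration f_0, f_1, ... as a list of total functions;
        target: the function being learned (a member of cls);
        sample_points: inputs revealed to the learner one at a time."""
        hypothesis, mind_changes, data = None, 0, []
        for x in sample_points:
            data.append((x, target(x)))
            # Conjecture the least index consistent with all data seen so far.
            new_hyp = next(i for i, f in enumerate(cls)
                           if all(f(a) == b for a, b in data))
            if hypothesis is not None and new_hyp != hypothesis:
                mind_changes += 1
            hypothesis = new_hyp
        return hypothesis, mind_changes

    # Toy class: the constant functions f_i(x) = i.
    enumeration = [lambda x, i=i: i for i in range(10)]
    print(identify_by_enumeration(enumeration, enumeration[7], range(3)))  # (7, 0)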

    Complexity Characterization in a Probabilistic Approach to Dynamical Systems Through Information Geometry and Inductive Inference

    Information geometric techniques and inductive inference methods hold great promise for solving computational problems of interest in classical and quantum physics, especially with regard to the complexity characterization of dynamical systems in terms of their probabilistic description on curved statistical manifolds. In this article, we investigate the possibility of describing the macroscopic behavior of complex systems in terms of the underlying statistical structure of their microscopic degrees of freedom by use of statistical inductive inference and information geometry. We review the Maximum Relative Entropy (MrE) formalism and the theoretical structure of the information geometrodynamical approach to chaos (IGAC) on statistical manifolds. Special focus is devoted to the roles played by the sectional curvature, the Jacobi field intensity, and the information geometrodynamical entropy (IGE). These quantities serve as powerful information geometric complexity measures of information-constrained dynamics associated with arbitrary chaotic and regular systems defined on the statistical manifold. Finally, the application of these information geometric techniques to several theoretical models is presented.
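
    For orientation, two textbook ingredients that such analyses presuppose (standard definitions, not results of the article): the Fisher–Rao metric, which makes a parametric family of distributions p(x|θ) a curved statistical manifold, and the geodesic deviation equation, whose solutions J are the Jacobi fields whose intensity is tracked above.

    % Fisher-Rao metric on the statistical manifold of distributions p(x|theta):
    g_{ij}(\theta) = \int p(x \mid \theta)\,
        \frac{\partial \log p(x \mid \theta)}{\partial \theta^{i}}\,
        \frac{\partial \log p(x \mid \theta)}{\partial \theta^{j}}\, dx
    % Geodesic deviation (Jacobi) equation along a geodesic x(tau); growth of
    % the Jacobi field J signals instability of nearby trajectories:
    \frac{D^{2} J^{k}}{d\tau^{2}}
        + R^{k}{}_{ilm}\, \frac{dx^{i}}{d\tau}\, J^{l}\, \frac{dx^{m}}{d\tau} = 0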

    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of the underlying decision problems. This position paper proposes sciduction, an approach that tackles these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses and actively combines the two: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software, (ii) synthesis of loop-free programs, and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
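
    The inductive/deductive interplay described here is the shape of counterexample-guided loops; the following toy Python sketch (ours, not the paper's; the structure hypothesis "the program is x ↦ c·x for an integer c" and all names are illustrative) shows deduction feeding examples to induction.

    # Toy counterexample-guided loop in the spirit of the abstract.
    def spec(x):                  # the specification to be met
        return 3 * x

    def learner(examples):        # inductive step: generalize from examples
        # Propose the least integer c consistent with every example so far.
        return next(c for c in range(100)
                    if all(c * x == y for x, y in examples))

    def verifier(c):              # deductive step: look for a counterexample
        # Brute force stands in for a constraint solver / decision procedure.
        return next((x for x in range(-50, 50) if c * x != spec(x)), None)

    examples = [(0, spec(0))]     # a vacuous seed example
    candidate = learner(examples)
    while (cex := verifier(candidate)) is not None:
        examples.append((cex, spec(cex)))   # deduction feeds induction
        candidate = learner(examples)
    print("synthesized coefficient:", candidate)   # -> 3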

    Towards a Statistical Geometrodynamics

    Can the spatial distance between two identical particles be explained in terms of the extent to which one can be distinguished from the other? Is the geometry of space a macroscopic manifestation of an underlying microscopic statistical structure? Is geometrodynamics derivable from general principles of inductive inference? Tentative answers are suggested by a model of geometrodynamics based on the statistical concepts of entropy, information geometry, and entropic dynamics.

    A graph regularization based approach to transductive class-membership prediction

    Considering the increasing availability of structured, machine-processable knowledge in the context of the Semantic Web, relying purely on deductive inference may be limiting. This work proposes a new method for similarity-based class-membership prediction in Description Logic knowledge bases. The underlying idea is to propagate class-membership information among similar individuals; the method is non-parametric in nature and characterised by interesting complexity properties, making it a potential candidate for large-scale transductive inference. We also evaluate its effectiveness with respect to other approaches based on inductive inference in the Semantic Web literature.
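
    A generic numpy sketch of the propagation idea (label spreading on a similarity graph, a standard scheme in this family; the paper's concrete formulation and similarity measure may differ):

    # Label spreading on a similarity graph (generic sketch, not the paper's
    # exact method): class membership flows between similar individuals.
    import numpy as np

    def propagate(W, Y, alpha=0.9, iters=100):
        """W: (n, n) symmetric similarity matrix between individuals;
        Y: (n, k) one-hot class memberships, zero rows where unknown."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))       # symmetric normalization
        F = Y.astype(float)
        for _ in range(iters):                # F <- alpha*S*F + (1-alpha)*Y
            F = alpha * S @ F + (1 - alpha) * Y
        return F.argmax(axis=1)               # predicted class per individual

    # Individuals 0 and 1 are similar; only 0 and 2 have known memberships.
    W = np.array([[0.0, 1.0, 0.1], [1.0, 0.0, 0.1], [0.1, 0.1, 0.0]])
    Y = np.array([[1, 0], [0, 0], [0, 1]])
    print(propagate(W, Y))                    # -> [0 0 1]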

    Editors’ Introduction to [Algorithmic Learning Theory: 18th International Conference, ALT 2007, Sendai, Japan, October 1-4, 2007. Proceedings]

    Learning theory is an active research area that incorporates ideas, problems, and techniques from a wide range of disciplines including statistics, artificial intelligence, information theory, pattern recognition, and theoretical computer science. The research reported at the 18th International Conference on Algorithmic Learning Theory (ALT 2007) ranges over areas such as unsupervised learning, inductive inference, complexity and learning, boosting and reinforcement learning, query learning models, grammatical inference, online learning and defensive forecasting, and kernel methods. In this introduction we give an overview of the five invited talks and the regular contributions of ALT 2007.

    Editors' Introduction to [Algorithmic Learning Theory: 21st International Conference, ALT 2010, Canberra, Australia, October 6-8, 2010. Proceedings]

    Learning theory is an active research area that incorporates ideas, problems, and techniques from a wide range of disciplines including statistics, artificial intelligence, information theory, pattern recognition, and theoretical computer science. The research reported at the 21st International Conference on Algorithmic Learning Theory (ALT 2010) ranges over areas such as query models, online learning, inductive inference, boosting, kernel methods, complexity and learning, reinforcement learning, unsupervised learning, grammatical inference, and algorithmic forecasting. In this introduction we give an overview of the five invited talks and the regular contributions of ALT 2010.

    Neuron with Steady Response Leads to Better Generalization

    Regularization can mitigate the generalization gap between training and inference by introducing inductive bias. Existing works have proposed various inductive biases from diverse perspectives. However, none of them explores inductive bias from the perspective of the class-dependent response distributions of individual neurons. In this paper, we conduct a substantial analysis of the characteristics of such distributions. Based on the analysis results, we articulate the Neuron Steadiness Hypothesis: a neuron whose responses to instances of the same class are similar leads to better generalization. Accordingly, we propose a new regularization method called Neuron Steadiness Regularization (NSR) to reduce neuron intra-class response variance. Based on the Complexity Measure, we theoretically guarantee the effectiveness of NSR for improving generalization. We conduct extensive experiments on Multilayer Perceptrons, Convolutional Neural Networks, and Graph Neural Networks with popular benchmark datasets from diverse domains, which show that Neuron Steadiness Regularization consistently outperforms the vanilla versions of the models with significant gains and low additional computational overhead.
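
    A rough PyTorch-style sketch of such an intra-class response-variance penalty (our illustration; the paper's exact NSR formulation may differ):

    # Penalize within-class variance of a layer's neuron responses (sketch in
    # the spirit of the abstract; the paper's exact NSR definition may differ).
    import torch

    def intra_class_variance(h, y):
        """h: (batch, d) activations of one hidden layer; y: (batch,) labels.
        Returns the within-class response variance averaged over classes
        and neurons."""
        penalties = []
        for c in y.unique():
            hc = h[y == c]                    # responses to one class
            if hc.shape[0] > 1:               # variance needs >= 2 samples
                penalties.append(hc.var(dim=0).mean())
        return torch.stack(penalties).mean() if penalties else h.new_zeros(())

    # Usage inside a training step (lambda_nsr is a tuning knob):
    #   loss = task_loss + lambda_nsr * intra_class_variance(hidden, labels)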