
    Learning Recursive Functions Refutably

    Learning of recursive functions refutably means that for every recursive function, the learning machine either has to learn this function or to refute it, i.e., to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. All these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation also hold in the refutable setting. Then we derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.
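
    A minimal sketch may help fix ideas: the learner reads growing initial segments f(0), ..., f(n) of a recursive function and, at each step, outputs either a conjecture or a refutation signal. The hypothesis space, the names, and identification by enumeration below are illustrative assumptions, not the paper's constructions.

        # Hedged sketch: a learner that either converges on a correct
        # conjecture or refutes, i.e., signals that the target function
        # lies outside the class it can learn.
        REFUTE = "refute"

        def refuting_learner(segment, hypothesis_space):
            """Return the name of a hypothesis consistent with the data
            seen so far, or REFUTE once every candidate is ruled out."""
            for name, h in hypothesis_space:
                if all(h(x) == y for x, y in enumerate(segment)):
                    return name        # current conjecture
            return REFUTE              # the whole class is refuted

        # Toy class: the constant functions with values 0..9.
        space = [(f"const_{c}", (lambda c: lambda x: c)(c)) for c in range(10)]

        f = lambda x: 7                # inside the class: learner converges
        print(refuting_learner([f(x) for x in range(3)], space))   # const_7

        g = lambda x: x                # outside the class: learner refutes
        print(refuting_learner([g(x) for x in range(3)], space))   # refute

    The three modes studied in the paper differ in when such a refutation signal may and must be given.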

    On Learning of Functions Refutably

    Learning of recursive functions refutably informally means that for every recursive function, the learning machine either has to learn this function or to refute it, that is, to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation also hold in the refutable setting. From this we derive some hierarchies for refutable learning. Finally, we prove that in general one cannot trade stricter refutability constraints for more liberal learning criteria.

    Learning and consistency

    In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail. It may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power. We give “lower bounds” on this power. We characterize these types by some versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
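
    The consistency requirement itself is easy to state operationally: whatever hypothesis the learner outputs on a data segment must reproduce that segment exactly. The following check, with illustrative names and a deliberately inconsistent toy learner, is a sketch of that requirement, not of the paper's formal definitions.

        # Hedged sketch of the consistency property: M's hypothesis on a
        # segment f(0), ..., f(n-1) must agree with the segment itself.
        def is_consistent_on(M, decode, segment):
            """True iff M's hypothesis on `segment` reproduces `segment`.
            `decode` maps M's output to a callable function."""
            h = decode(M(segment))
            return all(h(x) == y for x, y in enumerate(segment))

        # A learner that always guesses the constant function given by the
        # first value it saw; later data points are ignored, so it can
        # output hypotheses that contradict data it has already seen.
        M = lambda segment: segment[0]
        decode = lambda c: (lambda x: c)

        print(is_consistent_on(M, decode, [5, 5, 5]))   # True
        print(is_consistent_on(M, decode, [5, 6]))      # False: h(1) = 5 != 6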

    Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity

    The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and that the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets, then application of the ideal principle turns into Kolmogorov's minimal sufficient statistic. In general we show that data compression is almost always the best strategy, both in hypothesis identification and prediction.
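
    In symbols, the minimization described above can be written as follows; the notation (m for the universal a priori probability, K for prefix Kolmogorov complexity) follows standard Li-Vitányi conventions and paraphrases the abstract rather than reproducing the paper's exact display.

        % Ideal MDL: pick the hypothesis minimizing the two-part code length,
        % with the prior given by the universal a priori probability m.
        \[
          H_{\mathrm{MDL}} \;=\; \operatorname*{arg\,min}_{H \in \mathcal{H}}
            \bigl( -\log \mathbf{m}(H) - \log \Pr(D \mid H) \bigr)
        \]
        % By the Coding Theorem, $-\log \mathbf{m}(H) = K(H) + O(1)$, so up to
        % an additive constant ideal MDL minimizes $K(H) - \log \Pr(D \mid H)$.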

    Mechanized semantics

    The goal of this lecture is to show how modern theorem provers---in this case, the Coq proof assistant---can be used to mechanize the specification of programming languages and their semantics, and to reason over individual programs and over generic program transformations, as typically found in compilers. The topics covered include: operational semantics (small-step, big-step, definitional interpreters); a simple form of denotational semantics; axiomatic semantics and Hoare logic; generation of verification conditions, with application to program proof; compilation to virtual machine code and its proof of correctness; an example of an optimizing program transformation (dead code elimination) and its proof of correctness.
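
    The flavor of such a mechanization is easy to convey: one defines the syntax and an evaluation relation as inductive definitions inside the prover and then proves lemmas about them. The fragment below is a minimal sketch in Lean (standing in for the lecture's Coq development): a toy expression language with a big-step evaluation relation and one machine-checked derivation.

        -- Syntax of a toy expression language.
        inductive Expr where
          | const : Nat → Expr
          | plus  : Expr → Expr → Expr

        -- Big-step operational semantics as an inductive relation:
        -- Eval e n means "expression e evaluates to n".
        inductive Eval : Expr → Nat → Prop where
          | const (n : Nat) : Eval (.const n) n
          | plus {e₁ e₂ : Expr} {n₁ n₂ : Nat} :
              Eval e₁ n₁ → Eval e₂ n₂ → Eval (.plus e₁ e₂) (n₁ + n₂)

        -- A derivation: 1 + 2 evaluates to 1 + 2 (i.e., 3).
        example : Eval (.plus (.const 1) (.const 2)) (1 + 2) :=
          .plus (.const 1) (.const 2)

    On top of such definitions one then states and proves, e.g., determinism of evaluation or correctness of compilation to virtual machine code, as in the lecture.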

    One-Sided Error Probabilistic Inductive Inference and Reliable Frequency Identification

    For EX- and BC-type identification, one-sided error probabilistic inference and reliable frequency identification on sets of functions are introduced. In particular, we relate the one to the other and show that one-sided error probabilistic inference coincides exactly with reliable frequency identification on any set M. Moreover, we show that reliable EX- and BC-frequency inference forms a new discrete hierarchy having the breakpoints 1, 1/2, 1/3, ...

    Inductive Pattern Formation

    With the extended computational limits of algorithmic recursion, scientific investigation is transitioning away from computationally decidable problems and beginning to address computationally undecidable complexity. The analysis of deductive inference in structure-property models is yielding to the synthesis of inductive inference in process-structure simulations. Process-structure modeling has examined external order parameters of inductive pattern formation, but investigation of the internal order parameters of self-organization has been hampered by the lack of a mathematical formalism with the ability to quantitatively define a specific configuration of points. This investigation addressed this issue of quantitative synthesis. Local space was developed by the Poincaré inflation of a set of points to construct neighborhood intersections, defining topological distance and introducing situated Boolean topology as a local replacement for point-set topology. Parallel development of the local semi-metric topological space, the local semi-metric probability space, and the local metric space of a set of points provides a triangulation of connectivity measures to define the quantitative architectural identity of a configuration and structure-independent axes of a structural configuration space. The recursive sequence of intersections constructs a probabilistic discrete spacetime model of interacting fields to define the internal order parameters of self-organization, with order parameters external to the configuration modeled by adjusting the morphological parameters of individual neighborhoods and the interplay of excitatory and inhibitory point sets. The evolutionary trajectory of a configuration maps the development of specific hierarchical structure that is emergent from a specific set of initial conditions, with nested boundaries signaling the nonlinear properties of local causative configurations. This exploration of architectural configuration space concluded with initial process-structure-property models of deductive and inductive inference spaces. In the computationally undecidable problem of human niche construction, an adaptive-inductive pattern formation model with predictive control organized the bipartite recursion between an information structure and its physical expression as hierarchical ensembles of artificial neural network-like structures. The union of architectural identity and bipartite recursion generates a predictive structural model of an evolutionary design process, offering an alternative to the limitations of cognitive descriptive modeling. The low computational complexity of these models enables them to be embedded in physical constructions to create the artificial life forms of a real-time autonomously adaptive human habitat.
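
    One concrete ingredient above, inflating points to neighborhoods and reading distance off their intersections, can be sketched directly; the radius parameter and the hop-count reading below are illustrative assumptions, not the dissertation's formal construction.

        # Hedged sketch: inflate each point to a ball of radius r, connect
        # points whose balls intersect, and take topological distance to be
        # the hop count in the resulting intersection graph.
        from itertools import combinations
        from collections import deque

        def intersection_graph(points, r):
            """Edges between points whose radius-r balls overlap
            (Euclidean distance < 2r)."""
            adj = {i: set() for i in range(len(points))}
            for i, j in combinations(range(len(points)), 2):
                if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) < (2 * r) ** 2:
                    adj[i].add(j)
                    adj[j].add(i)
            return adj

        def topological_distance(adj, s, t):
            """Hop count from s to t (breadth-first search), or None
            if the configuration is disconnected."""
            seen, queue = {s}, deque([(s, 0)])
            while queue:
                u, d = queue.popleft()
                if u == t:
                    return d
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        queue.append((v, d + 1))
            return None

        pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (5.0, 0.0)]
        adj = intersection_graph(pts, r=0.6)
        print(topological_distance(adj, 0, 2))   # 2: two overlaps away
        print(topological_distance(adj, 0, 3))   # None: isolated point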