
    Combining Models of Approximation with Partial Learning

    In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one index infinitely often, and this index must be correct for the target object. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179--191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function by outputting a sequence of hypotheses which, in addition, are also almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Here three variants of approximate learning are introduced and investigated with respect to whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. The combinability of other partial learning criteria is also briefly studied. Comment: 28 pages
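
    For readers unfamiliar with the criterion, the following is a sketch of the standard definition of partial learning in Gold-style inductive inference (the notation is assumed here, not taken from the paper: $\varphi_e$ denotes the function computed by program $e$, and $f[n]$ the length-$n$ initial segment of $f$):

```latex
% Partial learning of a recursive function f by a learner M:
% exactly one index is output infinitely often, and it is correct.
\[
  M \text{ partially learns } f
  \iff
  \exists!\, e \; \bigl[\, |\{\, n : M(f[n]) = e \,\}| = \infty \,\bigr]
  \ \text{and, for this } e,\ \varphi_e = f .
\]
```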

    Logical Omnipotence and Two Notions of Implicit Belief

    The most widespread models of rational reasoners (the model based on modal epistemic logic and the model based on probability theory) exhibit the problem of logical omniscience. The most common strategy for avoiding this problem is to interpret the models as describing the explicit beliefs of an ideal reasoner, but only the implicit beliefs of a real reasoner. I argue that this strategy faces serious normative issues. In this paper, I present the more fundamental problem of logical omnipotence, which highlights the normative content of the problem of logical omniscience. I introduce two developments of the notion of implicit belief (accessible and stable belief) and use them in two versions of the most common strategy applied to the problem of logical omnipotence.
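
    As a point of reference, the closure conditions that constitute logical omniscience in a normal modal epistemic logic can be stated as follows (a textbook formulation with a belief operator $B$, assumed here rather than quoted from the paper):

```latex
% Logical omniscience: belief is closed under provable implication,
% and every theorem is believed (necessitation).
\[
  \frac{\vdash \varphi \rightarrow \psi \qquad B\varphi}{B\psi}
  \qquad\qquad
  \frac{\vdash \varphi}{B\varphi}
\]
```

    No finite reasoner satisfies these rules, which is why such models are commonly reinterpreted as describing implicit rather than explicit belief.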

    A Map of Update Constraints in Inductive Inference

    We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions, both for the case of complete information learning and for set-driven learning. This completes the picture for these well-studied \emph{delayable} learning restrictions. Further insight is gained from different characterizations of \emph{conservative} learning in terms of variants of \emph{cautious} learning. Our analyses greatly benefit from general theorems we give, for example showing that learners with exclusively delayable restrictions can always be assumed total. Comment: fixed a mistake in Theorem 21; the result is the same.
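
    As a sketch of the two restrictions the characterizations relate (standard definitions assumed, not taken from the paper: $W_e$ is the language of hypothesis $e$, $\mathrm{content}(\sigma)$ the set of data in a finite sequence $\sigma$, and $\sigma \sqsubseteq \tau$ means $\sigma$ is a prefix of $\tau$):

```latex
% Conservative: the learner changes its mind only on data that
% contradicts its current hypothesis.
\[
  \sigma \sqsubseteq \tau \ \wedge\
  \mathrm{content}(\tau) \subseteq W_{M(\sigma)}
  \;\Longrightarrow\; M(\tau) = M(\sigma)
\]
% Cautious: no later hypothesis is a proper subset of an earlier one.
\[
  \sigma \sqsubseteq \tau
  \;\Longrightarrow\;
  W_{M(\tau)} \not\subsetneq W_{M(\sigma)}
\]
```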

    Towards an Atlas of Computational Learning Theory

    A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about which restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be a structured approach to determining the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps: one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.

    Learning and consistency

    In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail. It may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power. We give “lower bounds” on this power. We characterize these types by some versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
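
    To make the consistency requirement concrete, here is a minimal Python sketch (the function names and example data are illustrative, not from the paper) of the check a consistent function learner must pass: the current hypothesis must reproduce every input/output pair seen so far.

```python
from typing import Callable, Dict

def is_consistent(hypothesis: Callable[[int], int],
                  data: Dict[int, int]) -> bool:
    """True iff the hypothesis correctly and completely reflects
    every (x, f(x)) pair observed so far."""
    return all(hypothesis(x) == y for x, y in data.items())

# Data sampled from the target function f(x) = x**2.
observed = {0: 0, 1: 1, 2: 4, 3: 9}

print(is_consistent(lambda x: x * x, observed))  # True: consistent
print(is_consistent(lambda x: 2 * x, observed))  # False: disagrees on x = 1
```

    The abstract's point is that demanding this check of every hypothesis can render a learning problem unsolvable or exclude efficient solutions, even though the check itself looks harmless.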

    Approximations in Learning & Program Analysis

    In this work we compare and contrast the approximations made in the problems of Data Compression, Program Analysis and Supervised Machine Learning. Gödel's Incompleteness Theorem mandates that any formal system rich enough to include the integers will have unprovable truths. Thus non-computable problems abound, including, but not limited to, Program Analysis, Data Compression and Machine Learning. Indeed, it can be shown that there are more non-computable functions than computable ones. Due to non-computability, precise solutions for these problems are not feasible, and only approximate solutions may be computed.

    Presently, each of the problems of Data Compression, Machine Learning and Program Analysis is studied independently. Each problem has its own multitude of abstractions, algorithms and notions of tradeoffs among the various parameters. It would be interesting to have a unified framework, across disciplines, that makes explicit the abstraction specifications and the ensuing tradeoffs. Such a framework would promote interdisciplinary research and develop a unified body of knowledge to tackle non-computable problems. As a small step toward that larger goal, we propose an Information Oriented Model of Computation that allows comparing the approximations used in Data Compression, Program Analysis and Machine Learning. To the best of our knowledge, this is the first work to propose a method for the systematic comparison of approximations across disciplines. The model describes computation as set reconstruction. Non-computability is then presented as the inability to perfectly reconstruct sets.

    In an effort to compare and contrast the approximations, selected algorithms for Data Compression, Machine Learning and Program Analysis are analyzed using our model. We were able to relate the problems of Data Compression, Machine Learning and Program Analysis as specific instances of the general problem of approximate set reconstruction. We demonstrate the use of abstract interpreters in compression schemes. We then compare and contrast the approximations in Program Analysis and Supervised Machine Learning. We demonstrate the use of ordered structures, fixpoint equations and least-fixpoint approximation computations, all characteristic of Abstract Interpretation (Program Analysis), in Machine Learning algorithms. We also present the idea that widening, like regression, is an inductive learner. Regression generalizes known states to a hypothesis. Widening generalizes abstract states on an iteration chain to a fixpoint. While regression usually aims to minimize the total error (the sum of false positives and false negatives), widening aims for soundness and hence errs on the side of false positives in order to have zero false negatives. We use this duality to derive a generic widening operator from regression on the set of abstract states. The results of the dissertation are the first steps towards a unified approach to approximate computation. Consequently, our preliminary results lead to many more interesting questions, some of which we discuss in the concluding chapter.
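
    The claimed duality between widening and regression can be illustrated with the classic interval widening operator from abstract interpretation. The sketch below (a standard textbook operator, not the dissertation's regression-derived generic one) shows widening generalizing the abstract states on an iteration chain: any bound still moving is jumped to infinity, so the ascending chain reaches a fixpoint in finitely many steps, erring only on the side of false positives.

```python
import math
from typing import Tuple

Interval = Tuple[float, float]  # (low, high); bounds may be infinite

def widen(old: Interval, new: Interval) -> Interval:
    """Classic interval widening: any bound that is still moving is
    generalized to infinity, guaranteeing that the ascending iteration
    chain stabilizes (sound, but may over-approximate)."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

# Abstract iteration of `i = 0; while ...: i += 1`:
state = (0.0, 0.0)
while True:
    next_state = (state[0], state[1] + 1)  # abstract effect of i += 1
    widened = widen(state, next_state)
    if widened == state:                   # fixpoint reached
        break
    state = widened

print(state)  # (0.0, inf): zero false negatives, some false positives
```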

    Noisy inference and oracles


    Effective strategies for enumeration games

    We study the existence of effective winning strategies in certain infinite games, so-called enumeration games. Originally, these were introduced by Lachlan (1970) in his study of the lattice of recursively enumerable sets. We argue that they provide a general and interesting framework for computable games and may also be well suited for modelling reactive systems. Our results are obtained by reductions of enumeration games to regular games; for the latter, effective winning strategies exist by a classical result of Büchi and Landweber. This provides more perspicuous proofs for several of Lachlan's results as well as a key to new results. It also shows how strategies for regular games can be scaled up so that they apply to much more general games.
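
    For context, the classical result invoked here can be stated as follows (a standard formulation of the Büchi–Landweber theorem, assumed rather than quoted from the paper; the LaTeX presumes an amsthm `theorem` environment):

```latex
% Büchi--Landweber (1969): regular games are effectively determined.
\begin{theorem}[B\"uchi--Landweber]
  In every infinite game whose winning condition is an
  $\omega$-regular set, one of the two players has a winning strategy
  computable by a finite-state transducer, and it is decidable which
  player wins.
\end{theorem}
```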