
    Learning and consistency

    In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may fail completely: it may render the learning problem unsolvable, or it may exclude any efficient solution. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power, and we give “lower bounds” on this power. We characterize these types by versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
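    The notion of consistency discussed here can be illustrated by identification by enumeration over a toy finite hypothesis class: the learner always outputs the first hypothesis that correctly reproduces every example seen so far. The sketch below is only a minimal illustration; the hypothesis class and its names are invented for the example and are not from the paper.

```python
def consistent_learner(hypothesis_space, data):
    """Identification by enumeration: return the name of the first
    hypothesis in the fixed enumeration that is consistent with the
    data, i.e. reproduces every observed (input, output) pair exactly.
    Returns None if no hypothesis in the space is consistent."""
    for name, h in hypothesis_space:
        if all(h(x) == y for x, y in data):
            return name
    return None

# A toy (illustrative) hypothesis class, enumerated in a fixed order.
space = [
    ("zero", lambda x: 0),
    ("identity", lambda x: x),
    ("square", lambda x: x * x),
]
```

    On an empty sample the learner outputs the first hypothesis; as more data arrive, the output only changes when the current hypothesis becomes inconsistent, so the hypothesis always reflects all data seen so far.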

    One-Sided Error Probabilistic Inductive Inference and Reliable Frequency Identification

    For EX- and BC-type identification, one-sided error probabilistic inference and reliable frequency identification on sets of functions are introduced. In particular, we relate the one to the other and show that one-sided error probabilistic inference coincides exactly with reliable frequency identification on any set M. Moreover, we show that reliable EX- and BC-frequency inference form a new discrete hierarchy with breakpoints 1, 1/2, 1/3, ...

    On the Teachability of Randomized Learners

    The present paper introduces a new model for teaching randomized learners. Our model, though based on the classical teaching dimension model, makes it possible to study the influence of various parameters such as the learner's memory size, its ability to provide or withhold feedback, and the order in which examples are presented. Moreover, within the new model it is possible to investigate new aspects of teaching, such as teaching from positive data only or teaching with inconsistent teachers. Finally, we provide characterization theorems for teachability from positive data, for both ordinary and inconsistent teachers, with and without feedback.

    On Learning of Functions Refutably

    Learning of recursive functions refutably informally means that for every recursive function, the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three ways of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmical richness. Furthermore, all these types are closed under union, though in different strengths. Also, these types are shown to be different with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation stand refutably. From this we derive some hierarchies for refutable learning. Finally, we prove that in general one cannot trade stricter refutability constraints for more liberal learning criteria.

    Editors' Introduction to [Algorithmic Learning Theory: 21st International Conference, ALT 2010, Canberra, Australia, October 6-8, 2010. Proceedings]

    Learning theory is an active research area that incorporates ideas, problems, and techniques from a wide range of disciplines, including statistics, artificial intelligence, information theory, pattern recognition, and theoretical computer science. The research reported at the 21st International Conference on Algorithmic Learning Theory (ALT 2010) ranges over areas such as query models, online learning, inductive inference, boosting, kernel methods, complexity and learning, reinforcement learning, unsupervised learning, grammatical inference, and algorithmic forecasting. In this introduction we give an overview of the five invited talks and the regular contributions of ALT 2010.

    Learning Recursive Functions Refutably

    Learning of recursive functions refutably means that for every recursive function, the learning machine has either to learn this function or to refute it, i.e., to signal that it is not able to learn it. Three ways of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmical richness. All these types are closed under union, though in different strengths. Also, these types are shown to be different with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation stand refutably. Then we derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.

    PATHWAYS LINKING EARLY LIFE STRESS, METABOLIC SYNDROME, AND THE INFLAMMATORY MARKER FIBRINOGEN IN DEPRESSED INPATIENTS

    Background: Previous research has shown that metabolic syndrome as well as early life stress can account for immunoactivation (e.g. in the form of altered fibrinogen levels) in patients with major depression. This study aims to assess the relationship between components of metabolic syndrome, early life stress, and fibrinogen levels, taking the severity of depression into consideration. Subjects and methods: Measures of early life stress and signs of metabolic syndrome were collected in 58 adult inpatients diagnosed with depression. The relationships between the factors were assessed by means of path analyses. Two main models were tested: the first model with metabolic syndrome mediating between early life stress and fibrinogen levels, and the second model without the mediating effect of metabolic syndrome. Results: The first model was not supported by our data (χ²=7.02, df=1, p=0.008, CFI=0.00, NNFI=-9.44, RMSEA=0.50). The second model, however, provided an excellent fit for the data (χ²=0.02, df=1, p=0.90, CFI=1.00, NNFI=2.71, RMSEA=0.00). Extending the models by introducing severity of depression into them did not yield good indices of fit. Conclusions: The developmental trajectory between early life stress and inflammation appears not to be mediated by metabolic syndrome associated factors in our sample. Possible reasons, including severity and type of early life stress as well as potential epigenetic influences, are discussed.

    Learning via Queries with Teams and Anomalies

    Most work in the field of inductive inference treats the learning machine as a passive recipient of data. In a prior paper, this passive approach was compared to an active form of learning in which the machine is allowed to ask questions. In this paper we continue the study of machines that ask questions by comparing such machines to teams of passive machines. This yields, via work of Pitt and Smith, a comparison of active learning with probabilistic learning. Also considered are query inference machines that learn an approximation of what is desired; the approximation differs from the desired result in finitely many anomalous places.

    Analysis of signalling pathways using continuous time Markov chains

    We describe a quantitative modelling and analysis approach for signal transduction networks and illustrate it with an example, the RKIP-inhibited ERK pathway [CSK+03]. Our models are high-level descriptions of continuous-time Markov chains: proteins are modelled by synchronous processes and reactions by transitions. Concentrations are modelled by discrete, abstract quantities. The main advantage of our approach is that, using a (continuous-time) stochastic logic and the PRISM model checker, we can perform quantitative analysis such as “what is the probability that, if a concentration reaches a certain level, it will remain at that level thereafter?” or “how does varying a given reaction rate affect that probability?” We also perform standard simulations and compare our results with a traditional ordinary differential equation model. An interesting result is that, for the example pathway, only a small number of discrete data values is required to render the simulations practically indistinguishable.
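    The discrete, stochastic view of reactions described above, with molecule counts as states and reactions as transitions of a continuous-time Markov chain, can be illustrated with Gillespie's stochastic simulation algorithm. The sketch below simulates a single hypothetical binding reaction A + B → C; the reaction, rate constant, and counts are illustrative only and do not come from the RKIP/ERK model or use PRISM.

```python
import random

def gillespie(a, b, c, k, t_max, seed=0):
    """Sample one trajectory of the CTMC for the single reaction
    A + B -> C with rate constant k, starting from the discrete
    molecule counts (a, b, c). Returns the list of visited states
    as (time, a, b, c) tuples."""
    rng = random.Random(seed)
    t = 0.0
    trace = [(t, a, b, c)]
    while t < t_max:
        rate = k * a * b              # propensity of the only reaction
        if rate == 0:
            break                     # A or B exhausted: absorbing state
        t += rng.expovariate(rate)    # exponentially distributed holding time
        a, b, c = a - 1, b - 1, c + 1 # fire the reaction A + B -> C
        trace.append((t, a, b, c))
    return trace
```

    Because each transition consumes one A and one B and produces one C, the sums a + c and b + c are conserved along every trajectory, which gives a simple sanity check on the simulation.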