
    Inferring from an imprecise Plackett–Luce model: application to label ranking

    Learning ranking models is a difficult task in which data may be scarce and cautious predictions desirable. To address these issues, we explore an extension of the popular parametric probabilistic Plackett–Luce model, often used to model rankings, to the imprecise setting where the estimated parameters are set-valued. In particular, we study how to achieve cautious or conservative inference with it, and we illustrate its application to label ranking problems, a specific supervised learning task.
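    As a concrete illustration of the model behind this abstract, the sketch below computes the standard Plackett–Luce probability of a ranking, plus a naive corner-enumeration approximation of its lower and upper probabilities under interval-valued (set-valued) weights. The function names and the corner heuristic are illustrative assumptions, not the paper's actual cautious-inference procedure.

```python
from itertools import product

def pl_prob(ranking, weights):
    """Plackett-Luce probability of a ranking (item indices, best first):
    at each stage, the next item is chosen proportionally to its weight
    among the items not yet placed."""
    p = 1.0
    remaining = list(ranking)
    for item in ranking:
        total = sum(weights[j] for j in remaining)
        p *= weights[item] / total
        remaining.remove(item)
    return p

def pl_prob_bounds(ranking, intervals):
    """Approximate lower/upper PL probabilities when each weight is only
    known to lie in an interval, by enumerating the corners of the weight
    box. This is a heuristic sketch, not guaranteed exact bounds."""
    probs = [pl_prob(ranking, w) for w in product(*intervals)]
    return min(probs), max(probs)
```

    With equal weights every ranking of n items has probability 1/n!, which gives a quick sanity check on `pl_prob`.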

    Credal Valuation Networks for Machine Reasoning Under Uncertainty

    Contemporary undertakings provide limitless opportunities for the widespread application of machine reasoning and artificial intelligence in situations characterised by uncertainty, hostility and sheer volume of data. The paper develops a valuation network as a graphical system for higher-level fusion and reasoning under uncertainty in support of human operators. Valuations, which are mathematical representations of (uncertain) knowledge and collected data, are expressed as credal sets, defined as coherent interval probabilities in the framework of imprecise probability theory. The basic operations on such credal sets, combination and marginalisation, are defined so as to satisfy the axioms of a valuation algebra. A practical implementation of the credal valuation network is discussed and its utility demonstrated on a small-scale example. Comment: 16 pages, 3 figures
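    The combination and marginalisation operations mentioned above can be sketched by representing a credal set as a finite list of extreme-point distributions. This representation, and the pairwise product-and-renormalize combination rule below, are minimal illustrative assumptions, not the paper's implementation.

```python
from itertools import product as cart

# A valuation over variables is a dict: assignment tuple -> probability.
# A credal set is represented here as a finite list of such extreme-point
# distributions over the same assignments.

def combine(credal_a, credal_b):
    """Combine two credal sets: pointwise product of every pair of
    extreme points, renormalized (a robust-Bayes-style sketch)."""
    out = []
    for p, q in cart(credal_a, credal_b):
        joint = {x: p[x] * q[x] for x in p}
        z = sum(joint.values())
        out.append({x: v / z for x, v in joint.items()})
    return out

def marginalize(credal, keep):
    """Sum out all variable positions not listed in `keep`, separately
    for each extreme point of the credal set."""
    out = []
    for p in credal:
        m = {}
        for x, v in p.items():
            key = tuple(x[i] for i in keep)
            m[key] = m.get(key, 0.0) + v
        out.append(m)
    return out
```

    Combining any credal set with a single uniform extreme point leaves it unchanged, which matches the neutral-element behaviour one expects in a valuation algebra.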

    Maintaining of Internal Consistency of Algebraic Bayesian Networks with Linear and Stellate Structure

    Subject of Research. When working with algebraic Bayesian networks, it is necessary to ensure their correctness in terms of the consistency of the probability estimates of their constituent elements. There are several approaches to automating the maintenance of consistency, characterized by their computational complexity (execution time). This complexity depends on the network structure and the chosen type of consistency. The time for internal consistency maintenance in algebraic Bayesian networks with linear and stellate structure is compared with the time for consistency maintenance of a knowledge pattern covering such networks. The comparison is based on statistical estimates. Method. The essence of the method lies in reducing the number of variables and conditions in the linear programming problems whose solution ensures the maintenance of internal consistency. An experiment was carried out demonstrating the differences in consistency maintenance time between algebraic Bayesian networks with different global structures. Main Results. An improved version of the algorithm for internal consistency maintenance is presented. The linear programming problems to be solved are simplified in comparison with the previous version of the algorithm. Two theorems are formulated and proved, refining the estimates of the number of variables and conditions in the linear programming problems to be solved, as well as the number of the problems themselves. An experiment showed that the proposed software implementation of internal consistency maintenance is superior in running time to the software implementation of consistency maintenance for a complete knowledge pattern. Practical Relevance. The results obtained can be applied in machine learning of algebraic Bayesian networks (including the synthesis of their global structures). The proposed method provides optimal synthesis of global network structures for which it is sufficient to maintain internal consistency during learning and further network processing. Owing to the application of the method, these processes have acceptable computational complexity.
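    To give a flavour of the linear-programming consistency test discussed above, the sketch below checks the simplest special case of two propositions, where LP feasibility reduces to the closed-form Fréchet bounds on the conjunction probability. The function name and this reduction are illustrative assumptions, not the authors' algorithm for full knowledge patterns.

```python
def frechet_consistent(p_a, p_b, p_ab, tol=1e-9):
    """Check whether point estimates for p(a), p(b) and p(a & b) admit
    some joint distribution over {a, b}. For two propositions, the LP
    feasibility condition collapses to the Frechet bounds:
        max(0, p(a) + p(b) - 1) <= p(a & b) <= min(p(a), p(b))."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    in_unit = all(0.0 <= p <= 1.0 for p in (p_a, p_b, p_ab))
    return in_unit and lower - tol <= p_ab <= upper + tol
```

    For larger knowledge patterns no such closed form exists, which is exactly why the paper solves (and works to simplify) linear programming problems instead.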

    A scalable learning algorithm for Kernel Probabilistic Classifier

    In this paper we propose a probabilistic classification algorithm that learns a set of kernel functions associating a probability distribution over classes with an input vector. The model is obtained by maximizing a measure over the probability distributions through a local optimization process. This measure focuses on the faithfulness of the whole induced probability distribution rather than considering only the probabilities of the classes separately. We show that, thanks to a pre-processing computation, the complexity of evaluating this measure for a model no longer depends on the size of the training set. This makes local optimization of the whole set of kernel functions tractable, even for large databases. We evaluate our method on five benchmark datasets and the KDD Cup 2012 dataset.
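    A minimal sketch in the spirit of this abstract: a fixed RBF kernel used in a Nadaraya–Watson-style estimate of a class-probability distribution for an input vector. The paper instead learns the kernel functions themselves, so the names and formulas here are assumptions for illustration only.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two equal-length vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def class_probs(x, train_x, train_y, classes, gamma=1.0):
    """Kernel-weighted class distribution for input x: each training
    point votes for its class with weight k(x, x_i), then the votes
    are normalized into a probability distribution."""
    weights = {c: 0.0 for c in classes}
    for xi, yi in zip(train_x, train_y):
        weights[yi] += rbf(x, xi, gamma)
    z = sum(weights.values())
    return {c: w / z for c, w in weights.items()}
```

    Note that this naive evaluation scans the whole training set; the pre-processing step described in the abstract is precisely what removes that dependence when evaluating the optimization measure.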