
    Robust Inference of Trees

    This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper, reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectations for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees. Comment: 26 pages, 7 figures.
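
    The interval expectations the imprecise Dirichlet model produces can be sketched in a few lines. This is a generic illustration of Walley's model, not the paper's algorithm; the function name and the common choice s = 2 are ours:

```python
# A generic sketch of Walley's imprecise Dirichlet model (IDM); the function
# name and the hyperparameter choice s = 2 are illustrative assumptions.
def idm_interval(counts, i, s=2.0):
    """Lower/upper posterior expectation of category i's chance under the IDM."""
    total = sum(counts)
    # The IDM adds s "extra" observations whose allocation is left vacuous:
    # assigning none of them to category i gives the lower bound,
    # assigning all of them gives the upper bound.
    return counts[i] / (total + s), (counts[i] + s) / (total + s)

counts = [3, 1, 0]  # only four observations over three categories
print(idm_interval(counts, 0))  # wide interval: the data are scarce
print(idm_interval(counts, 2))  # an unseen category keeps non-trivial upper mass
```

    The width of each interval shrinks as observations accumulate, which is what lets inferences such as the common tree substructure become more decisive with more data.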

    Game-theoretic learning using the imprecise Dirichlet model

    We discuss two approaches for choosing a strategy in a two-player game. We suppose that the game is played for a large number of rounds, which allows the players to use observations of past play to guide them in choosing a strategy. Central in these approaches is the way the opponent's next strategy is assessed; both a precise and an imprecise Dirichlet model are used. The observations of the opponent's past strategies can then be used to update the model and obtain new assessments. To some extent, the imprecise probability approach allows us to avoid making arbitrary initial assessments. To be able to choose a strategy, the assessment of the opponent's strategy is combined with rules for selecting an optimal response to it: a so-called best response or a maximin strategy. Together with the updating procedure, this allows us to choose strategies for all the rounds of the game. The resulting playing sequence can then be analysed to investigate whether the strategy choices converge to equilibria.

    Command line completion: learning and decision making using the imprecise Dirichlet model


    Learning in games using the imprecise Dirichlet model

    We propose a new learning model for finite strategic-form two-player games based on fictitious play and Walley's imprecise Dirichlet model [P. Walley, Inferences from multinomial data: learning about a bag of marbles, J. Roy. Statist. Soc. B 58 (1996) 3–57]. This model allows the initial beliefs of the players about their opponent's strategy choice to be near-vacuous or imprecise instead of being precisely defined. A generalization similar to the one proposed by Fudenberg and Kreps [D. Fudenberg, D.M. Kreps, Learning mixed equilibria, Games Econ. Behav. 5 (1993) 320–367] for fictitious play can be made, where assumptions about immediate behavior are replaced with assumptions about asymptotic behavior. We also obtain similar convergence results for this generalization: if there is convergence, it will be to an equilibrium.
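
    One flavour of such a learner can be sketched with a Gamma-maximin response. This is our own toy reconstruction, not the paper's procedure: the opponent's strategy is assessed with the IDM, and the player maximizes the lower expected payoff over the resulting credal set, whose extreme points put all the extra mass s on a single opponent action:

```python
# Toy sketch (assumed, not the paper's exact model): fictitious play where the
# opponent's mixed strategy is assessed with the IDM and the player picks a
# Gamma-maximin response (maximize the lower expected payoff).
def idm_credal_vertices(counts, s=2.0):
    """Extreme points of the IDM credal set over opponent actions."""
    total = sum(counts) + s
    # Each vertex assigns the extra mass s entirely to one category.
    return [[(c + (s if j == i else 0)) / total for j, c in enumerate(counts)]
            for i in range(len(counts))]

def gamma_maximin(payoffs, counts, s=2.0):
    """payoffs[a][b]: my payoff when I play action a and the opponent plays b."""
    vertices = idm_credal_vertices(counts, s)
    def lower_expectation(a):
        # A linear expectation attains its minimum at a vertex of the credal set.
        return min(sum(p * u for p, u in zip(v, payoffs[a])) for v in vertices)
    return max(range(len(payoffs)), key=lower_expectation)

payoffs = [[1, -1], [-1, 1]]           # matching pennies, row player
print(gamma_maximin(payoffs, [5, 2]))  # opponent observed playing 0 five times, 1 twice
```

    With near-vacuous beliefs (few observations relative to s), the credal set is wide and the lower expectations are pessimistic; as observations accumulate, the choice approaches a precise best response to the empirical frequencies.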

    A nonparametric predictive alternative to the Imprecise Dirichlet Model: the case of a known number of categories

    Nonparametric Predictive Inference (NPI) is a general methodology for learning from data in the absence of prior knowledge and without adding unjustified assumptions. This paper develops NPI for multinomial data where the total number of possible categories for the data is known. We present the general upper and lower probabilities and several of their properties. We also comment on differences between this NPI approach and corresponding inferences based on Walley's Imprecise Dirichlet Model.

    Command line completion: an illustration of learning and decision making using the imprecise Dirichlet model

    A method of command line completion based on probabilistic models is described. The method supplements the existing deterministic ones. The probabilistic models are developed within the context of imprecise probabilities. An imprecise Dirichlet model is used to represent the assessments about all possible completions and to allow for learning by observing the commands typed previously. Due to the use of imprecise probabilities, a partial (instead of a linear) ordering of the possible completion actions is constructed during decision making. Markov models can additionally be incorporated to take recurring sequences of commands into account.
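
    The partial ordering mentioned in the abstract can be illustrated with interval dominance over IDM intervals. The command history and all names below are hypothetical, not taken from the paper:

```python
# Hypothetical illustration: rank completion candidates by IDM probability
# intervals; interval dominance yields only a partial order.
def idm_interval(count, total, s=2.0):
    """IDM lower/upper probability for a candidate seen `count` times."""
    return count / (total + s), (count + s) / (total + s)

def dominates(a, b):
    """a interval-dominates b if a's lower bound exceeds b's upper bound."""
    return a[0] > b[1]

history = {"git status": 8, "git stash": 1, "git stage": 1}  # made-up counts
total = sum(history.values())
intervals = {cmd: idm_interval(n, total) for cmd, n in history.items()}

# Print the pairs where one completion strictly dominates another.
for x in history:
    for y in history:
        if x != y and dominates(intervals[x], intervals[y]):
            print(x, ">", y)
```

    Here the frequent completion dominates both rare ones, but the two rare completions have overlapping intervals and remain incomparable, which is exactly why the resulting ordering is partial rather than linear.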

    Learning in Markov models using the imprecise Dirichlet model

    The objective of our research is, first, the development of a method for learning the transition probabilities in (possibly hidden) Markov models using imprecise probabilities, and, second, the application of this method to some real-life problems. The learning model used will be the imprecise Dirichlet model, an extension of the precise Dirichlet model to the theory of imprecise probabilities. Possible applications are gene-sequence alignment and pre-fetching of web pages.
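
    Applying the IDM row by row to a transition-count matrix gives interval estimates of the transition probabilities. The following is our own minimal sketch of that idea, not code from the work described:

```python
# Minimal sketch (assumed, not the authors' implementation): per-state IDM
# intervals for Markov transition probabilities learned from a state sequence.
from collections import Counter

def transition_intervals(sequence, states, s=2.0):
    """Map (a, b) -> IDM lower/upper probability of the transition a -> b."""
    counts = Counter(zip(sequence, sequence[1:]))  # observed transition pairs
    row_totals = Counter(sequence[:-1])            # visits to each source state
    out = {}
    for a in states:
        n = row_totals[a]
        for b in states:
            c = counts[(a, b)]
            out[(a, b)] = (c / (n + s), (c + s) / (n + s))
    return out

seq = list("AABABBBA")  # toy two-state observation sequence
print(transition_intervals(seq, states="AB")[("A", "B")])
```

    Rarely visited states get wide intervals for all their outgoing transitions, so the imprecision directly signals where the data support weak conclusions only.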

    Learning about a Categorical Latent Variable under Prior Near-Ignorance

    It is well known that complete prior ignorance is not compatible with learning, at least in a coherent theory of (epistemic) uncertainty. What is less widely known is that there is a state similar to full ignorance, which Walley calls near-ignorance, that permits learning to take place. In this paper we provide new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we show, for the case of categorical latent variables (and general manifest variables), that there is a sufficient condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied in the most common statistical problems. Comment: 15 LaTeX pages.

    Imprecise probability models for inference in exponential families

    When considering sampling models described by a distribution from an exponential family, it is possible to create two types of imprecise probability models. One is based on the corresponding conjugate distribution and the other on the corresponding predictive distribution. In this paper, we show how these types of models can be constructed for any (regular, linear, canonical) exponential family, such as the centered normal distribution. To illustrate the possible use of such models, we take a look at credal classification. We show that they are very natural and potentially promising candidates for describing the attributes of a credal classifier, also in the case of continuous attributes.
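
    A toy sketch of the conjugate-distribution idea for one member of the family, the centered normal N(0, sigma^2): take inverse-gamma conjugate priors IG(alpha, beta) and let beta range over an interval, giving interval posterior expectations of the variance. The construction and all numbers below are illustrative assumptions, not the paper's model:

```python
# Illustrative assumption, not the paper's construction: a set of conjugate
# inverse-gamma priors for the variance of centered-normal data.
def posterior_variance_interval(data, alpha=2.0, beta_range=(0.5, 2.0)):
    """Interval posterior expectation of sigma^2 for N(0, sigma^2) data
    under IG(alpha, beta) priors with beta ranging over beta_range."""
    n = len(data)
    t = sum(x * x for x in data)  # sufficient statistic: sum of squares
    # Conjugacy: IG(alpha, beta) prior -> IG(alpha + n/2, beta + t/2) posterior,
    # whose mean is (beta + t/2) / (alpha + n/2 - 1) when alpha + n/2 > 1.
    def post_mean(beta):
        return (beta + t / 2) / (alpha + n / 2 - 1)
    lo, hi = beta_range
    return post_mean(lo), post_mean(hi)

print(posterior_variance_interval([0.5, -1.2, 0.8]))
```

    Because the posterior mean is monotone in beta, the interval is obtained from the two endpoint priors, and it narrows as the sufficient statistic comes to dominate the prior hyperparameters.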