
    Entropic Priors for Discrete Probabilistic Networks and for Mixtures of Gaussians Models

    The ongoing, unprecedented exponential growth of available computing power has radically transformed the methods of statistical inference. What used to be a small minority of statisticians advocating the use of priors and strict adherence to Bayes' theorem is now becoming the norm across disciplines. The evolutionary direction is clear: the trend is towards more realistic, flexible, and complex likelihoods characterized by an ever-increasing number of parameters. This gives the old question, "What should the prior be?", a new central importance in the modern Bayesian theory of inference. Entropic priors provide one answer to the problem of prior selection. The general definition of an entropic prior has existed since 1988, but it was not until 1998 that entropic priors were found to provide a new notion of complete ignorance. This paper re-introduces the family of entropic priors as minimizers of mutual information between the data and the parameters, as in [rodriguez98b], but with a small change and a correction. The general formalism is then applied to two large classes of models: discrete probabilistic networks and univariate finite mixtures of Gaussians. It is also shown how to perform inference by efficiently sampling the corresponding posterior distributions.
    Comment: 24 pages, 3 figures. Presented at MaxEnt2001, APL Johns Hopkins University, August 4-9, 2001. See also http://omega.albany.edu:8008
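    A hedged sketch of the objective the abstract alludes to (standard notation, not taken from the paper): writing p(x|θ) for the likelihood and π(θ) for the prior, the mutual information between data X and parameters Θ is

        \[
          I(X;\Theta) = \int\!\!\int \pi(\theta)\, p(x \mid \theta)\,
            \log \frac{p(x \mid \theta)}{m(x)} \, dx \, d\theta,
          \qquad
          m(x) = \int \pi(\theta)\, p(x \mid \theta)\, d\theta .
        \]

    On this reading, an entropic prior is a π chosen to minimize I(X;Θ) over a suitable family, so that, a priori, the data and the parameters are as uninformative about each other as possible; the "small change and a correction" mentioned in the abstract concern the precise form of this variational problem.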

    On the Correspondence between Display Postulates and Deep Inference in Nested Sequent Calculi for Tense Logics

    We consider two styles of proof calculi for a family of tense logics, presented in a formalism based on nested sequents. A nested sequent can be seen as a tree of traditional single-sided sequents. Our first style of calculi is what we call "shallow calculi", where inference rules are applied only at the root node of a nested sequent. Our shallow calculi are extensions of Kashima's calculus for tense logic and share an essential characteristic with display calculi, namely, the presence of structural rules called "display postulates". Shallow calculi enjoy a simple cut elimination procedure, but are unsuitable for proof search due to the presence of display postulates and other structural rules. The second style of calculi uses deep inference, whereby inference rules can be applied at any node in a nested sequent. We show that, for a range of extensions of tense logic, the two styles of calculi are equivalent, and there is a natural proof-theoretic correspondence between display postulates and deep inference. The deep inference calculi enjoy the subformula property and have no display postulates or other structural rules, making them a better framework for proof search.
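    A minimal sketch in Python (my own illustration, not from the paper) of the structure the abstract describes: a nested sequent as a tree of one-sided sequents, with shallow inference restricted to the root and deep inference permitted at any node.

        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class NestedSequent:
            # A node carries a one-sided sequent (formulas, here plain strings)
            # together with child nested sequents -- i.e. a tree of sequents.
            formulas: List[str]
            children: List["NestedSequent"] = field(default_factory=list)

        # A rule rewrites the sequent at the node where it is applied.
        Rule = Callable[[NestedSequent], NestedSequent]

        def apply_shallow(rule: Rule, root: NestedSequent) -> NestedSequent:
            # Shallow calculi: inference rules fire only at the root node.
            return rule(root)

        def apply_deep(rule: Rule, node: NestedSequent,
                       path: List[int]) -> NestedSequent:
            # Deep inference: a rule may fire at any node, addressed here
            # by a path of child indices descending from the root.
            if not path:
                return rule(node)
            head, rest = path[0], path[1:]
            children = list(node.children)
            children[head] = apply_deep(rule, children[head], rest)
            return NestedSequent(node.formulas, children)

    Intuitively, display postulates let a shallow calculus re-root the tree so that the node of interest becomes the root, whereas deep inference reaches it in place; making this correspondence precise is the subject of the paper.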

    Buddhist Philosophy of Logic

    Logic in Buddhist Philosophy concerns the systematic study of anumāna (often translated as inference) as developed by Dignāga (480-540 c.e.) and Dharmakīrti (600-660 c.e.). Buddhist logicians think of inference as an instrument of knowledge (pramāṇa) and, thus, logic is considered to constitute part of epistemology in the Buddhist tradition. According to the prevalent 20th and early 21st century ‘Western’ conception of logic, however, logical study is the formal study of arguments. If we understand the nature of logic to be formal, it is difficult to see what bearing logic has on knowledge. In this paper, by weaving together the main threads of thought that are salient in Dignāga’s and Dharmakīrti’s texts, I shall re-conceive the nature of logic in the context of epistemology and demarcate the part of epistemology which can be recognised as logic. I shall demonstrate that we can recognise the logical significance of inference as understood by Buddhist logicians despite the fact that its logical significance lies within the context of knowledge.

    The problem of evaluating automated large-scale evidence aggregators

    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and implicit reasoning involved, for instance, in the packaging of input evidence? In short: what is the optimal degree of ‘automation’? On the positive side, we propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if this would in fact reduce overall accuracy.

    Metaphysical Explanation and the Inference to the Best Explanation (BA thesis)

    Inference to the Best Explanation, roughly put, appeals to the explanatory power of a theory or hypothesis (relative to some data set) as constituting epistemic justification for it. Inference to the Best Explanation (henceforth IBE) is a tool widely employed by all manner of reasoners, from the empirical sciences to ordinary life. Philosophical discussion is no different: explanatory appeals of this kind are commonplace in serious argument. Often enough, the appeal is dialectically blocked, as many of our epistemic peers in philosophy offer reasons to be skeptical of IBE. Our aim with this monograph is to assess one worry that has been raised about this mode of inference: that explanatory power is not truth-conducive. We begin by discussing general features of inferences and then formulating IBE in detail. Afterward, we explicate and apply a canonical understanding of what an explanation is, which leads to a particular understanding of explanatory power. We then undertake a case study to defend the thesis that this kind of explanatory power is indeed epistemically irrelevant, unless, perhaps, it is combined with other theoretical virtues. Our conclusion is that measuring which explanations are best requires taking other theoretical virtues into account, such as simplicity and unification. In that case, a complete assessment of IBE requires examining if, when, and how these alleged theoretical virtues are indeed truth-conducive.