
    Characterizing the principle of minimum cross-entropy within a conditional-logical framework

    The principle of minimum cross-entropy (ME-principle) is often used as an elegant and powerful tool to build up complete probability distributions when only partial knowledge is available. The inputs it may be applied to are a prior distribution P and some new information R, and it yields as a result the one distribution P∗ that satisfies R and is closest to P in an information-theoretic sense. More generally, it provides a “best” solution to the problem “How to adjust P to R?” In this paper, we show how probabilistic conditionals allow a new and constructive approach to this important principle. Though popular and widely used for knowledge representation, conditionals quantified by probabilities are not easily dealt with. We develop four principles that describe their handling in a reasonable and consistent way, taking into consideration the conditional-logical as well as the numerical and probabilistic aspects. Finally, the ME-principle turns out to be the only method for adjusting a prior distribution to new conditional information that obeys all these principles. Thus a characterization of the ME-principle within a conditional-logical framework is achieved, and its implicit logical mechanisms are revealed clearly.
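
    To make the adjustment problem concrete, the following is a minimal sketch (not the paper's own construction) that computes P∗ for a toy two-atom vocabulary by numerically minimising cross-entropy subject to a single conditional constraint; the prior, the worlds, and the constraint value 0.9 are illustrative assumptions.

```python
# Minimal sketch of the ME-principle over four possible worlds for atoms {a, b}.
# The prior P and the conditional constraint are illustrative, not the paper's.
import numpy as np
from scipy.optimize import minimize

# Worlds in order: (a&b, a&~b, ~a&b, ~a&~b). Prior P is an arbitrary example.
P = np.array([0.3, 0.3, 0.2, 0.2])

# New conditional information R: "P*(b|a) = 0.9".
target = 0.9

def cross_entropy(Q):
    # KL divergence of Q relative to P; the quantity the ME-principle minimises.
    Q = np.clip(Q, 1e-12, 1.0)
    return float(np.sum(Q * np.log(Q / P)))

constraints = [
    {"type": "eq", "fun": lambda Q: Q.sum() - 1.0},            # normalisation
    {"type": "eq",                                             # Q(b|a) = 0.9
     "fun": lambda Q: Q[0] - target * (Q[0] + Q[1])},
]

res = minimize(cross_entropy, P, method="SLSQP",
               bounds=[(0, 1)] * 4, constraints=constraints)
print(res.x)  # the unique ME-adjusted distribution P*
```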

    Default reasoning using maximum entropy and variable strength defaults

    The thesis presents a computational model for reasoning with partial information which uses default rules, or information about what normally happens. The idea is to provide a means of filling the gaps in an incomplete world view with the most plausible assumptions, while allowing for the retraction of conclusions should they subsequently turn out to be incorrect. The model can be used both to reason from a given knowledge base of default rules, and to aid in the construction of such knowledge bases by allowing their designer to compare the consequences of his design with his own default assumptions. The conclusions supported by the proposed model are justified by the use of a probabilistic semantics for default rules in conjunction with the application of a rational means of inference from incomplete knowledge: the principle of maximum entropy (ME). The thesis develops both the theory and algorithms for the ME approach and argues that it should be considered as a general theory of default reasoning. The argument supporting the thesis has two main threads. Firstly, the ME approach is tested on the benchmark examples required of nonmonotonic behaviour, and it is found to handle them appropriately. Moreover, these patterns of commonsense reasoning emerge as consequences of the chosen semantics rather than being design features. It is argued that this makes the ME approach more objective, and its conclusions more justifiable, than other default systems. Secondly, the ME approach is compared with two existing systems: the lexicographic approach (LEX) and system Z+. It is shown that the former can be equated with ME under suitable conditions, making it strictly less expressive, while the latter is too crude to perform the subtle resolution of default conflict which the ME approach allows. Finally, a program called DRS is described which implements all systems discussed in the thesis and provides a tool for testing their behaviours. Engineering and Physical Sciences Research Council (EPSRC).
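
    As an illustration of the ME approach to defaults, here is a minimal sketch (not the thesis's DRS program) that reads variable-strength defaults as conditional-probability constraints and maximises entropy over a toy bird/penguin/fly language; the strengths 0.99 and the query are illustrative assumptions.

```python
# Maximum-entropy default reasoning sketch: defaults such as "birds fly" are
# read as conditional-probability constraints with an illustrative strength.
import itertools
import numpy as np
from scipy.optimize import minimize

worlds = list(itertools.product([0, 1], repeat=3))  # (bird, penguin, fly)

def cond(Q, event, given):
    # Conditional probability of `event` given `given` under distribution Q.
    num = sum(q for q, w in zip(Q, worlds) if event(w) and given(w))
    den = sum(q for q, w in zip(Q, worlds) if given(w))
    return num / max(den, 1e-12)

def neg_entropy(Q):
    # Maximising entropy = minimising this (ME with a uniform prior).
    Q = np.clip(Q, 1e-12, 1.0)
    return float(np.sum(Q * np.log(Q)))

cons = [
    {"type": "eq", "fun": lambda Q: Q.sum() - 1.0},
    # birds normally fly (strength 0.99)
    {"type": "eq", "fun": lambda Q:
        cond(Q, lambda w: w[2] == 1, lambda w: w[0] == 1) - 0.99},
    # penguins are birds
    {"type": "eq", "fun": lambda Q:
        cond(Q, lambda w: w[0] == 1, lambda w: w[1] == 1) - 1.0},
    # penguins normally don't fly (strength 0.99)
    {"type": "eq", "fun": lambda Q:
        cond(Q, lambda w: w[2] == 0, lambda w: w[1] == 1) - 0.99},
]

x0 = np.full(8, 1 / 8)
res = minimize(neg_entropy, x0, method="SLSQP",
               bounds=[(0, 1)] * 8, constraints=cons)
# The more specific default wins: P(fly | penguin) comes out near 0.01.
print(cond(res.x, lambda w: w[2] == 1, lambda w: w[1] == 1))
```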

    A Stalnakerian Analysis of Metafictive Statements

    Because Stalnaker’s common ground framework is focussed on cooperative information exchange, fictional discourse is challenging to model within it. To address this, I develop an extension of Stalnaker’s analysis of assertion that adds a temporary workspace to the common ground. I argue that my framework models metafictive discourse better than competing approaches that are based on adding unofficial common grounds.
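
    The proposed mechanism can be sketched as follows, assuming propositions are modelled as sets of possible worlds and assertion as intersection; the class and method names are illustrative, not the paper's own formalism.

```python
# Sketch of a Stalnakerian context extended with a temporary workspace.
class Context:
    def __init__(self, worlds):
        self.common_ground = set(worlds)  # worlds compatible with shared info
        self.workspace = None             # temporary workspace for fiction

    def assert_(self, proposition):
        # Ordinary assertion narrows the common ground (Stalnaker's update).
        self.common_ground &= proposition

    def enter_fiction(self):
        # Open a temporary workspace seeded from the common ground.
        self.workspace = set(self.common_ground)

    def fictive_assert(self, proposition):
        # Fictive/metafictive content updates only the workspace,
        # leaving the official common ground untouched.
        self.workspace &= proposition

    def exit_fiction(self):
        self.workspace = None

# Usage: asserting {0, 1} narrows the common ground, while a fictive
# assertion of {0} narrows only the workspace.
ctx = Context({0, 1, 2, 3})
ctx.assert_({0, 1})
ctx.enter_fiction()
ctx.fictive_assert({0})
print(ctx.common_ground, ctx.workspace)  # {0, 1} {0}
```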

    Making Ranking Theory Useful for Psychology of Reasoning

    Get PDF
    An organizing theme of the dissertation is the issue of how to make philosophical theories useful for scientific purposes. An argument is presented for the contention that it does not suffice merely to theoretically motivate one’s theories and make them compatible with existing data; philosophers with this aim should ideally contribute to identifying unique and hard-to-vary predictions of their theories. This methodological recommendation is applied to the ranking-theoretic approach to conditionals, which emphasizes the epistemic relevance and the expression of reason relations as part of the semantics of the natural language conditional. As a first step, this approach is theoretically motivated in a comparative discussion of other alternatives in psychology of reasoning, such as the suppositional theory of conditionals, and novel approaches to the problems of compositionality and of accounting for the objective purport of indicative conditionals are presented. In a second step, a formal model is formulated which allows us to derive quantitative predictions from the ranking-theoretic approach, and it is investigated which novel avenues of empirical research this model opens up. Finally, a treatment is given of the problem of logical omniscience as it concerns the issue of whether ranking theory (and other similar approaches) makes too idealized assumptions about rationality to allow for interesting applications in psychology of reasoning. Building on the work of Robert Brandom, a novel solution to this problem is presented, which both opens up new perspectives in psychology of reasoning and appears capable of satisfying a range of constraints on bridge principles between logic and norms of reasoning that would otherwise stand in tension.
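
    The formal backbone of the approach, Spohn-style ranking functions, can be sketched as follows; the example ranks and the simple acceptance condition (a conditional "if A then B" is accepted when ¬B is positively disbelieved given A) are illustrative, not the dissertation's full quantitative model.

```python
# Sketch of ranking functions: kappa assigns each world a degree of disbelief.
INF = float("inf")

def rank(kappa, prop):
    # kappa(A) = the minimum rank of any world in A (INF for the empty set).
    return min((kappa[w] for w in prop), default=INF)

def cond_rank(kappa, b, a):
    # Conditional rank: kappa(B | A) = kappa(A & B) - kappa(A).
    return rank(kappa, a & b) - rank(kappa, a)

def accepts(kappa, a, b):
    # "If A then B" is accepted iff ~B is disbelieved given A.
    worlds = set(kappa)
    return cond_rank(kappa, worlds - b, a) > 0

# Worlds over (bird, flies); disbelief grows with abnormality.
kappa = {("b", "f"): 0, ("b", "-f"): 1, ("-b", "f"): 1, ("-b", "-f"): 0}
bird = {w for w in kappa if w[0] == "b"}
flies = {w for w in kappa if w[1] == "f"}
print(accepts(kappa, bird, flies))  # True: "if bird then flies" is accepted
```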

    Pseudo-contractions as Gentle Repairs

    Get PDF
    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
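
    The shared idea of both families can be sketched as follows, assuming propositional sentences modelled as Python predicates over valuations; the knowledge base and the particular weakening step are illustrative assumptions, not the paper's formal operators.

```python
# Sketch: weaken a sentence instead of deleting it, so the unwanted
# consequence no longer follows while more knowledge is preserved.
import itertools

ATOMS = ["p", "q", "r"]

def entails(kb, phi):
    # kb |= phi iff phi holds in every valuation satisfying all of kb.
    for bits in itertools.product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(s(v) for s in kb) and not phi(v):
            return False
    return True

# KB = { p & r,  p -> q }; unwanted consequence: q.
s1 = lambda v: v["p"] and v["r"]
s2 = lambda v: (not v["p"]) or v["q"]
kb = [s1, s2]
phi = lambda v: v["q"]
print(entails(kb, phi))        # True: q is entailed and unwanted

# Gentle repair / pseudo-contraction: instead of deleting s1 outright,
# replace it with the strictly weaker sentence r (p & r entails r).
weakened = lambda v: v["r"]
repaired = [weakened, s2]
print(entails(repaired, phi))  # False: q no longer follows, yet r survives
```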
