
    Reasoning with uncertainty using Nilsson's probabilistic logic and the maximum entropy formalism

    An expert system must reason with both certain and uncertain information. This thesis is concerned with the process of Reasoning with Uncertainty. Nilsson's elegant model of "Probabilistic Logic" has been chosen as the framework for this investigation, and the information-theoretic maximum entropy formalism as the inference engine. These two formalisms, although semantically compelling, pose major complexity problems for the implementor. Probabilistic Logic models the complete uncertainty space, and the maximum entropy formalism finds the least-commitment probability distribution within that space. The main finding of this thesis is that Nilsson's Probabilistic Logic can be successfully developed beyond the structure he proposed. Some deficiencies in Nilsson's model have been uncovered in the area of probabilistic representation, making Probabilistic Logic less powerful than Bayesian inference techniques. These deficiencies are examined, and a new model of entailment is presented which overcomes them, giving Probabilistic Logic the full representational power of Bayesian inference. The new model also preserves an important advantage of Nilsson's Probabilistic Logic over Bayesian inference: the ability to use uncertain evidence. Traditionally, the probabilistic solution proposed by the maximum entropy formalism is arrived at by solving non-linear simultaneous equations for the aggregate factors of the non-linear terms. In the new model the maximum entropy algorithms are shown to have the highly desirable property of tractability. Although these problems have been solved for probabilistic entailment, problems of complexity remain prevalent in large databases of expert rules. This thesis also considers the use of heuristics and meta-level reasoning in a complex knowledge base. Finally, a description of an expert system using these techniques is given.
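A rough illustration of the two ingredients this abstract combines: Nilsson-style probability constraints over truth-value worlds, with the maximum-entropy distribution found by iterative proportional fitting. The propositions, constraint values, and the use of IPF here are illustrative assumptions, not taken from the thesis itself.

```python
from itertools import product

def maxent_ipf(props, constraints, iters=500):
    """Iterative proportional fitting toward the maximum-entropy
    distribution over truth-value worlds, subject to constraints
    of the form P(event) = target (assumed mutually consistent)."""
    worlds = [dict(zip(props, vals))
              for vals in product([False, True], repeat=len(props))]
    p = [1.0 / len(worlds)] * len(worlds)
    for _ in range(iters):
        for event, target in constraints:
            mass = sum(pi for pi, w in zip(p, worlds) if event(w))
            # Rescale worlds inside/outside the event to hit the target mass.
            p = [pi * (target / mass if event(w) else (1 - target) / (1 - mass))
                 for pi, w in zip(p, worlds)]
    return worlds, p

# Two Nilsson-style sentences: P(A) = 0.7 and P(A -> B) = 0.9.
worlds, p = maxent_ipf(
    ["A", "B"],
    [(lambda w: w["A"], 0.7),
     (lambda w: (not w["A"]) or w["B"], 0.9)],
)
p_b = sum(pi for pi, w in zip(p, worlds) if w["B"])
```

With these two constraints the world `A ∧ ¬B` is pinned to probability 0.1, and maximum entropy splits the remaining slack evenly, so the formalism returns a point value for P(B) where pure probabilistic entailment would only give an interval.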

    Heuristic assignment of CPDs for probabilistic inference in junction trees

    Much research has been done on the efficient computation of probabilistic queries posed to Bayesian networks (BNs). One of the popular architectures for exact inference on BNs is the Junction Tree (JT) based architecture. Among the different architectures developed, HUGIN is the most efficient JT-based architecture. The Global Propagation (GP) method used in the HUGIN architecture is arguably one of the best methods for probabilistic inference in BNs. Before propagation, initialization is performed to obtain the potential for each cluster in the JT. Then, with the GP method, each cluster potential becomes the cluster marginal through message passing with its neighboring clusters. Many researchers have proposed improvements to make this message propagation more efficient. Still, the GP method can be very slow for dense networks. As BNs are applied to larger, more complex, and more realistic applications, developing more efficient inference algorithms has become increasingly important. Towards this goal, in this paper we present some heuristics for initialization that avoid unnecessary message passing among clusters of the JT, improving the performance of the architecture by passing fewer messages.
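The HUGIN-style propagation this abstract refers to can be sketched on the smallest possible junction tree: two cliques joined by one separator. The chain network and CPD numbers below are toy assumptions for illustration, not taken from the paper.

```python
# Chain BN A -> B -> C; junction tree: clique {A,B} -- separator [B] -- clique {B,C}.
# Toy CPDs (binary variables, keys are value tuples).
P_A   = {0: 0.6, 1: 0.4}
P_B_A = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}  # (a, b) -> P(B=b|A=a)
P_C_B = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}  # (b, c) -> P(C=c|B=b)

# Initialization: multiply each CPD into one clique containing its family.
phi_AB = {(a, b): P_A[a] * P_B_A[(a, b)] for a in (0, 1) for b in (0, 1)}
phi_BC = dict(P_C_B)          # holds P(C|B) for now
sep_B  = {0: 1.0, 1: 1.0}     # separator potential

# Collect: {A,B} absorbs into {B,C} via the separator.
new_sep = {b: phi_AB[(0, b)] + phi_AB[(1, b)] for b in (0, 1)}
phi_BC  = {(b, c): phi_BC[(b, c)] * new_sep[b] / sep_B[b]
           for b in (0, 1) for c in (0, 1)}
sep_B   = new_sep

# Distribute: {B,C} absorbs back into {A,B}.
new_sep = {b: phi_BC[(b, 0)] + phi_BC[(b, 1)] for b in (0, 1)}
phi_AB  = {(a, b): phi_AB[(a, b)] * new_sep[b] / sep_B[b]
           for a in (0, 1) for b in (0, 1)}
sep_B   = new_sep

# After both passes each clique potential is the exact joint marginal.
p_C1 = phi_BC[(0, 1)] + phi_BC[(1, 1)]   # P(C = 1)
```

Each absorption multiplies the destination clique by the ratio of the new to the old separator potential; the initialization heuristics the paper proposes aim to reduce how many such messages need to be passed at all.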

    The Universe is not a Computer

    When we want to predict the future, we compute it from what we know about the present. Specifically, we take a mathematical representation of observed reality, plug it into some dynamical equations, and then map the time-evolved result back to real-world predictions. But while this computational process can tell us what we want to know, we have taken this procedure too literally, implicitly assuming that the universe must compute itself in the same manner. Physical theories that do not follow this computational framework are deemed illogical, right from the start. But this anthropocentric assumption has steered our physical models into an impossible corner, primarily because of quantum phenomena. Meanwhile, we have not been exploring other models in which the universe is not so limited. In fact, some of these alternate models already have a well-established importance, but are thought to be mathematical tricks without physical significance. This essay argues that only by dropping our assumption that the universe is a computer can we fully develop such models, explain quantum phenomena, and understand the workings of our universe. (This essay was awarded third prize in the 2012 FQXi essay contest; a new afterword compares and contrasts this essay with Robert Spekkens' first prize entry.) Comment: 10 pages with new afterword; matches published version.

    Generalized belief change with imprecise probabilities and graphical models

    We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction derive from the relevant work of Chan and Darwiche (2005), who first proved the inter-reducibility of virtual and probabilistic evidence. These forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
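The Chan and Darwiche inter-reducibility mentioned in this abstract can be illustrated numerically in the finite, precise case: virtual evidence with likelihood ratios q(b)/P(b) reproduces Jeffrey's rule exactly. The joint prior and target marginal below are toy assumptions, not data from the paper.

```python
# Joint prior over (A, B); toy numbers for illustration.
P = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.40}

def marg_B(P):
    """Marginal distribution of B under a joint over (A, B)."""
    return {b: sum(p for (a, bb), p in P.items() if bb == b) for b in (0, 1)}

def jeffrey(P, q):
    """Probabilistic ('Jeffrey') evidence: force the new marginal of B to q."""
    Pb = marg_B(P)
    return {(a, b): p * q[b] / Pb[b] for (a, b), p in P.items()}

def virtual(P, lam):
    """Virtual ('Pearl') evidence: multiply in likelihoods lam(b), renormalize."""
    un = {(a, b): p * lam[b] for (a, b), p in P.items()}
    z = sum(un.values())
    return {k: v / z for k, v in un.items()}

q = {0: 0.1, 1: 0.9}                       # desired posterior marginal of B
lam = {b: q[b] / marg_B(P)[b] for b in (0, 1)}  # Chan-Darwiche translation
```

Running `jeffrey(P, q)` and `virtual(P, lam)` with this translation yields the same posterior; this is the precise, unconditional core that the paper extends to imprecise and conditional frameworks.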

    Tuple-Independent Representations of Infinite Probabilistic Databases

    Probabilistic databases (PDBs) are probability spaces over database instances. They provide a framework for handling uncertainty in databases, as arises from data integration, noisy data, data from unreliable sources, or randomized processes. Most of the existing theoretical literature investigates finite, tuple-independent PDBs (TI-PDBs), in which the occurrences of tuples are independent events. Only recently, Grohe and Lindner (PODS '19) introduced independence assumptions for PDBs beyond the finite domain assumption. In the finite setting, a major argument for discussing the theoretical properties of TI-PDBs is that they can be used to represent any finite PDB via views. This is no longer the case once the number of tuples is countably infinite. In this paper, we systematically study the representability of infinite PDBs in terms of TI-PDBs and the related block-independent disjoint PDBs. The central question is which infinite PDBs are representable as first-order views over tuple-independent PDBs. We give a necessary condition for the representability of PDBs and provide a sufficient criterion for representability in terms of the probability distribution of a PDB. With various examples, we explore the limits of our criteria. We show that conditioning on first-order properties yields no additional expressive power. Finally, we discuss the relation between purely logical and arithmetic reasons for (non-)representability.
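For intuition on why tuple independence is such an attractive modelling assumption, here is a minimal sketch of extensional evaluation of a safe existential query over a finite tuple-independent relation. The relation and query are toy assumptions, not examples from the paper.

```python
# A tuple-independent relation: (name, city, marginal probability).
# Each tuple is present independently of all others.
R = [("alice", "nyc", 0.8),
     ("bob",   "nyc", 0.5),
     ("carol", "sf",  0.9)]

def prob_exists(table, pred):
    """P[some tuple satisfying pred is present].
    Under tuple independence this factors as 1 - prod over
    matching tuples t of (1 - p_t)."""
    absent = 1.0
    for name, city, p in table:
        if pred(name, city):
            absent *= (1 - p)
    return 1 - absent

p_nyc = prob_exists(R, lambda name, city: city == "nyc")  # 1 - 0.2 * 0.5 = 0.9
```

It is exactly this factorization, available for every finite PDB via views over a TI-PDB, that the paper shows can fail once the number of tuples is countably infinite.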