Declarative Modeling and Bayesian Inference of Dark Matter Halos
Probabilistic programming allows specification of probabilistic models in a
declarative manner. Recently, several new software systems and languages for
probabilistic programming have been developed on the basis of newly developed
and improved methods for approximate inference in probabilistic models. In this
contribution a probabilistic model for an idealized dark matter localization
problem is described. We first derive the probabilistic model for the inference
of dark matter locations and masses, and then show how this model can be
implemented using BUGS and Infer.NET, two software systems for probabilistic
programming. Finally, the different capabilities of both systems are discussed.
The presented dark matter model mainly involves non-conjugate factors, which makes it difficult to implement in Infer.NET.

Comment: Presented at the Workshop "Intelligent Information Processing", EUROCAST 2013. To appear in selected papers of Computer Aided Systems Theory - EUROCAST 2013; Volume Editors: Roberto Moreno-Díaz, Franz R. Pichler, Alexis Quesada-Arencibia; LNCS, Springer
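The abstract does not reproduce the model itself, but the general shape of such a localization problem can be sketched without any PPL machinery: a hypothetical point-source signal observed with Gaussian noise, and a grid approximation to the posterior over location and mass. The signal form, detector layout, and all numbers below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical setup (not the paper's actual model): a point mass at unknown
# position x0 produces a signal s(x) = m / (1 + (x - x0)^2), observed with
# Gaussian noise at fixed detector positions.
rng = np.random.default_rng(0)
detectors = np.linspace(-5.0, 5.0, 21)
true_x0, true_m, sigma = 1.5, 4.0, 0.3

def signal(x0, m):
    return m / (1.0 + (detectors - x0) ** 2)

data = signal(true_x0, true_m) + rng.normal(0.0, sigma, detectors.shape)

# Declarative part: uniform priors over a grid of (x0, m) hypotheses.
x0_grid = np.linspace(-5.0, 5.0, 201)
m_grid = np.linspace(0.1, 8.0, 160)
X0, M = np.meshgrid(x0_grid, m_grid, indexing="ij")

# Gaussian log-likelihood of the observed data for every hypothesis.
pred = M[..., None] / (1.0 + (detectors - X0[..., None]) ** 2)
loglik = -0.5 * np.sum((data - pred) ** 2, axis=-1) / sigma**2

# Posterior by normalization (the flat prior cancels out).
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Posterior means recover the true location and mass reasonably well.
x0_hat = float(np.sum(post * X0))
m_hat = float(np.sum(post * M))
print(x0_hat, m_hat)
```

A PPL such as BUGS expresses only the declarative part (priors and likelihood) and leaves the inference step to the system; the grid approximation here stands in for that step.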
Applications of Nonclassical Logic Methods for Purposes of Knowledge Discovery and Data Mining
* The work is partially supported by Grant no. NIP917 of the Ministry of Science and Education – Republic of Bulgaria.

Methods for solving a large class of problems on the basis of nonclassical, multiple-valued, and probabilistic logics are discussed. A theory of knowledge about changing knowledge, of defeasible inference, and a network approach to analogical derivation are suggested. A method for regularity search, together with logic-axiomatic and logic-probabilistic methods for the learning of terms and for pattern recognition in the multiple-valued case, is described and generalized. Defeasible analogical inference and new forms of inference using exclusions are considered. The methods are applicable in a broad range of intelligent systems.
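As one concrete instance of the multiple-valued logics mentioned above, a strong Kleene three-valued logic can be written down in a few lines. This is a standard textbook system, used here purely as an illustration, not a method from the paper:

```python
# Strong Kleene three-valued logic: a third value U ("unknown") sits
# between false (F) and true (T). Encoding the values as 0 = F, 0.5 = U,
# 1 = T, the strong Kleene connectives reduce to min/max/complement.
F, U, T = 0.0, 0.5, 1.0

def k_not(a):      # negation: swaps T and F, leaves U fixed
    return 1.0 - a

def k_and(a, b):   # conjunction: minimum of the two values
    return min(a, b)

def k_or(a, b):    # disjunction: maximum of the two values
    return max(a, b)

def k_implies(a, b):   # material implication: (not a) or b
    return k_or(k_not(a), b)

# Classically p -> p is a tautology; in strong Kleene logic it evaluates
# to U when p is U, so the system has no classical tautologies.
print(k_implies(U, U))
```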
World model learning and inference
Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.
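To make the free energy principle concrete, here is a minimal two-state example (the generative model and all numbers are illustrative assumptions, not from the article): the variational free energy F(q) = E_q[log q(s) - log p(o, s)] equals KL(q || p(s|o)) - log p(o), so it is minimized exactly when the belief q matches the Bayesian posterior p(s|o):

```python
import numpy as np

# A hypothetical agent entertains hidden states s in {0, 1} with prior
# p(s), observes o = 1 with likelihood p(o|s), and scores a candidate
# belief q over s by the variational free energy.
prior = np.array([0.7, 0.3])   # p(s)
lik = np.array([0.2, 0.9])     # p(o = 1 | s) for the observed o = 1
joint = prior * lik            # p(o, s)

def free_energy(q0):
    q = np.array([q0, 1.0 - q0])   # belief over the two states
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# Exact Bayesian posterior p(s | o) for comparison.
posterior = joint / joint.sum()

# Minimizing F over a grid of beliefs recovers the posterior.
grid = np.linspace(0.001, 0.999, 999)
best = grid[np.argmin([free_energy(q0) for q0 in grid])]
print(best, posterior[0])
```

The minimum value of F is -log p(o), the (negative log) model evidence, which is why free energy minimization doubles as approximate Bayesian inference.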
A review on probabilistic graphical models in evolutionary computation
Thanks to their inherent properties, probabilistic graphical models are one of the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities, like representation, inference and learning, if used effectively, can greatly help to build intelligent systems that are able to act accordingly in different problem domains. Evolutionary computation is one such discipline that has employed probabilistic graphical models to improve the search for optimal solutions in complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance in solving complex problems. Specifically, we give a survey of probabilistic model building-based evolutionary algorithms, called estimation of distribution algorithms, and compare different methods for probabilistic modeling in these algorithms.
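The idea surveyed here can be illustrated with a minimal univariate estimation of distribution algorithm (a UMDA-style sketch on the OneMax problem; all parameter settings below are illustrative assumptions). Instead of crossover and mutation, each generation fits a probabilistic model (an independent Bernoulli per bit) to the best solutions and samples the next population from it:

```python
import numpy as np

# UMDA on OneMax: maximize the number of ones in a 40-bit string.
rng = np.random.default_rng(1)
n_bits, pop_size, n_select, n_gens = 40, 100, 30, 60

p = np.full(n_bits, 0.5)           # initial model: each bit is 1 w.p. 0.5
for _ in range(n_gens):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    fitness = pop.sum(axis=1)                      # OneMax fitness
    elite = pop[np.argsort(fitness)[-n_select:]]   # truncation selection
    p = elite.mean(axis=0)                         # re-estimate the model
    p = np.clip(p, 0.05, 0.95)     # keep probabilities away from 0 and 1

# Sampling the converged model yields near-optimal strings.
best = int((rng.random((pop_size, n_bits)) < p).astype(int).sum(axis=1).max())
print(best)
```

The univariate model is the simplest case; the surveyed algorithms differ chiefly in how much dependency structure (trees, Bayesian networks) the probabilistic model captures.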
Precise Propagation of Upper and Lower Probability Bounds in System P
In this paper we consider the inference rules of System P in the framework of
coherent imprecise probabilistic assessments. Exploiting our algorithms, we
propagate the lower and upper probability bounds associated with the
conditional assertions of a given knowledge base, automatically obtaining the
precise probability bounds for the derived conclusions of the inference rules.
This allows a more flexible and realistic use of System P in default reasoning
and provides an exact illustration of the degradation of the inference rules
when interpreted in probabilistic terms. We also examine the disjunctive Weak
Rational Monotony of System P+ proposed by Adams in his extended probability
logic.

Comment: 8 pages - 8th Intl. Workshop on Non-Monotonic Reasoning NMR'2000, April 9-11, Breckenridge, Colorado
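As an illustration of the kind of bound propagation described here, consider precise lower bounds for two System P rules. These closed forms are standard results in probabilistic analyses of default reasoning; the paper itself treats the full set of rules with both lower and upper bounds:

```python
# Lower-bound propagation through two System P inference rules.

def and_rule_lower(x, y):
    """AND: from P(B|A) >= x and P(C|A) >= y, infer P(B&C|A) >= max(0, x+y-1)."""
    return max(0.0, x + y - 1.0)

def cut_rule_lower(x, y):
    """CUT: from P(C|A&B) >= x and P(B|A) >= y, infer P(C|A) >= x*y."""
    return x * y

# Degradation of the bounds: two defaults each held with probability at
# least 0.9 yield strictly weaker guarantees for the derived conclusions.
print(and_rule_lower(0.9, 0.9))   # about 0.8, weaker than 0.9
print(cut_rule_lower(0.9, 0.9))   # about 0.81, weaker than 0.9
```

Chaining such rules compounds the degradation, which is exactly the phenomenon the precise propagation makes visible.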
Bayesian Structural Causal Inference with Probabilistic Programming
Reasoning about causal relationships is central to the human experience. This evokes a natural question in our pursuit of human-like artificial intelligence: how might we imbue intelligent systems with similar causal reasoning capabilities? Better yet, how might we imbue intelligent systems with the ability to learn cause and effect relationships from observation and experimentation? Unfortunately, reasoning about cause and effect requires more than just data: it also requires partial knowledge about data generating mechanisms. Given this need, our task then as computational scientists is to design data structures for representing partial causal knowledge, and algorithms for updating that knowledge in light of observations and experiments. In this dissertation, I explore the Bayesian structural approach to causal inference, in which probability distributions over structural causal models are one such data structure, and probabilistic inference in multi-world transformations of those models is the corresponding algorithmic task. Specifically, I demonstrate that this approach has two distinct advantages over the dominant computational paradigm of causal graphical models: (i) it expands the breadth of compatible assumptions; and (ii) it seamlessly integrates with modern Bayesian modeling and inference technologies to facilitate quantification of uncertainty about causal structure and the effects of interventions.
Notably, doing so allows the emerging and powerful technology of probabilistic programming to be brought to bear on a large and diverse set of causal inference problems. In Chapter 3, I present an example-driven pedagogical introduction to the Bayesian structural approach to causal inference, demonstrating how priors over structural causal models induce joint distributions over observed and latent counterfactual random variables, and how the resulting posterior distributions capture common motifs in causal inference. In particular, I show how various assumptions about latent confounding influence our ability to estimate causal effects from data, and I provide examples of common observational and quasi-experimental designs expressed as probabilistic programs. In Chapter 4, I present an advanced application of the Bayesian structural approach for modeling hierarchical relational dependencies with latent confounders, and show how to combine such assumptions with flexible Gaussian process models. In Chapter 5, I present a prototype software implementation for causal inference using probabilistic programming, accommodating a broad class of multi-source observational and experimental data. Finally, in Chapter 6, I present Simulation-Based Identifiability, a gradient-based optimization method for determining whether a differentiable and bounded prior over structural causal models converges to a unique causal conclusion asymptotically.
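The core idea of representing causal knowledge as a probabilistic program can be sketched in a few lines. The model below (a hypothetical smoking/tar/cancer chain with a latent confounder; it is not code from the dissertation) shows the essential move: sampling the program as written yields observational data, while overriding a structural assignment implements an intervention and cuts the confounding path:

```python
import random

# A small structural causal model as a probabilistic program:
# latent u confounds both smoking and cancer; smoking -> tar -> cancer.
def scm(do_smoke=None, rng=random):
    u = rng.random() < 0.5                     # latent confounder
    smoke = (rng.random() < (0.8 if u else 0.2)) if do_smoke is None else do_smoke
    tar = rng.random() < (0.9 if smoke else 0.1)
    cancer = rng.random() < (0.7 if (tar and u) else (0.4 if (tar or u) else 0.1))
    return smoke, cancer

random.seed(0)
n = 200_000

# Observational conditional P(cancer | smoke = 1): biased by the confounder.
obs = [scm() for _ in range(n)]
p_obs = sum(c for s, c in obs if s) / sum(s for s, _ in obs)

# Interventional P(cancer | do(smoke = 1)): the confounding path is cut,
# because do() overrides the structural assignment to smoke.
p_do = sum(c for _, c in (scm(do_smoke=True) for _ in range(n))) / n
print(p_obs, p_do)
```

Here the observational estimate exceeds the interventional one, because smokers are disproportionately drawn from the high-risk value of u; placing priors over the unknown mechanisms and running posterior inference in such programs is the Bayesian structural approach the dissertation develops.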