Simple trees in complex forests: Growing Take The Best by Approximate Bayesian Computation
How can heuristic strategies emerge from smaller building blocks? We propose
Approximate Bayesian Computation as a computational solution to this problem.
As a first proof of concept, we demonstrate how a heuristic decision strategy
such as Take The Best (TTB) can be learned from smaller, probabilistically
updated building blocks. Based on a self-reinforcing sampling scheme, different
building blocks are combined and, over time, tree-like non-compensatory
heuristics emerge. This new algorithm, coined Approximately Bayesian Computed
Take The Best (ABC-TTB), is able to recover a data set that was generated by
TTB, leads to sensible inferences about cue importance and cue directions, can
outperform traditional TTB, and allows one to trade off performance and
computational effort explicitly.
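The Take The Best rule described above can be sketched as follows. This is a minimal illustration of the classic non-compensatory heuristic, not the paper's ABC-TTB learning algorithm; the cue names, validity ordering, and example objects are invented for illustration.

```python
def take_the_best(obj_a, obj_b, cues):
    """Decide between two objects with Take The Best (TTB).

    cues: list of (cue_name, direction) ordered by descending validity;
    direction is +1 if the cue's presence signals a higher criterion value,
    -1 if it signals a lower one.
    Returns 'A', 'B', or 'guess' when no cue discriminates.
    """
    for name, direction in cues:
        a, b = obj_a[name], obj_b[name]
        if a != b:
            # First discriminating cue decides; later cues are
            # ignored, which makes the heuristic non-compensatory.
            return 'A' if (a - b) * direction > 0 else 'B'
    return 'guess'

# Illustrative binary cues, ordered by an assumed validity ranking.
cues = [('capital', +1), ('airport', +1), ('university', +1)]
city_a = {'capital': 1, 'airport': 1, 'university': 1}
city_b = {'capital': 0, 'airport': 1, 'university': 1}
print(take_the_best(city_a, city_b, cues))  # -> 'A'
```

The search through the cue list stops at the first difference, which is what makes the per-decision computational effort explicit and small.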
Are Reasons Causally Relevant for Action? Dharmakīrti and the Embodied Cognition Paradigm
How do mental states come to be about something other than their own operations, and thus serve as a ground for effective action? This paper argues that causation in the mental domain should be understood to function on principles of intelligibility (that is, on principles which make it perfectly intelligible for intentions to have a causal role in initiating behavior) rather than on principles of mechanism (that is, on principles which explain how causation works in the physical domain). The paper considers Dharmakīrti’s kāryānumāna argument (that is, the argument that an inference is sound only when one infers from the effect to the cause and not vice versa), and proposes a naturalized account of reasons. On this account, careful scrutiny of the effect can provide a basis for ascertaining the unique causal totality that is its source, but only for reasoning that is context‐specific.
Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena
Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is
further complicated by many theoretical issues, such as the I-equivalence among
different structures. In this work, we focus on a specific subclass of BNs,
named Suppes-Bayes Causal Networks (SBCNs), which include specific structural
constraints based on Suppes' probabilistic causation to efficiently model
cumulative phenomena. Here we compare the performance, via extensive
simulations, of various state-of-the-art search strategies, such as local
search techniques and Genetic Algorithms, as well as of distinct regularization
methods. The assessment is performed on a large number of simulated datasets
from topologies with distinct levels of complexity, various sample sizes and
different rates of errors in the data. Among the main results, we show that the
introduction of Suppes' constraints dramatically improves the inference
accuracy, by reducing the solution space and providing a temporal ordering on
the variables. We also report on trade-offs among different search techniques
that can be efficiently employed in distinct experimental settings. This
manuscript is an extended version of the paper "Structural Learning of
Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018
International Conference on Computational Science.
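The constraint-based pruning mentioned above can be sketched as follows: Suppes' probabilistic causation keeps a candidate edge c -> e only if c temporally precedes e and raises its probability, P(e | c) > P(e | not c). This is a simplified illustration under assumed binary data and a known temporal ordering; it is not the SBCN inference procedure from the paper.

```python
import numpy as np

def suppes_edges(data, order):
    """Return candidate edges (i, j) satisfying Suppes' constraints.

    data:  binary array of shape (samples, variables).
    order: temporal rank of each variable (lower = earlier).
    An edge i -> j is kept iff order[i] < order[j] (temporal priority)
    and P(x_j = 1 | x_i = 1) > P(x_j = 1 | x_i = 0) (probability raising).
    """
    n = data.shape[1]
    edges = []
    for i in range(n):
        for j in range(n):
            if i == j or order[i] >= order[j]:
                continue  # temporal priority violated
            on = data[data[:, i] == 1, j]   # outcomes of j when i occurred
            off = data[data[:, i] == 0, j]  # outcomes of j when i did not
            if len(on) and len(off) and on.mean() > off.mean():
                edges.append((i, j))
    return edges

# Tiny illustrative dataset: variable 0 tends to precede and raise variable 1.
data = np.array([[1, 1], [1, 1], [0, 0], [0, 0], [1, 1], [0, 1]])
print(suppes_edges(data, order=[0, 1]))  # -> [(0, 1)]
```

Because every surviving edge must respect the temporal ordering, the resulting search space is restricted to DAGs consistent with that ordering, which is the source of the reduction in solution space reported above.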