1,245 research outputs found
Calibrating Generative Models: The Probabilistic Chomsky–Schützenberger Hierarchy
A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case, where the "semi-linear" languages all collapse into the regular languages, using analytic tools adapted from the classical setting we show there is no collapse in the probabilistic hierarchy: more distributions become definable at each level. We also address related issues such as closure under probabilistic conditioning.
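To make the idea of a grammar defining a distribution on unary numerals concrete, here is a minimal sketch (not from the paper): a two-rule probabilistic regular grammar whose output lengths are geometrically distributed.

```python
import random

def sample_unary(p=0.5, rng=random):
    """Sample a unary numeral from a toy probabilistic regular grammar.

    Production rules (illustrative only, not taken from the paper):
        S -> "1" S   with probability p
        S -> ""      with probability 1 - p
    The length of the output string is geometrically distributed, a
    simple example of a probabilistic regular grammar defining a
    distribution on unary notations for the non-negative integers.
    """
    digits = []
    while rng.random() < p:
        digits.append("1")
    return "".join(digits)

# Empirical check with a fixed seed: the mean length should be close
# to the geometric mean p / (1 - p) = 1 for p = 0.5.
rng = random.Random(0)
samples = [len(sample_unary(0.5, rng)) for _ in range(10_000)]
print(sum(samples) / len(samples))
```

Richer levels of the hierarchy would replace the regular rules above with context-free or indexed productions; the sampling loop would then become a recursive derivation.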
On the instrumental value of hypothetical and counterfactual thought.
People often engage in “offline simulation”, considering what would happen if they performed certain actions in the future, or had performed different actions in the past. Prior research shows that these simulations are biased towards actions a person considers to be good—i.e., likely to pay off. We ask whether, and why, this bias might be adaptive. Through computational experiments we compare five agents who differ only in the way they engage in offline simulation, across a variety of different environment types. Broadly speaking, our experiments reveal that simulating actions one already regards as good does in fact confer an advantage in downstream decision making, although this general pattern interacts with features of the environment in important ways. We contrast this bias with alternatives such as simulating actions whose outcomes are instead uncertain.
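The contrast between the two simulation biases can be sketched as a simple selection rule; the agent designs and numbers below are hypothetical, not the paper's five agents.

```python
def choose_simulation_target(q_values, counts, strategy):
    """Pick which action an agent rehearses offline (a minimal sketch).

    'value'       mimics the bias toward actions currently believed good:
                  simulate the action with the highest estimated payoff.
    'uncertainty' targets what the agent knows least about instead:
                  simulate the least-tried action.
    """
    if strategy == "value":
        return max(range(len(q_values)), key=q_values.__getitem__)
    if strategy == "uncertainty":
        return min(range(len(counts)), key=counts.__getitem__)
    raise ValueError(f"unknown strategy: {strategy}")

# Hypothetical estimates for three actions and their visit counts.
q = [0.2, 0.7, 0.5]   # estimated payoffs
n = [3, 10, 8]        # times each action has been tried
print(choose_simulation_target(q, n, "value"))        # action 1: believed best
print(choose_simulation_target(q, n, "uncertainty"))  # action 0: least tried
```

In a full experiment, the chosen action would be rehearsed against a learned model of the environment and the resulting outcome folded back into the estimates; only the selection step is shown here.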
Indicative Conditionals and Dynamic Epistemic Logic
Recent ideas about epistemic modals and indicative conditionals in formal semantics have significant overlap with ideas in modal logic and dynamic epistemic logic. The purpose of this paper is to show how greater interaction between formal semantics and dynamic epistemic logic in this area can be of mutual benefit. In one direction, we show how concepts and tools from modal logic and dynamic epistemic logic can be used to give a simple, complete axiomatization of Yalcin's [16] semantic consequence relation for a language with epistemic modals and indicative conditionals. In the other direction, the formal semantics for indicative conditionals due to Kolodny and MacFarlane [9] gives rise to a new dynamic operator that is very natural from the point of view of dynamic epistemic logic, allowing succinct expression of dependence (as in dependence logic) or supervenience statements. We prove decidability for the logic with epistemic modals and Kolodny and MacFarlane's indicative conditional via a full and faithful computable translation from their logic to the modal logic K45.
Comment: In Proceedings TARK 2017, arXiv:1707.0825
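One information-sensitive reading of the indicative conditional can be sketched in a few lines: update a set of worlds with the antecedent, then check the consequent throughout what remains. This is an illustrative simplification, not the paper's full semantics.

```python
def update(info_state, antecedent):
    """Update an information state (a set of worlds) by keeping only the
    worlds where the antecedent holds, in the style of announcement
    updates from dynamic epistemic logic (illustrative sketch)."""
    return {w for w in info_state if antecedent(w)}

def accepts_indicative(info_state, antecedent, consequent):
    """'If A, B' is accepted at a state iff B holds throughout the state
    updated with A (one natural information-sensitive reading)."""
    return all(consequent(w) for w in update(info_state, antecedent))

# Worlds as (rain, wet) pairs; a hypothetical three-world state.
state = {(True, True), (False, False), (False, True)}
rain = lambda w: w[0]
wet = lambda w: w[1]
print(accepts_indicative(state, rain, wet))  # True: every rain-world is wet
print(accepts_indicative(state, wet, rain))  # False: a wet world without rain survives
```

The update step is what behaves like a dynamic operator: acceptance of the conditional depends on the state the update produces, not on any single world in isolation.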
Logics of Imprecise Comparative Probability
This paper studies connections between two alternatives to the standard probability calculus for representing and reasoning about uncertainty: imprecise probability and comparative probability. The goal is to identify complete logics for reasoning about uncertainty in a comparative probabilistic language whose semantics is given in terms of imprecise probability. Comparative probability operators are interpreted as quantifying over a set of probability measures. Modal and dynamic operators are added for reasoning about epistemic possibility and updating sets of probability measures.
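The interpretation of comparative operators as quantifying over a set of measures can be sketched directly; the worlds and weights below are hypothetical numbers chosen for illustration.

```python
def prob(measure, event):
    """Probability of an event (a set of worlds) under one measure,
    given as a dict mapping worlds to weights."""
    return sum(measure[w] for w in event)

def at_least_as_probable(event_a, event_b, measures):
    """Comparative operator under imprecision (illustrative sketch):
    'A is at least as probable as B' holds iff mu(A) >= mu(B) for
    EVERY measure mu in the representing set."""
    return all(prob(mu, event_a) >= prob(mu, event_b) for mu in measures)

def update_set(measures, event):
    """Dynamic update: condition each measure on the event and drop
    any measure that assigns it probability zero (one standard way
    to update a set of measures)."""
    out = []
    for mu in measures:
        z = prob(mu, event)
        if z > 0:
            out.append({w: (mu[w] / z if w in event else 0.0) for w in mu})
    return out

measures = [{0: 0.5, 1: 0.3, 2: 0.2},
            {0: 0.2, 1: 0.5, 2: 0.3}]
print(at_least_as_probable({0, 1}, {2}, measures))  # True under both measures
print(at_least_as_probable({0}, {1}, measures))     # False: fails under the second
```

Because the comparison must hold under every admissible measure, some pairs of events are simply incomparable, which is exactly the expressive gap that separates imprecise comparative probability from a single probability measure.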
A Topological Perspective on Causal Inference
This paper presents a topological learning-theoretic perspective on causal inference by introducing a series of topologies defined on general spaces of structural causal models (SCMs). As an illustration of the framework we prove a topological causal hierarchy theorem, showing that substantive assumption-free causal inference is possible only in a meager set of SCMs. Thanks to a known correspondence between open sets in the weak topology and statistically verifiable hypotheses, our results show that inductive assumptions sufficient to license valid causal inferences are statistically unverifiable in principle. Similar to no-free-lunch theorems for statistical inference, the present results clarify the inevitability of substantial assumptions for causal inference. An additional benefit of our topological approach is that it easily accommodates SCMs with infinitely many variables. We finally suggest that the framework may be helpful for the positive project of exploring and assessing alternative causal-inductive assumptions.
Comment: NeurIPS 202