Binding bound variables in epistemic contexts
ABSTRACT Quine insisted that the satisfaction of an open modalised formula by an object depends on how that object is described. Kripke's ‘objectual’ interpretation of quantified modal logic, whereby variables are rigid, is commonly thought to avoid these Quinean worries. Yet there remain residual Quinean worries for epistemic modality. Theorists have recently been toying with assignment-shifting treatments of epistemic contexts. On such views an epistemic operator ends up binding all the variables in its scope. One might worry that this yields the undesirable result that any attempt to ‘quantify in’ to an epistemic environment is blocked. If quantifying into the relevant constructions is vacuous, then such views would seem hopelessly misguided and empirically inadequate. But a famous alternative to Kripke's semantics, namely Lewis' counterpart semantics, also faces this worry, since it too treats the boxes and diamonds as assignment-shifting devices. As I'll demonstrate, the mere fact that a variable is bound is no obstacle to binding it. This provides a helpful lesson for those modelling de re epistemic contexts with assignment sensitivity, and perhaps leads the way toward the proper treatment of binding in both metaphysical and epistemic contexts: Kripke for metaphysical modality, Lewis for epistemic modality.
Decidability of quantified propositional intuitionistic logic and S4 on trees
Quantified propositional intuitionistic logic is obtained from propositional
intuitionistic logic by adding quantifiers \forall p, \exists p over
propositions. In the context of Kripke semantics, a proposition is a subset of
the worlds in a model structure which is upward closed. Kremer (1997) has shown
that the quantified propositional intuitionistic logic H\pi+ based on the class
of all partial orders is recursively isomorphic to full second-order logic. He
raised the question of whether the logic resulting from restriction to trees is
axiomatizable. It is shown that it is, in fact, decidable. The methods used can
also be used to establish the decidability of modal S4 with propositional
quantification on similar types of Kripke structures.
Comment: v2, 9 pages, corrections and additions; v1 8 page
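The Kripke-semantic notion the abstract relies on — a proposition as an upward-closed subset of the worlds in a model structure — can be checked directly. The following is a minimal sketch; the three-world chain and the `is_upward_closed` helper are my own toy illustrations, not anything from the paper:

```python
# In Kripke semantics for intuitionistic logic, a proposition is an
# upward-closed set of worlds: if w is in the set and w <= v in the
# partial order, then v must be in the set too.

def is_upward_closed(worlds, leq, prop):
    """Check that `prop` (a subset of `worlds`) is upward closed under `leq`."""
    return all(v in prop
               for w in prop
               for v in worlds
               if leq(w, v))

# Hypothetical three-world chain 0 <= 1 <= 2.
worlds = {0, 1, 2}
leq = lambda w, v: w <= v

print(is_upward_closed(worlds, leq, {1, 2}))  # True: nothing above 1 or 2 is missing
print(is_upward_closed(worlds, leq, {0}))     # False: 0 <= 1, but 1 is not in the set
```

On a tree-shaped order the same check applies branch by branch, which is the class of structures whose propositional quantification the paper shows decidable.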
Induction of Interpretable Possibilistic Logic Theories from Relational Data
The field of Statistical Relational Learning (SRL) is concerned with learning
probabilistic models from relational data. Learned SRL models are typically
represented using some kind of weighted logical formulas, which make them
considerably more interpretable than those obtained by e.g. neural networks. In
practice, however, these models are often still difficult to interpret
correctly, as they can contain many formulas that interact in non-trivial ways
and weights do not always have an intuitive meaning. To address this, we
propose a new SRL method which uses possibilistic logic to encode relational
models. Learned models are then essentially stratified classical theories,
which explicitly encode what can be derived with a given level of certainty.
Compared to Markov Logic Networks (MLNs), our method is faster and produces
considerably more interpretable models.
Comment: Longer version of a paper appearing in IJCAI 201
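The idea of a stratified theory — classical formulas tagged with certainty levels, where a conclusion inherits the largest level whose cut entails it — can be illustrated with a toy propositional check. Everything below (the variables, the weights, the `certainty` helper) is a hypothetical sketch of possibilistic inference, not the paper's learning method:

```python
from itertools import product

# A possibilistic theory is a set of (formula, weight) pairs. A goal is
# derivable with certainty a if the formulas of weight >= a (the "a-cut")
# classically entail it; the certainty of the goal is the largest such a.

VARS = ["bird", "penguin", "flies"]

def models(formulas):
    """All truth assignments over VARS satisfying every formula."""
    return [m for vals in product([False, True], repeat=len(VARS))
            for m in [dict(zip(VARS, vals))]
            if all(f(m) for f in formulas)]

def certainty(theory, goal):
    """Largest weight a such that the a-cut of the theory entails goal."""
    best = 0.0
    for a in sorted({w for _, w in theory}):
        cut = [f for f, w in theory if w >= a]
        if all(goal(m) for m in models(cut)):
            best = max(best, a)
    return best

# Toy stratified theory: a hard rule (penguins are birds), a defeasible
# rule (birds fly, certainty 0.6), and the fact "bird".
theory = [
    (lambda m: not m["penguin"] or m["bird"], 1.0),
    (lambda m: not m["bird"] or m["flies"], 0.6),
    (lambda m: m["bird"], 1.0),
]

print(certainty(theory, lambda m: m["flies"]))  # 0.6: depends on the defeasible rule
print(certainty(theory, lambda m: m["bird"]))   # 1.0: follows from the hard stratum
```

Because higher cuts contain fewer formulas, the strata make explicit what can be derived at each level of certainty, which is what the abstract claims makes such models easier to interpret than MLN weights.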
The Role of Existential Quantification in Scientific Realism
Scientific realism holds that the terms in our scientific theories refer and that we should believe in the existence of their referents. This presupposes a certain understanding of quantification, namely that it is ontologically committing, which I challenge in this paper. I argue that the ontological loading of the quantifiers is smuggled in through restricting the domains of quantification; once the restriction is removed, it becomes clear that quantifiers are ontologically neutral. Without domain restrictions, domains of quantification can include non-existent things, as they do in scientific theorizing. Scientific realism would therefore need to be redefined without presupposing an ontologically committing view of quantification.
Thinking intuitively: the rich (and at times illogical) world of concepts
Intuitive knowledge of the world involves knowing what kinds of things have which properties. We express it in generalities such as “ducks lay eggs”. It contrasts with extensional knowledge about actual individuals in the world, which we express in quantified statements such as “All US Presidents are male”. Reasoning based on this intuitive knowledge, while highly fluent and plausible, may in fact lead us into logical fallacy. Several lines of research point to our conceptual memory as the source of this logical failure. We represent concepts with prototypical properties, judging likelihood and argument strength on the basis of similarity between ideas. Evidence that our minds represent the world in this intuitive way can be seen in a range of phenomena, including how people interpret logical connectives applied to everyday concepts, studies of creativity and emergence in conceptual combination, and demonstrations of the logically inconsistent beliefs that people express in their everyday language.