126 research outputs found
The Logic of Empirical Theories Revisited
Logic and philosophy of science share a long history, though their contacts have gone through ups and downs. This paper is a brief survey of some major themes in logical studies of empirical theories, including links to computer science and current studies of rational agency. The survey contains no new results: we just try to make some things into common knowledge.
Logic, Probability and Action: A Situation Calculus Perspective
The unification of logic and probability is a long-standing concern in AI,
and more generally, in the philosophy of science. In essence, logic provides an
easy way to specify properties that must hold in every possible world, and
probability allows us to further quantify the weight and ratio of the worlds
that must satisfy a property. To that end, numerous developments have been
undertaken, culminating in proposals such as probabilistic relational models.
While this progress has been notable, a general-purpose first-order knowledge
representation language to reason about probabilities and dynamics, including
in continuous settings, is still to emerge. In this paper, we survey recent
results pertaining to the integration of logic, probability and actions in the
situation calculus, which is arguably one of the oldest and most well-known
formalisms. We then explore reduction theorems and programming interfaces for
the language. These results are motivated in the context of cognitive robotics
(as envisioned by Reiter and his colleagues) for the sake of concreteness.
Overall, the advantage of proving results for such a general language is that
it becomes possible to adapt them to any special-purpose fragment, including
but not limited to popular probabilistic relational models.
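The core idea, logic fixing what holds in every possible world and probability weighting the worlds, can be sketched in a few lines of Python. This is only a propositional toy with invented worlds and weights, not the situation calculus itself:

```python
# Toy model: a "possible world" assigns truth values to propositions,
# and each world carries a weight. The probability of a property is the
# normalized weight of the worlds that satisfy it.
worlds = [
    ({"raining": True,  "wet": True},  0.25),
    ({"raining": False, "wet": True},  0.25),
    ({"raining": False, "wet": False}, 0.50),
]

def prob(prop):
    """Probability that a property holds: weight of satisfying worlds."""
    total = sum(w for _, w in worlds)
    return sum(w for world, w in worlds if prop(world)) / total

print(prob(lambda w: w["wet"]))                      # 0.5
print(prob(lambda w: not w["raining"] or w["wet"]))  # 1.0: holds in every world
```

A hard logical constraint is simply a property with probability 1; everything weaker is quantified by the weights on worlds.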
Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
The tension between deduction and induction is perhaps the most fundamental
issue in areas such as philosophy, cognition and artificial intelligence (AI).
The deduction camp concerns itself with questions about the expressiveness of
formal languages for capturing knowledge about the world, together with proof
systems for reasoning from such knowledge bases. The learning camp attempts to
generalize from examples that give partial descriptions of the world. In AI,
historically, these camps have loosely divided the development of the field,
but advances in cross-over areas such as statistical relational learning,
neuro-symbolic systems, and high-level control have illustrated that the
dichotomy is not very constructive, and perhaps even ill-formed. In this
article, we survey work that provides further evidence for the connections
between logic and learning. Our narrative is structured in terms of three
strands: logic versus learning, machine learning for logic, and logic for
machine learning, but naturally, there is considerable overlap. We place an
emphasis on the following "sore" point: there is a common misconception that
logic is for discrete properties, whereas probability theory and machine
learning, more generally, are for continuous properties. We report on results
that challenge this view of the limitations of logic, and expose the role that
logic can play for learning in infinite domains.
A Survey on Knowledge Graphs: Representation, Acquisition and Applications
Human knowledge provides a formal understanding of the world. Knowledge
graphs that represent structural relations between entities have become an
increasingly popular research direction towards cognition and human-level
intelligence. In this survey, we provide a comprehensive review of knowledge
graphs, covering research topics on 1) knowledge graph representation
learning, 2) knowledge acquisition and completion, 3) temporal knowledge graphs,
and 4) knowledge-aware applications, and we summarize recent breakthroughs and
perspective directions to facilitate future research. We propose a full-view
categorization and new taxonomies on these topics. Knowledge graph embedding is
organized from four aspects of representation space, scoring function, encoding
models, and auxiliary information. For knowledge acquisition, especially
knowledge graph completion, embedding methods, path inference, and logical rule
reasoning are reviewed. We further explore several emerging topics, including
meta relational learning, commonsense reasoning, and temporal knowledge graphs.
To facilitate future research on knowledge graphs, we also provide a curated
collection of datasets and open-source libraries for different tasks. Finally,
we offer a thorough outlook on several promising research directions.
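The "scoring function" aspect of knowledge graph embedding can be illustrated with a translational (TransE-style) scorer: a triple (head, relation, tail) is plausible when the head vector translated by the relation vector lands near the tail vector. The 2-d embeddings below are invented purely for illustration; real models learn high-dimensional vectors from data:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def transe_score(head, rel, tail):
    """TransE-style score: negative distance ||h + r - t||.
    Higher (closer to zero) means the triple is more plausible."""
    translated = [h + r for h, r in zip(head, rel)]
    return -dist(translated, tail)

# Invented 2-d embeddings for the triple (Paris, capital_of, France).
paris   = [1.0, 0.0]
capital = [0.0, 1.0]
france  = [1.0, 1.0]   # paris + capital lands exactly here
berlin  = [3.0, -2.0]

print(transe_score(paris, capital, france))  # -0.0: exact translational match
print(transe_score(paris, capital, berlin))  # about -3.61: far less plausible
```

Training then amounts to adjusting the vectors so true triples score near zero and corrupted triples score much lower, which is where the representation space, encoding model, and auxiliary information surveyed above come in.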
Implicitly Learning to Reason in First-Order Logic
We consider the problem of answering queries about formulas of first-order
logic based on background knowledge partially represented explicitly as other
formulas, and partially represented as examples independently drawn from a
fixed probability distribution. PAC semantics, introduced by Valiant, is one
rigorous, general proposal for learning to reason in formal languages: although
weaker than classical entailment, it allows for a powerful model theoretic
framework for answering queries while requiring minimal assumptions about the
form of the distribution in question. To date, however, the most significant
limitation of that approach, and more generally most machine learning
approaches with robustness guarantees, is that the logical language is
ultimately propositional, with finitely many atoms. Indeed, the
theoretical findings on the learning of relational theories in such generality
have been resoundingly negative. This is despite the fact that first-order
logic is widely argued to be most appropriate for representing human knowledge.
In this work, we present a new theoretical approach to robustly learning to
reason in first-order logic, and consider universally quantified clauses over a
countably infinite domain. Our results exploit symmetries exhibited by
constants in the language, and generalize the notion of implicit learnability
to show how queries can be computed against (implicitly) learned first-order
background knowledge. Comment: In Fourth International Workshop on Declarative
Learning Based Programming (DeLBP 2019).
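The flavor of PAC semantics can be caricatured in a few lines: rather than demanding that a query hold in every model of the background knowledge, one accepts it when it holds on all but an epsilon fraction of examples drawn from the distribution. This is a propositional toy with invented data, far simpler than the first-order, infinite-domain setting of the paper:

```python
def pac_valid(examples, query, epsilon):
    """PAC-style acceptance: the query counts as (1 - epsilon)-valid when it
    holds on at least a (1 - epsilon) fraction of the drawn examples."""
    holds = sum(1 for ex in examples if query(ex))
    return holds >= (1 - epsilon) * len(examples)

# 100 invented examples: "bird implies flies" fails in exactly 2 of them.
examples = [{"bird": True, "flies": i >= 2} for i in range(100)]
rule = lambda e: not e["bird"] or e["flies"]

print(pac_valid(examples, rule, 0.05))  # True: holds on 98 of 100 examples
print(pac_valid(examples, rule, 0.01))  # False: 98 of 100 falls short of 99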
A Simple Logic of Functional Dependence
This paper presents a simple decidable logic of functional dependence LFD,
based on an extension of classical propositional logic with dependence atoms
plus dependence quantifiers treated as modalities, within the setting of
generalized assignment semantics for first order logic. The expressive
strength, complete proof calculus and meta-properties of LFD are explored.
Various language extensions are presented as well, up to undecidable
modal-style logics for independence and dynamic logics of changing dependence
models. Finally, more concrete settings for dependence are discussed:
continuous dependence in topological models, linear dependence in vector
spaces, and temporal dependence in dynamical systems and games.Comment: 56 pages. Journal of Philosophical Logic (2021
A Question of Fidelity
A review of Peter Hallward (ed.), Think Again: Alain Badiou and the Future of Philosophy Continuum, London, 2004. ISBN: HB: 0-8264-5906-4, PB: 0-8264-5907-2.The review surveys the contents of Peter Hallward, ed., Think Again: Alain Badiou and the Future of Philosophy and seeks to categorize the various kinds of contributions. It finds that, while some of the essays offer valuable critiques of Badiou's work or else clarify key concepts in productive ways, others challenge aspects of his thought for reasons that are demonstrably spurious. The overall assessment, though, is that the book provides a useful and timely appraisal of Badiou#39;s impact on contemporary philosophy
Logic and Commonsense Reasoning: Lecture Notes
MasterThese are the lecture notes of a course on logic and commonsense reasoning given to master students in philosophy of the University of Rennes 1. N.B.: Some parts of these lectures notes are sometimes largely based on or copied verbatim from publications of other authors. When this is the case, these parts are mentioned at the end of each chapter in the section âFurther readingâ
- âŠ