Ontology Reasoning with Deep Neural Networks
The ability to conduct logical reasoning is a fundamental aspect of
intelligent behavior, and thus an important problem along the way to
human-level artificial intelligence. Traditionally, symbolic logic-based
methods from the field of knowledge representation and reasoning have been used
to equip agents with capabilities that resemble human logical reasoning
qualities. More recently, however, there has been an increasing interest in
using machine learning rather than symbolic logic-based formalisms to tackle
these tasks. In this paper, we employ state-of-the-art methods for training
deep neural networks to devise a novel model that is able to learn how to
effectively perform logical reasoning in the form of basic ontology reasoning.
This is an important and at the same time very natural logical reasoning task,
which is why the presented approach is applicable to a plethora of important
real-world problems. We present the outcomes of several experiments, which show
that our model learned to perform precise ontology reasoning on diverse and
challenging tasks. Furthermore, the suggested approach suffers far less from
the obstacles that hinder logic-based symbolic reasoning and is, at the same
time, surprisingly plausible from a biological point of view.
Robust Computer Algebra, Theorem Proving, and Oracle AI
In the context of superintelligent AI systems, the term "oracle" has two
meanings. One refers to modular systems queried for domain-specific tasks.
Another usage, referring to a class of systems which may be useful for
addressing the value alignment and AI control problems, is a superintelligent
AI system that only answers questions. The aim of this manuscript is to survey
contemporary research problems related to oracles which align with long-term
research goals of AI safety. We examine existing question answering systems and
argue that their high degree of architectural heterogeneity makes them poor
candidates for rigorous analysis as oracles. On the other hand, we identify
computer algebra systems (CASs) as being primitive examples of domain-specific
oracles for mathematics and argue that efforts to integrate computer algebra
systems with theorem provers, systems which have largely been developed
independent of one another, provide a concrete set of problems related to the
notion of provable safety that has emerged in the AI safety community. We
review approaches to interfacing CASs with theorem provers, describe
well-defined architectural deficiencies that have been identified with CASs,
and suggest possible lines of research and practical software projects for
scientists interested in AI safety.Comment: 15 pages, 3 figure
Current and Future Challenges in Knowledge Representation and Reasoning
Knowledge Representation and Reasoning is a central, longstanding, and active
area of Artificial Intelligence. Over the years it has evolved significantly;
more recently it has been challenged and complemented by research in areas such
as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl
Perspectives workshop was held on Knowledge Representation and Reasoning. The
goal of the workshop was to describe the state of the art in the field,
including its relation with other areas, its shortcomings and strengths,
together with recommendations for future progress. We developed this manifesto
based on the presentations, panels, working groups, and discussions that took
place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge
Representation: its origins, goals, milestones, and current foci; its relation
to other disciplines, especially to Artificial Intelligence; and on its
challenges, along with key priorities for the next decade.