212 research outputs found
Preprints of Proceedings of GWAI-92
This is a preprint of the proceedings of the German Workshop on Artificial Intelligence (GWAI) 1992. The final version will appear in the Lecture Notes in Artificial Intelligence series.
Automatic Generation of Personalized Recommendations in eCoaching
This dissertation concerns eCoaching for personalized, real-time lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a healthier lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificial-intelligence algorithms automatically generate meaningful, personalized, and context-aware recommendations for reducing sedentary time. The thesis applies the well-established design science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
Proceedings of the IJCAI-09 Workshop on Nonmonotonic Reasoning, Action and Change
Copyright in each article is held by the authors.
Please contact the authors directly for permission to reprint or use this material in any form for any purpose.

The biennial workshop on Nonmonotonic Reasoning, Action and Change (NRAC) has an active and loyal community. Since its inception in 1995, the workshop has been held seven times in conjunction with IJCAI, and has experienced growing success. We hope to build on this success again in this eighth edition with an interesting and fruitful day of discussion.

The areas of reasoning about action, non-monotonic reasoning and belief revision are among the most active research areas in Knowledge Representation, with rich inter-connections and practical applications including robotics, agent systems, commonsense reasoning and the semantic web. This workshop provides a unique opportunity for researchers from all three fields to be brought together at a single forum with the prime objectives of communicating important recent advances in each field and exchanging ideas. As these fundamental areas mature, it is vital that researchers maintain a dialog through which they can cooperatively explore common links. The goal of this workshop is to work against the natural tendency of such rapidly advancing fields to drift apart into isolated islands of specialization.

This year, we have accepted ten papers authored by a diverse international community. Each paper has been subject to careful peer review on the basis of innovation, significance and relevance to NRAC. The high-quality selection of work could not have been achieved without the invaluable help of the international Program Committee.

A highlight of the workshop will be our invited speaker, Professor Hector Geffner from ICREA and UPF in Barcelona, Spain, discussing representation and inference in modern planning. Hector Geffner is a world leader in planning, reasoning, and knowledge representation; in addition to his many important publications, he is a Fellow of the AAAI, an associate editor of the Journal of Artificial Intelligence Research, and won an ACM Distinguished Dissertation Award in 1990.
Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing? A Structured Review
Advocates for Neuro-Symbolic Artificial Intelligence (NeSy) assert that combining deep learning with symbolic reasoning will lead to stronger AI than either paradigm on its own. As successful as deep learning has been, it is generally accepted that even our best deep learning systems are not very good at abstract reasoning. And since reasoning is inextricably linked to language, it makes intuitive sense that Natural Language Processing (NLP) would be a particularly well-suited candidate for NeSy. We conduct a structured review of studies implementing NeSy for NLP, with the aim of answering the question of whether NeSy is indeed meeting its promises: reasoning, out-of-distribution generalization, interpretability, learning and reasoning from small data, and transferability to new domains. We examine the impact of knowledge representation, such as rules and semantic networks, language structure and relational structure, and whether implicit or explicit reasoning contributes to higher promise scores. We find that systems in which logic is compiled into the neural network satisfy the most NeSy goals, while other factors, such as knowledge representation or type of neural architecture, do not exhibit a clear correlation with goals being met. We find many discrepancies in how reasoning is defined, specifically in relation to human-level reasoning, which impact decisions about model architectures and drive conclusions that are not always consistent across studies. Hence we advocate for a more methodical approach to the application of theories of human reasoning, as well as the development of appropriate benchmarks, which we hope can lead to a better understanding of progress in the field. We make our data and code available on GitHub for further analysis.
Planning while Believing to Know
Over the last few years, the concept of Artificial Intelligence (AI) has become essential in our daily life and in several working scenarios. Among the various branches of AI, automated planning and the study of multi-agent systems are central research fields. This thesis focuses on a combination of these two areas: that is, a specialized kind of planning known as Multi-agent Epistemic Planning. This field of research is concentrated on all those scenarios where agents, reasoning in the space of knowledge/beliefs, try to find a plan to reach a desirable state from a starting one. This requires agents able to reason about their own and others' knowledge/beliefs and, therefore, capable of performing epistemic reasoning. Being aware of the information flows and the others' states of mind is, in fact, a key aspect in several planning situations. That is why developing autonomous agents, which can reason considering the perspectives of their peers, is paramount to model a variety of real-world domains.
The objective of our work is to formalize an environment where a complete characterization of the agents' knowledge/belief interactions and updates is possible. In particular, we achieved such a goal by defining a new action-based language for Multi-agent Epistemic Planning and implementing epistemic planners based on it. These solvers, flexible enough to reason about various domains and different nuances of knowledge/belief update, can provide a solid base for further research on epistemic reasoning or real-world applications.
This dissertation also proposes the design of a more general epistemic planning architecture. This architecture, following famous cognitive theories, tries to emulate some characteristics of the human decision-making process. In particular, we envisioned a system composed of several solving processes, each one with its own trade-off between efficiency and correctness, which are arbitrated by a meta-cognitive module.
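The kind of epistemic reasoning the abstract describes, agents reasoning about their own and others' knowledge, is standardly formalized with Kripke models. The following is a minimal illustrative sketch, not the thesis's actual action language or planner; all names (`KripkeModel`, `knows`, the example worlds) are hypothetical:

```python
# Hypothetical sketch: checking "agent a knows p" in a Kripke model.
# An agent knows p at a world w iff p holds in every world the agent
# cannot distinguish from w.
from typing import Dict, FrozenSet, Set, Tuple

World = str
Agent = str

class KripkeModel:
    def __init__(self,
                 worlds: Set[World],
                 access: Dict[Agent, Set[Tuple[World, World]]],  # (w, v): agent cannot tell w from v
                 valuation: Dict[World, FrozenSet[str]]) -> None:
        self.worlds = worlds
        self.access = access
        self.valuation = valuation

    def holds(self, world: World, prop: str) -> bool:
        # Truth of an atomic proposition at a world.
        return prop in self.valuation[world]

    def knows(self, agent: Agent, world: World, prop: str) -> bool:
        # Worlds the agent considers possible at `world`.
        reachable = {v for (u, v) in self.access[agent] if u == world}
        reachable.add(world)  # reflexivity: knowledge implies truth
        return all(self.holds(v, prop) for v in reachable)
```

For instance, if agent `a` cannot distinguish a world where `p` holds from one where it does not, `a` does not know `p` there, while an agent with no such confusion does.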
Integrating Planning and Learning for Agents Acting in Unknown Environments
An Artificial Intelligence (AI) agent acting in an environment can perceive the environment through sensors and execute actions through actuators. Symbolic planning provides an agent with decision-making capabilities about the actions to execute for accomplishing tasks in the environment. For applying symbolic planning, an agent needs to know its symbolic state and an abstract model of the environment dynamics. However, in the real world, an agent has low-level perceptions of the environment (e.g. its position given by a GPS sensor), rather than symbolic observations representing its current state. Furthermore, in many real-world scenarios, it is not feasible to provide an agent with a complete and correct model of the environment, e.g. when the environment is unknown a priori. The gap between the high-level representations suitable for symbolic planning and the low-level sensors and actuators available in a real-world agent can be bridged by integrating learning, planning, and acting. Firstly, an agent has to map its continuous perceptions into its current symbolic state, e.g. by detecting the set of objects and their properties from an RGB image provided by an onboard camera. Afterward, the agent has to build a model of the environment by interacting with it and observing the effects of the executed actions. Finally, the agent has to plan on the learned environment model and execute the symbolic actions through its actuators.

We propose an architecture that integrates learning, planning, and acting. Our approach combines data-driven learning methods for building an environment model online with symbolic planning techniques for reasoning on the learned model. In particular, we focus on learning the environment model, from either continuous or symbolic observations, assuming the agent's perceptual input is the complete and correct state of the environment and the agent is able to execute symbolic actions in the environment. Afterward, we assume a partial model of the environment and the capability of mapping perceptions into noisy and incomplete symbolic states are given, and the agent has to exploit the environment model and its perception capabilities to perform tasks in unknown and partially observable environments. Then, we tackle the problem of online learning of the mapping between continuous perceptions and symbolic states, assuming the agent is given a partial model of the environment and is able to execute symbolic actions in the real world.

In our approach, we take advantage of learning methods to overcome some of the simplifying assumptions of symbolic planning, such as the full observability of the environment or the need for a correct environment model. Similarly, we take advantage of symbolic planning techniques to enable an agent to autonomously gather relevant information online, which is necessary for data-driven learning methods. We experimentally show the effectiveness of our approach in simulated and complex environments, outperforming state-of-the-art methods. Finally, we empirically demonstrate the applicability of our approach in real environments by conducting experiments on a real robot.
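The learn-then-plan cycle described in this abstract can be illustrated with a minimal sketch: a transition model learned from observed (state, action, successor) triples, and a breadth-first planner over that learned model. This is an illustrative toy, not the authors' architecture; all names (`EnvironmentModel`, `plan`, the example propositions) are assumptions for the example:

```python
# Hypothetical sketch: learn a symbolic action model online, then plan on it.
from typing import Dict, FrozenSet, List, Optional, Tuple

State = FrozenSet[str]  # a symbolic state is a set of true propositions
Action = str

class EnvironmentModel:
    """Action model learned from observed effects of executed actions."""
    def __init__(self) -> None:
        self.transitions: Dict[Tuple[State, Action], State] = {}

    def observe(self, s: State, a: Action, s_next: State) -> None:
        # Record the observed effect of executing `a` in state `s`.
        self.transitions[(s, a)] = s_next

    def successors(self, s: State) -> List[Tuple[Action, State]]:
        return [(a, t) for (s0, a), t in self.transitions.items() if s0 == s]

def plan(model: EnvironmentModel, start: State,
         goal: FrozenSet[str]) -> Optional[List[Action]]:
    """Breadth-first search on the learned model for a state satisfying `goal`."""
    frontier: List[Tuple[State, List[Action]]] = [(start, [])]
    visited = {start}
    while frontier:
        s, path = frontier.pop(0)
        if goal <= s:          # goal propositions all hold in s
            return path
        for a, t in model.successors(s):
            if t not in visited:
                visited.add(t)
                frontier.append((t, path + [a]))
    return None                # goal unreachable under the learned model
```

In the full integration the loop would alternate: execute actions (planned or exploratory), feed the observed effects back through `observe`, and replan as the model improves.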
Metalogic and the psychology of reasoning.
The central topic of the thesis is the relationship between logic and the cognitive psychology of reasoning. This topic is treated in large part through a detailed examination of the recent work of P. N. Johnson-Laird, who has elaborated a widely read and influential theory in the field. The thesis is divided into two parts, of which the first is a more general and philosophical coverage of some of the most central issues to be faced in relating psychology to logic, while the second draws upon this as introductory material for a critique of Johnson-Laird's 'Mental Model' theory, particularly as it applies to syllogistic reasoning.

An approach similar to Johnson-Laird's is taken to cognitive psychology, which centrally involves the notion of computation. On this view, a cognitive model presupposes an algorithm which can be seen as specifying the behaviour of a system in ideal conditions. Such behaviour is closely related to the notion of 'competence' in reasoning, and this in turn is often described in terms of logic. Insofar as a logic is taken to specify the competence of reasoners in some domain, it forms a set of conditions on the 'input-output' behaviour of the system, to be accounted for by the algorithm. Cognitive models, however, must also be subjected to empirical test, and indeed are commonly built in a highly empirical manner. A strain can therefore develop between the empirical and the logical pressures on a theory of reasoning.

Cognitive theories thus become entangled in a web of recently much-discussed issues concerning the rationality of human reasoners and the justification of a logic as a normative system. There has been an increased interest in the view that logic is subject to revision and development, in which there is a recognised place for the influence of psychological investigation. It is held, in this thesis, that logic and psychology are revealed by these considerations to be interdetermining in interesting ways, under the general a priori requirement that people are in an important and particular sense rational.

Johnson-Laird's theory is a paradigm case of the sort of cognitive theory dealt with here. It is especially significant in view of the strong claims he makes about its relation to logic, and the role the latter plays in its justification and interpretation. The theory is claimed to be revealing about fundamental issues in semantics and the nature of rationality. These claims are examined in detail, and several crucial ones refuted. Johnson-Laird's models are found to be wanting in the level of empirical support provided, and in their ability to found the considerable structure of explanation they are required to bear. They fail, most importantly, to be distinguishable from certain other kinds of models, at a level of theory where the putative differences are critical.

The conclusion to be drawn is that the difficulties in this field are not yet properly appreciated. Psychological explanation requires a complexity which is hard to reconcile with the clarity and simplicity required for logical insights.
Automated Deduction – CADE 28
This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.