9 research outputs found

    What Is a Decision Problem? Designing Alternatives

    Get PDF
    This paper presents a general framework for the design of alternatives in decision problems. It addresses both how to design alternatives within "known decision spaces" and how to do so within "partially known or unknown decision spaces", and it aims at providing archetypes for the design of algorithms supporting the generation of alternatives.
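    As a rough illustration of the "known decision space" case only (the sketch below and its names are illustrative assumptions, not the paper's, which stays at the level of archetypes), generating alternatives can be as simple as enumerating combinations of attribute values and filtering them through a feasibility predicate:

        from itertools import product

        def generate_alternatives(attributes, feasible):
            """attributes: dict mapping each attribute to its possible values.
            feasible: predicate accepting or rejecting a candidate alternative."""
            names = list(attributes)
            for values in product(*(attributes[n] for n in names)):
                candidate = dict(zip(names, values))
                if feasible(candidate):
                    yield candidate

        # Hypothetical example: transport alternatives under a simple constraint.
        attributes = {"mode": ["train", "plane"], "class": ["economy", "first"]}
        print(list(generate_alternatives(
            attributes,
            feasible=lambda a: not (a["mode"] == "plane" and a["class"] == "first"))))

    A partially known or unknown decision space would instead require extending the attribute set during the process itself, which is the kind of situation the paper's archetypes are meant to cover.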

    Defeasible Logic Programming: An Argumentative Approach

    Full text link
    The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism will be used for deciding between contradictory goals. Queries will be supported by arguments that could be defeated by other arguments. A query q will succeed when there is an argument A for q that is warranted, i.e., the argument A that supports q is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP makes it possible to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing agents' knowledge and for providing an argumentation-based reasoning mechanism to agents. Comment: 43 pages; to appear in the journal Theory and Practice of Logic Programming.
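    The dialectical idea can be made concrete with a small propositional sketch; it is not the DeLP machinery itself (there is no argument comparison criterion and no dialectical tree, and the rule and literal names are invented for the example):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Rule:
            head: str            # conclusion, e.g. "flies"
            body: tuple          # premises, e.g. ("bird",)
            defeasible: bool     # True for weak rules, False for strict ones

        def derivable(goal, facts, rules, strict_only=False):
            """Backward-chain from goal; optionally use only strict rules."""
            if goal in facts:
                return True
            return any(r.head == goal
                       and (not strict_only or not r.defeasible)
                       and all(derivable(b, facts, rules, strict_only) for b in r.body)
                       for r in rules)

        def negate(literal):
            return literal[1:] if literal.startswith("~") else "~" + literal

        def warranted(query, facts, rules):
            """Crude warrant check: strict conclusions always hold; defeasible ones
            hold only if the complementary literal has no support at all."""
            if derivable(query, facts, rules, strict_only=True):
                return True
            return derivable(query, facts, rules) and not derivable(negate(query), facts, rules)

        # Tweety-style example: birds normally fly, but penguins normally do not.
        facts = {"bird", "penguin"}
        rules = [Rule("flies", ("bird",), True), Rule("~flies", ("penguin",), True)]
        print(warranted("flies", facts, rules))   # False: the supporting argument is attacked

    In full DeLP, support for the complement would instead trigger a comparison of the competing arguments and a recursive dialectical analysis of their defeaters.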

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Get PDF
    Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer scientists and non-technical media to identify them with the whole of AI, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect that a revamp of technologies often tagged as “classical AI”, in particular logic-based ones, will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS one and the logic-based one.

    Distributed knowledge bases : A proposal for argumentation-based semantics with cooperation

    Get PDF
    The main objective of this dissertation is to define an argumentation-based negotiation framework for distributed knowledge bases. Knowledge bases are modelled over a multi-agent setting such that each agent has its own knowledge base, which may be independent of, or overlap with, those of the other agents. The minimum requirement for negotiation in a multi-agent setting is that agents should be able to make proposals, which can then either be accepted or rejected. A higher level of sophistication occurs when recipients do not just have the choice of accepting or rejecting proposals, but have the option of making counter-offers to alter aspects of the proposal which are unsatisfactory. An even more elaborate kind of negotiation is argumentation-based. The argumentation metaphor seems adequate for modelling situations where different agents argue in order to determine the meaning of common beliefs. In an argumentation-based negotiation, the agents are able to send justifications or arguments along with (counter-)proposals, indicating why they should be accepted. An argument for an agent's belief is acceptable if the agent can argue successfully against the attacking arguments from other agents. Thus, an agent's beliefs are characterized by the relation between its "internal" arguments supporting its beliefs and the "external" arguments supporting the contradictory beliefs of other agents. So, in a certain sense, argumentative reasoning is based on the "external stability" of acceptable arguments in the multi-agent setting. This dissertation proposes that agents evaluate arguments to reach a consensus about common knowledge, both by proposing arguments and by trying to build opposing arguments against them. Moreover, the proposal deals with incomplete knowledge (i.e., partial arguments), and a cooperation process grants arguments to achieve knowledge completeness. Negotiation of an agent's belief is therefore seen as an argumentation-based process with cooperation; cooperation and argumentation are interlaced processes. Furthermore, each agent Ag has both a set Argue of argumentative agents and a set Cooperate of cooperative agents; every Ag must reach a consensus on its arguments with the agents in Argue, and Ag may ask for arguments from the agents in Cooperate to complete its partial arguments. The proposed argumentation-based negotiation allows the modelling of a hierarchy of knowledge bases, representing, for instance, a business organization or a taxonomy of some subject, and also of a MAS where each agent represents knowledge acquired in a different period of time. Furthermore, any agent in a MAS can be queried regarding the truth value of some belief; the answer depends on which agent the belief is inferred from and on how Argue and Cooperate are specified over the agents in the MAS. Such an answer will, however, always be consistent/paraconsistent with the knowledge bases of the agents involved. This dissertation proposes a declarative and an operational argumentation semantics for an agent's knowledge base and, building on them, a declarative argumentation-based negotiation semantics for a multi-agent setting.
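    A toy sketch of the Argue/Cooperate idea (the class, its fields, and the example facts are illustrative assumptions, not the dissertation's actual semantics): an agent completes its knowledge with facts from its cooperators and accepts a belief only if no agent it argues against supports the contrary belief.

        class Agent:
            def __init__(self, name, facts):
                self.name = name
                self.facts = set(facts)   # the agent's own knowledge base
                self.cooperate = []       # agents asked to complete partial knowledge
                self.argue = []           # agents whose contrary arguments must be survived

            def known(self, literal):
                """A literal is known if it is local or provided by a cooperating agent."""
                return literal in self.facts or any(literal in a.facts for a in self.cooperate)

            def accepts(self, literal):
                """Accept a literal if it is known and no opponent in Argue knows its negation."""
                contrary = literal[1:] if literal.startswith("~") else "~" + literal
                return self.known(literal) and not any(a.known(contrary) for a in self.argue)

        # Hypothetical two-level hierarchy: a branch argues against headquarters.
        branch = Agent("branch", {"discount"})
        hq = Agent("hq", {"~discount"})
        branch.argue.append(hq)
        print(branch.accepts("discount"))   # False: hq supports the contrary belief
        print(hq.accepts("~discount"))      # True: nobody in hq's Argue set opposes it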

    Disjunctive argumentation semantics (DAS) for reasoning over distributed uncertain knowledge.

    Get PDF
    by Benson, Ng Hin Kwong. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 111-117). Abstract also in Chinese. Table of contents:
    Chapter 1: Introduction (1.1 Our approach; 1.2 Organization of the thesis)
    Chapter 2: Logic Programming (2.1 Logic programming in Horn clauses: 2.1.1 Problem with incomplete information, 2.1.2 Problem with inconsistent information, 2.1.3 Problem with indefinite information; 2.2 Logic programming in non-Horn clauses: 2.2.1 Reasoning under incomplete information, 2.2.2 Reasoning under inconsistent information, 2.2.3 Reasoning under indefinite information; 2.3 Coexistence of incomplete, inconsistent and indefinite information; 2.4 Stable semantics; 2.5 Well-founded semantics; 2.6 Chapter summary)
    Chapter 3: Argumentation (3.1 Toulmin's informal argumentation model; 3.2 Rescher's formal argumentation model; 3.3 Argumentation in AI research: 3.3.1 Poole's Logical Framework for Default Reasoning, 3.3.2 Inheritance Reasoning Framework of Touretzky et al., 3.3.3 Pollock's Theory of Defeasible Reasoning, 3.3.4 Dung's Abstract Argumentation Framework, 3.3.5 Lin and Shoham's Argument System, 3.3.6 Vreeswijk's Abstract Argumentation, 3.3.7 Kowalski and Toni's Uniform Argumentation, 3.3.8 John Fox's Qualitative Argumentation, 3.3.9 Thomas Gordon's Pleading Games, 3.3.10 Chris Reed's Persuasive Dialogue, 3.3.11 Ronald Loui's Argument Game, 3.3.12 Verheij's Reason-Based Logics and CumulA, 3.3.13 Prakken's Defeasible Argumentation, 3.3.14 Summary of existing frameworks; 3.4 Chapter summary)
    Chapter 4: Disjunctive Argumentation Semantics I (4.1 Background; 4.2 Definition; 4.3 Conflicts within a KBS; 4.4 Conflicts between KBSs: 4.4.1 Credulous View, 4.4.2 Skeptical View, 4.4.3 Generalized Skeptical View; 4.5 Semantics; 4.6 Dialectical proof theory; 4.7 Relation to existing framework; 4.8 Issue on paraconsistency; 4.9 An illustrative example; 4.10 Chapter summary)
    Chapter 5: Disjunctive Argumentation Semantics II (5.1 Background; 5.2 Definition: 5.2.1 Rules, 5.2.2 Splits; 5.3 Conflicts: 5.3.1 Undercut conflicts, 5.3.2 Rebuttal conflicts, 5.3.3 Thinning conflicts; 5.4 Semantics; 5.5 Relation to existing frameworks; 5.6 Issue on paraconsistency; 5.7 An illustrative example; 5.8 Chapter summary)
    Chapter 6: Evaluation (6.1 Introduction; 6.2 Methodology; 6.3 DAS I: 6.3.1 Inoue's benchmark problems, 6.3.2 Sherlock Holmes' problems; 6.4 DAS II: 6.4.1 Inoue's benchmark problems, 6.4.2 Sherlock Holmes' problem; 6.5 Analysis: 6.5.1 Possible extension; 6.6 Chapter summary)
    Chapter 7: Conclusion (7.0.1 Possible extension of the present work)
    Bibliography
    Appendix A: First Order Logic (FOL)
    Appendix B: DAS-I Proof (B.1 Monotone proof; B.2 Soundness proof; B.3 Completeness proof)
    Appendix C: Sherlock Holmes' Silver Blaze Excerpts (C.1 Double life; C.2 Poison stable boy)

    How to Reason Defeasibly

    No full text
    This paper describes the construction of a general-purpose defeasible reasoner that is complete for first-order logic and provably adequate for the argument-based conception of defeasible reasoning that I have developed elsewhere. Because the set of warranted conclusions for a defeasible reasoner will not generally be recursively enumerable, a defeasible reasoner based upon a rich logic like the predicate calculus cannot function like a traditional theorem prover and simply enumerate the warranted conclusions. An alternative criterion of adequacy called i.d.e.-adequacy is formulated. This criterion takes seriously the idea that defeasible reasoning may involve indefinitely many cycles of retracting and reinstating conclusions. It is shown how to construct a reasoner that, subject to certain realistic assumptions, is provably i.d.e.-adequate. The most recent version of OSCAR implements this system, and examples are given of OSCAR's operation.
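    The flavour of those retract/reinstate cycles can be shown with a toy revision loop over an abstract attack relation; this is only an illustration under simplifying assumptions, not Pollock's i.d.e.-adequate construction (which, as the abstract notes, may cycle indefinitely rather than settle on a fixed point):

        def revise(arguments, attacks, max_cycles=100):
            """attacks: set of (attacker, target) pairs. Every conclusion starts out
            believed; each cycle retracts those attacked by a currently believed
            conclusion, which can later reinstate conclusions retracted earlier.
            Returns the stable belief set, or None if no fixed point is found."""
            believed = set(arguments)
            for _ in range(max_cycles):
                revised = {a for a in arguments
                           if not any(b in believed for (b, t) in attacks if t == a)}
                if revised == believed:
                    return believed
                believed = revised
            return None

        # C defeats B and B defeats A: A is retracted in the first cycle (because of B)
        # and reinstated later (once C has forced the retraction of B).
        print(revise({"A", "B", "C"}, {("B", "A"), ("C", "B")}))   # A and C survive; B does not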

    OSCAR: An Architecture for Generally Intelligent Agents

    No full text
    OSCAR is a fully implemented architecture for a cognitive agent, based largely on the author’s work in philosophy concerning epistemology and practical cognition. The seminal idea is that a generally intelligent agent must be able to function in an environment in which it is ignorant of most matters of fact. The architecture incorporates a general-purpose defeasible reasoner, built on top of an efficient natural deduction reasoner for first-order logic. It is based upon a detailed theory about how the various aspects of epistemic and practical cognition should interact, and many of the details are driven by theoretical results concerning defeasible reasoning. The architecture is easily extensible by changing the set of inference schemes supplied to the reasoner. Existing inference schemes handle many kinds of epistemic cognition, including reasoning from perceptual input, causal reasoning and the frame problem, and reasoning defeasibly about probabilities. Work is underway to implement a system of defeasible decision-theoretic planning.
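    A hypothetical sketch of that extensibility idea (not OSCAR's actual code; the scheme names and belief encoding below are invented): each inference scheme maps the current belief set to new conclusions, and the reasoner simply chains whatever schemes it is given to a fixed point.

        def modus_ponens(beliefs):
            """From an atom p and a conditional ('if', p, q), conclude q."""
            atoms = {b for b in beliefs if isinstance(b, str)}
            return {b[2] for b in beliefs
                    if isinstance(b, tuple) and b[0] == "if" and b[1] in atoms}

        def perception(beliefs):
            """Treat a percept ('sees', p) as defeasible evidence for p (a stand-in
            for reasoning from perceptual input)."""
            return {b[1] for b in beliefs if isinstance(b, tuple) and b[0] == "sees"}

        def reason(beliefs, schemes):
            """Apply every scheme until no scheme adds anything new."""
            beliefs = set(beliefs)
            while True:
                new = set().union(*(s(beliefs) for s in schemes)) - beliefs
                if not new:
                    return beliefs
                beliefs |= new

        kb = {("sees", "red_light"), ("if", "red_light", "stop")}
        print(reason(kb, schemes=[perception, modus_ponens]))   # adds "red_light" and "stop"

    Extending such a reasoner, in the spirit the abstract describes, would then amount to supplying further scheme functions, for example defeasible or probabilistic ones, to the list.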