277 research outputs found

    Argument-based Applications to Knowledge Engineering

    Argumentation is concerned with reasoning in the presence of imperfect information by constructing and weighing up arguments. It is an approach for inconsistency management in which conflict is explored rather than eradicated. This form of reasoning has proved applicable to many problems in knowledge engineering that involve uncertain, incomplete or inconsistent knowledge. This paper concentrates on different issues that can be tackled by automated argumentation systems and highlights important directions in argument-oriented research in knowledge engineering. 1 Introduction One of the assumptions underlying the use of classical methods for representation and reasoning is that the information available is complete, certain and consistent. But often this is not the case. In almost every domain, there will be beliefs that are not categorical; rules that are incomplete, with unknown or implicit conditions; and conclusions that are contradictory. Therefore, we need alternative know..
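The inconsistency management the abstract describes is typically formalized as Dung-style abstract argumentation: conflict is kept in the system as an attack relation between arguments, and acceptability is computed rather than inconsistency being eradicated. A minimal sketch (the argument names and attacks below are hypothetical, not from the paper):

```python
# Hypothetical sketch: grounded semantics for an abstract argumentation
# framework. An argument is accepted if the already-accepted set defends
# it against every attacker.

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function
    F(S) = {a | every attacker of a is attacked by some member of S}."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    accepted = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted)
                   for b in attackers(a))
        }
        if defended == accepted:
            return accepted
        accepted = defended

# Example: a attacks b, b attacks c. Unattacked a is accepted,
# a defeats b, and that reinstates c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # {'a', 'c'}
```

Note that in a mutual attack (a attacks b, b attacks a) the grounded extension accepts neither argument: conflict is explored, not resolved by fiat.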

    Evidentialist Foundationalist Argumentation in Multi-Agent Systems

    This dissertation focuses on the explicit grounding of reasoning in evidence directly sensed from the physical world. Based on evidence from human problem solving and successes, this is a straightforward basis for reasoning: to solve problems in the physical world, the information required for solving them must also come from the physical world. What is less straightforward is how to structure the path from evidence to conclusions. Many approaches have been applied to evidence-based reasoning, including probabilistic graphical models and Dempster-Shafer theory. However, with some exceptions, these traditional approaches are often employed to establish confidence in a single binary conclusion, like whether or not there is a blizzard, rather than developing complex groups of scalar conclusions, like where a blizzard's center is, what area it covers, how strong it is, and what components it has. To form conclusions of the latter kind, we employ and further develop the approach of Computational Argumentation. Specifically, this dissertation develops a novel approach to evidence-based argumentation called Evidentialist Foundationalist Argumentation (EFA). The method is a formal instantiation of the well-established Argumentation Service Platform with Integrated Components (ASPIC) framework. There are two primary approaches to Computational Argumentation. One approach is structured argumentation where arguments are structured with premises, inference rules, conclusions, and arguments based on the conclusions of other arguments, creating a tree-like structure. The other approach is abstract argumentation where arguments interact at a higher level through an attack relation. ASPIC unifies the two approaches. EFA instantiates ASPIC specifically for the purpose of reasoning about physical evidence in the form of sensor data. By restricting ASPIC specifically to sensor data, special philosophical and computational advantages are gained. 
Specifically, all premises in the system (evidence) can be treated as firmly grounded axioms and all arguments' conclusions can be numerically calculated directly from their premises. EFA could be used as the basis for well-justified, transparent reasoning in many domains including engineering, law, business, medicine, politics, and education. To test its utility as a basis for Computational Argumentation, we apply EFA to a Multi-Agent System working in the problem domain of Sensor Webs on the specific problem of Decentralized Sensor Fusion. In the Multi-Agent Decentralized Sensor Fusion problem, groups of individual agents are assigned to sensor stations that are distributed across a geographical area, forming a Sensor Web. The goal of the system is to strategically share sensor readings between agents to increase the accuracy of each individual agent's model of the geophysical sensing situation. For example, if there is a severe storm, a goal may be for each agent to have an accurate model of the storm's heading, severity, and focal points of activity. Also, since the agents are controlling a Sensor Web, another goal is to use communication judiciously so as to use power efficiently. To meet these goals, we design a Multi-Agent System called Investigative Argumentation-based Negotiating Agents (IANA). Agents in IANA use EFA as the basis for establishing arguments to model geophysical situations. Upon gathering evidence in the form of sensor readings, the agents form evidence-based arguments using EFA. The agents systematically compare the conclusions of their arguments to other agents. If the agents sufficiently agree on the geophysical situation, they end communication. If they disagree, then they share the evidence for their conclusions, consuming communication resources with the goal of increasing accuracy. They execute this interaction using a Share on Disagreement (SoD) protocol. 
IANA is evaluated against two other Multi-Agent System approaches on the basis of accuracy and communication costs, using historical real-world weather data. The first approach is all-to-all communication, called the Complete Data Sharing (CDS) approach. In this system, agents share all observations, maximizing accuracy but at a high communication cost. The second approach is based on Kalman Filtering of conclusions and is called the Conclusion Negotiation Only (CNO) approach. In this system, agents do not share any observations, and instead try to infer the geophysical state based only on each other's conclusions. This approach saves communication costs but sacrifices accuracy. The results of these experiments have been statistically analyzed using omega-squared effect sizes produced by ANOVA with p-values < 0.05. The IANA system was found to outperform the CDS system for message cost with high effect sizes. The CDS system outperformed the IANA system for accuracy with only small effect sizes. The IANA system was found to outperform the CNO system for accuracy with mostly high and medium effect sizes. The CNO system outperformed the IANA system for message cost with only small effect sizes. Given these results, the IANA system is preferable for most of the testing scenarios for the problem solved in this dissertation.
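The Share on Disagreement (SoD) exchange described above can be sketched in a few lines: agents first trade cheap conclusions, and only on disagreement do they pay the communication cost of sharing raw evidence. All function names, the toy fusion rule (a mean), and the tolerance threshold are illustrative assumptions, not taken from the IANA implementation.

```python
# Hypothetical SoD-style round between two agents.

def fuse(readings):
    """Toy conclusion: mean of a list of sensor readings."""
    return sum(readings) / len(readings)

def sod_round(evidence_a, evidence_b, tolerance=1.0):
    """Returns both agents' final conclusions and the message count."""
    conc_a, conc_b = fuse(evidence_a), fuse(evidence_b)
    messages = 2  # each agent sends its conclusion
    if abs(conc_a - conc_b) <= tolerance:
        return conc_a, conc_b, messages  # sufficient agreement: stop
    # Disagreement: share raw evidence (one message per reading),
    # then re-fuse over the pooled evidence.
    messages += len(evidence_a) + len(evidence_b)
    pooled = fuse(evidence_a + evidence_b)
    return pooled, pooled, messages

# Agreement case: conclusions 10.0 vs 10.5, no evidence shared.
print(sod_round([10.0, 10.0], [10.5, 10.5]))  # (10.0, 10.5, 2)
# Disagreement case: 10.0 vs 20.0, evidence pooled at extra cost.
print(sod_round([10.0, 10.0], [20.0, 20.0]))  # (15.0, 15.0, 6)
```

The sketch makes the trade-off in the evaluation concrete: CDS corresponds to always taking the expensive branch, CNO to never taking it.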

    Arguing Using Opponent Models

    Peer reviewed. Postprint.

    Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives

    Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are regaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering the areas that already exploit logic-based approaches as well as those that are more likely to adopt them in the future.

    A multi-demand negotiation model based on fuzzy rules elicited via psychological experiments

    This paper proposes a multi-demand negotiation model that takes the effect of human users’ psychological characteristics into consideration. Specifically, in our model each negotiating agent’s preference over its demands can change, according to human users’ attitudes to risk, patience and regret, during the course of a negotiation. The change of preference structures is determined by fuzzy logic rules, which were elicited through our psychological experiments. The applicability of our model is illustrated by using it to solve a problem of political negotiation between two countries. Moreover, we conduct extensive theoretical and empirical analyses to reveal insights into our model. In addition, to compare our model with existing ones, we survey fuzzy-logic-based negotiation and discuss the similarities and differences between our negotiation model and various consensus models.
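A fuzzy rule of the general kind the abstract describes might read "IF risk attitude is HIGH and patience is LOW THEN lower the priority of a secondary demand". The sketch below is purely illustrative: the membership functions, the rule, and the 0.4 scaling factor are invented, not the rules elicited in the paper's experiments.

```python
# Hypothetical Mamdani-style rule firing for a preference shift.

def mu_high(x):
    """Membership of x in 'high' on [0, 1] (linear ramp from 0.5)."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def mu_low(x):
    """Membership of x in 'low' on [0, 1] (linear ramp down to 0.5)."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def preference_shift(risk, patience):
    """Rule strength via min (fuzzy AND), scaling a maximum downward
    shift of 0.4 in the demand's preference weight."""
    strength = min(mu_high(risk), mu_low(patience))
    return -0.4 * strength

# A risk-seeking, impatient user: the demand's weight drops.
print(round(preference_shift(0.9, 0.2), 3))  # -0.24
```

During a negotiation, shifts like this one would be applied to the agent's preference structure between rounds, which is what lets the human user's psychology alter the agent's concessions.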

    Defeasible Argumentation for Cooperative Multi-Agent Planning

    Thesis by compendium. Multi-Agent Systems (MAS), Argumentation and Automated Planning are three lines of investigation within the field of Artificial Intelligence (AI) that have been extensively studied over recent years. A MAS is a system composed of multiple intelligent agents that interact with each other; it is used to solve problems whose solution requires the presence of various functional and autonomous entities, problems that are difficult or impossible for an individual agent to resolve. Argumentation, in turn, refers to the iterative construction and exchange of arguments between a group of agents, with the aim of arguing for or against a particular proposal. In Automated Planning, given an initial state of the world, a goal to achieve, and a set of possible actions, the aim is to build programs that can automatically compute a plan to reach the goal state from the initial state. The main objective of this thesis is to propose a model that combines and integrates these three research lines. More specifically, we consider a MAS as a team of agents with planning and argumentation capabilities. In that sense, given a planning problem with a set of objectives, the (cooperative) agents jointly construct a plan to satisfy the objectives of the problem while they reason defeasibly about the environmental conditions, so as to provide a stronger guarantee of the plan's success at execution time. The planning knowledge is thus used to build a plan, while the agents' beliefs about the impact of unexpected environmental conditions are used to select the plan that is least likely to fail at execution time. The system is therefore intended to return collaborative plans that are more robust and better adapted to the circumstances of the execution environment.
In this thesis, we design, build and evaluate a model of argumentation based on defeasible reasoning for a cooperative multi-agent planning system. The designed system is domain-independent, demonstrating the ability to solve problems in different application contexts. Specifically, the system has been tested in context-sensitive domains such as Ambient Intelligence as well as on problems used in the International Planning Competitions. Pajares Ferrando, S. (2016). Defeasible Argumentation for Cooperative Multi-Agent Planning [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/60159
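The plan-selection idea the thesis describes can be caricatured in a few lines: defeasible rules mark a plan step as threatened ("rain defeasibly implies the outdoor route fails") unless a more specific belief defeats the threat ("rain, but the route is covered"), and agents prefer the plan with the fewest undefeated threats. All rule, plan, and belief names below are hypothetical, invented only to illustrate the mechanism.

```python
# Hypothetical defeasible screening of candidate plans.

def undefeated_threats(plan_steps, threats, defeaters, beliefs):
    """Count threats whose premise holds in `beliefs` and which no
    applicable defeater rebuts."""
    count = 0
    for step in plan_steps:
        for premise, threatened in threats:
            if threatened == step and premise in beliefs:
                # The threat fires unless some defeater for this
                # step also holds.
                if not any(t == step and p in beliefs
                           for p, t in defeaters):
                    count += 1
    return count

threats = [("rain", "outdoor_route"), ("traffic", "highway_leg")]
defeaters = [("covered_walkway", "outdoor_route")]
beliefs = {"rain", "traffic", "covered_walkway"}

plan_a = ["outdoor_route", "meet_client"]
plan_b = ["highway_leg", "meet_client"]
scores = {name: undefeated_threats(steps, threats, defeaters, beliefs)
          for name, steps in [("A", plan_a), ("B", plan_b)]}
best = min(scores, key=scores.get)
print(scores, best)  # {'A': 0, 'B': 1} A
```

Plan A survives because its only threat (rain) is defeated by the covered-walkway belief, which is the sense in which the selected plan is "less likely to fail at execution time".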

    Challenges for a CBR framework for argumentation in open MAS

    Nowadays, Multi-Agent Systems (MAS) are broadening their applications to open environments, where heterogeneous agents can enter the system, form agent organizations and interact. The high dynamism of open MAS gives rise to potential conflicts between agents and thus to a need for a mechanism to reach agreements. Argumentation is a natural way of harmonizing conflicts of opinion that has been applied in many disciplines, such as Case-Based Reasoning (CBR) and MAS. Some approaches that apply CBR to manage argumentation in MAS have been proposed in the literature. These improve agents' argumentation skills by allowing them to reason about and learn from their experiences. In this paper, we review these approaches and identify the current contributions of the CBR methodology in this area. As a result of this work, we propose several open issues that must be taken into consideration to develop a CBR framework that provides the agents of an open MAS with arguing and learning capabilities. This work was partially supported by CONSOLIDER-INGENIO 2010 under grant CSD2007-00022 and by the Spanish government and FEDER funds under the TIN2006-14630-C0301 project. Heras Barberá, SM.; Botti Navarro, VJ.; Julian Inglada, VJ. (2009). Challenges for a CBR framework for argumentation in open MAS. Knowledge Engineering Review, 24(4), 327-352. https://doi.org/10.1017/S0269888909990178
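The CBR cycle that the surveyed frameworks rely on starts by retrieving the past argumentation experience most similar to the current dispute and reusing its outcome. The sketch below illustrates only that retrieval step; the feature encoding and the stored cases are invented for illustration.

```python
# Hypothetical nearest-neighbour retrieval over a tiny case base of
# past argumentation experiences.

def retrieve(case_base, query):
    """Return the stored case with minimal squared distance to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(case_base, key=lambda case: dist(case["features"], query))

case_base = [
    {"features": [0.9, 0.1], "argument": "cite_precedent", "won": True},
    {"features": [0.2, 0.8], "argument": "attack_premise", "won": False},
]

# A new dispute resembling the first stored experience reuses its
# argument (and inherits the knowledge that it previously succeeded).
best = retrieve(case_base, [0.8, 0.2])
print(best["argument"])  # cite_precedent
```

A full framework of the kind the paper calls for would add the revise and retain steps, so that agents in an open MAS keep learning from each new dialogue.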