
    Challenges for a CBR framework for argumentation in open MAS

    Nowadays, Multi-Agent Systems (MAS) are broadening their applications to open environments, where heterogeneous agents can enter the system, form agent organizations and interact. The high dynamism of open MAS gives rise to potential conflicts between agents and thus to a need for a mechanism to reach agreements. Argumentation is a natural way of harmonizing conflicts of opinion that has been applied in many disciplines, such as Case-Based Reasoning (CBR) and MAS. Several approaches that apply CBR to manage argumentation in MAS have been proposed in the literature; they improve agents' argumentation skills by allowing them to reason about and learn from their experiences. In this paper, we review these approaches and identify the current contributions of the CBR methodology in this area. As a result of this work, we propose several open issues that must be taken into consideration to develop a CBR framework that provides the agents of an open MAS with arguing and learning capabilities. This work was partially supported by CONSOLIDER-INGENIO 2010 under grant CSD2007-00022 and by the Spanish government and FEDER funds under project TIN2006-14630-C0301.
    Heras Barberá, S. M., Botti Navarro, V. J., & Julian Inglada, V. J. (2009). Challenges for a CBR framework for argumentation in open MAS. The Knowledge Engineering Review, 24(4), 327-352. https://doi.org/10.1017/S0269888909990178
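    To make concrete what "applying CBR to argumentation" amounts to, the sketch below shows an agent retrieving past argumentation experiences similar to the current dispute. The case structure, similarity measure and data are illustrative assumptions made for this summary, not the representation used by any of the reviewed frameworks.

        from dataclasses import dataclass

        @dataclass
        class ArgumentCase:
            """A past argumentation experience: the dispute context, the argument
            that was put forward, and whether it was finally accepted."""
            context: dict    # domain features describing the dispute
            argument: str    # argument (or argument scheme) that was used
            accepted: bool   # outcome of the dialogue

        def similarity(a: dict, b: dict) -> float:
            """Fraction of shared attribute-value pairs (a deliberately simple metric)."""
            keys = set(a) | set(b)
            return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

        def retrieve(case_base: list, query: dict, k: int = 3) -> list:
            """CBR 'retrieve' step: the k past argument cases most similar to the query."""
            return sorted(case_base, key=lambda c: similarity(c.context, query), reverse=True)[:k]

        # The agent reuses arguments that succeeded in similar past disputes.
        case_base = [
            ArgumentCase({"domain": "resource", "opponent": "seller"}, "cite-precedent-A", True),
            ArgumentCase({"domain": "resource", "opponent": "buyer"}, "counterexample-B", False),
        ]
        print(retrieve(case_base, {"domain": "resource", "opponent": "seller"}, k=1)[0].argument)

    The remaining steps of the CBR cycle (reuse, revise and retain) would then adapt the retrieved argument to the ongoing dialogue and store its outcome as a new case, which is how the reviewed approaches let agents learn from argumentation experience.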

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

    Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics

    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate which opposes pragmatism and essentialism in the regulation of robotics, and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots’ essence are actually put into brackets when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning, and vice versa, both on an epistemological and a social level. This calls for great caution in the recourse to anthropomorphic legal fictions.

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables the provision of powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers leverage the reasoning performance in LogiKEy without further ado. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
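    As a rough illustration of the kind of shallow semantical embedding LogiKEy builds on, the obligation operator of standard deontic logic can be defined in HOL by lifting propositions to predicates over possible worlds. The two definitions below are a minimal sketch of that standard construction, assuming only the simplest deontic logic; the paper itself treats richer deontic logics and their combinations.

        % worlds have type i; lifted propositions have type i -> o
        \mathbf{O}\,\varphi \;\equiv\; \lambda w.\ \forall v.\,\bigl(R\,w\,v \rightarrow \varphi\,v\bigr)   % obligation via the ideality relation R
        \lfloor \varphi \rfloor \;\equiv\; \forall w.\ \varphi\,w                                           % global validity of a lifted formula

    With such definitions in place, an off-the-shelf higher-order theorem prover or model finder can check deontic validities and find countermodels directly, which is what makes the tool support described above possible.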

    Research Priorities for Robust and Beneficial Artificial Intelligence

    Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

    A Labelling Framework for Probabilistic Argumentation

    The combination of argumentation and probability paves the way to new accounts of qualitative and quantitative uncertainty, thereby offering new theoretical and applicative opportunities. Due to a variety of interests, probabilistic argumentation is approached in the literature with different frameworks, pertaining to structured and abstract argumentation, and with respect to diverse types of uncertainty, in particular the uncertainty about the credibility of the premises, the uncertainty about which arguments to consider, and the uncertainty about the acceptance status of arguments or statements. Towards a general framework for probabilistic argumentation, we investigate a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-)abstract account, along with diverse types of uncertainty. Our framework provides a systematic treatment of various kinds of uncertainty and of their relationships, and allows us to back or question assertions from the literature.
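    One of the kinds of uncertainty listed above, uncertainty about which arguments to consider, can be illustrated with a brute-force sketch in the style of the constellations approach: every subframework is weighted by assumed independent inclusion probabilities and evaluated under the grounded labelling. The toy framework, the probabilities and the independence assumption are illustrative choices for this summary, not taken from the paper, which develops a more general labelling-oriented treatment.

        from itertools import combinations

        def grounded_labelling(args, attacks):
            """Grounded labelling of an abstract argumentation framework: an argument
            becomes 'in' once all of its attackers are 'out', 'out' once some attacker
            is 'in', and stays 'undec' otherwise."""
            label = {a: "undec" for a in args}
            changed = True
            while changed:
                changed = False
                for a in args:
                    if label[a] != "undec":
                        continue
                    attackers = {b for (b, t) in attacks if t == a}
                    if all(label[b] == "out" for b in attackers):
                        label[a] = "in"
                        changed = True
                    elif any(label[b] == "in" for b in attackers):
                        label[a] = "out"
                        changed = True
            return label

        def acceptance_probability(target, args, attacks, p):
            """Probability that `target` is labelled 'in', obtained by enumerating every
            subframework and weighting it by independent inclusion probabilities p."""
            args = sorted(args)
            total = 0.0
            for r in range(len(args) + 1):
                for subset in combinations(args, r):
                    present = set(subset)
                    if target not in present:
                        continue
                    weight = 1.0
                    for a in args:
                        weight *= p[a] if a in present else 1.0 - p[a]
                    sub_attacks = {(b, t) for (b, t) in attacks if b in present and t in present}
                    if grounded_labelling(present, sub_attacks)[target] == "in":
                        total += weight
            return total

        # Toy framework: b attacks a and c attacks b; the probabilities are made up.
        args = {"a", "b", "c"}
        attacks = {("b", "a"), ("c", "b")}
        p = {"a": 1.0, "b": 0.8, "c": 0.5}
        print(acceptance_probability("a", args, attacks, p))   # ~0.6

    Uncertainty about the credibility of premises or about the acceptance status of arguments and statements would be layered on top of such a setting in the structured and (semi-)abstract accounts the paper unifies.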