2,842 research outputs found

    A Two-Phase Dialogue Game for Skeptical Preferred Semantics


    Historical overview of formal argumentation


    A Labelling Framework for Probabilistic Argumentation

    The combination of argumentation and probability paves the way to new accounts of qualitative and quantitative uncertainty, thereby offering new theoretical and applicative opportunities. Due to a variety of interests, probabilistic argumentation is approached in the literature with different frameworks, pertaining to structured and abstract argumentation, and with respect to diverse types of uncertainty, in particular the uncertainty on the credibility of the premises, the uncertainty about which arguments to consider, and the uncertainty on the acceptance status of arguments or statements. Towards a general framework for probabilistic argumentation, we investigate a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-) abstract account, along with diverse types of uncertainty. Our framework provides a systematic treatment of various kinds of uncertainty and of their relationships and allows us to back or question assertions from the literature.
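
    As a rough illustration of one of the kinds of uncertainty mentioned above (uncertainty about which arguments to consider), the Python sketch below estimates the probability that an argument is labelled IN under the grounded labelling by enumerating the subframeworks induced by independent inclusion probabilities. This follows the generic "constellations" reading of probabilistic argumentation rather than the specific labelling framework proposed in this paper, and the arguments, attacks and probabilities are invented for the example.

    from itertools import combinations

    # Illustrative (invented) framework: arguments with independent inclusion
    # probabilities and an attack relation between them.
    prob = {"a": 0.9, "b": 0.6, "c": 0.7}
    attacks = {("b", "a"), ("c", "b")}

    def grounded_in(args, atts):
        """Arguments labelled IN by the grounded labelling of (args, atts)."""
        in_set = set()
        changed = True
        while changed:
            changed = False
            for x in args:
                if x in in_set:
                    continue
                attackers = {y for (y, z) in atts if z == x}
                # x becomes IN once every attacker is itself attacked by an IN argument
                if all(any((w, y) in atts for w in in_set) for y in attackers):
                    in_set.add(x)
                    changed = True
        return in_set

    def acceptance_probability(target):
        """P(target is IN), summed over all induced subframeworks."""
        total = 0.0
        names = list(prob)
        for r in range(len(names) + 1):
            for subset in combinations(names, r):
                p = 1.0
                for x in names:
                    p *= prob[x] if x in subset else 1 - prob[x]
                sub_atts = {(y, z) for (y, z) in attacks if y in subset and z in subset}
                if target in grounded_in(set(subset), sub_atts):
                    total += p
        return total

    print(acceptance_probability("a"))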

    ProCLAIM: an argument-based model for deliberating over safety critical actions

    In this thesis we present an argument-based model, ProCLAIM, intended to provide a setting for heterogeneous agents to deliberate on whether a proposed action is safe; that is, whether the proposed action is expected to cause some undesirable side effect that would justify not undertaking it. This is particularly relevant in safety-critical environments, where the consequences of an inappropriate action may be catastrophic. For the practical realisation of the deliberations the model features a mediator agent with three main tasks: 1) guide the participating agents in what their valid argumentation moves are at each stage of the deliberation; 2) decide whether submitted arguments should be accepted on the basis of their relevance; and finally, 3) evaluate the accepted arguments in order to provide an assessment of whether the proposed action should or should not be undertaken, where the argument evaluation is based on agreed domain knowledge (e.g. guidelines and regulations), evidence and the decision makers' expertise. To motivate ProCLAIM's practical value and generality, the model is applied in two scenarios: human organ transplantation and industrial wastewater management. In the former scenario, ProCLAIM is used to facilitate the deliberation between two medical doctors on whether an available organ is or is not suitable for a particular potential recipient (i.e. whether it is safe to transplant the organ). In the latter scenario, a number of agents deliberate on whether an industrial discharge is environmentally safe.
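
    To give a concrete, if highly simplified, picture of the mediator's three tasks, the Python sketch below implements a toy mediator that lists the currently valid moves, filters submitted arguments with a crude relevance test, and assesses the safety of the proposed action from the surviving arguments. The relevance check and the evaluation rule are placeholder assumptions standing in for the guideline-, evidence- and expertise-based knowledge that ProCLAIM actually draws on, and all example arguments and names are hypothetical.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Argument:
        agent: str
        claim: str              # e.g. "unsafe: donor tested positive for hepatitis B"
        targets: Optional[str]  # claim this argument attacks, if any

    @dataclass
    class Mediator:
        # Toy stand-in for ProCLAIM's mediator agent and its three tasks.
        accepted: list = field(default_factory=list)

        def valid_moves(self):
            # Task 1: guide agents -- initially only safety objections may be raised.
            if not self.accepted:
                return {"raise_safety_objection"}
            return {"raise_safety_objection", "attack_accepted_claim"}

        def submit(self, arg, relevant_topics):
            # Task 2: accept a submitted argument only if it is deemed relevant.
            if any(topic in arg.claim for topic in relevant_topics):
                self.accepted.append(arg)
                return True
            return False

        def assess(self):
            # Task 3: a deliberately crude evaluation -- the action is judged unsafe
            # while some accepted safety objection remains unattacked.
            attacked = {a.targets for a in self.accepted if a.targets}
            open_objections = [a for a in self.accepted
                               if a.claim.startswith("unsafe") and a.claim not in attacked]
            return "do not undertake the action" if open_objections else "action appears safe"

    mediator = Mediator()
    print(mediator.valid_moves())
    mediator.submit(Argument("doctor_A", "unsafe: donor tested positive for hepatitis B", None),
                    relevant_topics={"hepatitis"})
    mediator.submit(Argument("doctor_B", "the recipient is already immune to hepatitis B",
                             "unsafe: donor tested positive for hepatitis B"),
                    relevant_topics={"hepatitis", "immune"})
    print(mediator.assess())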

    Towards a framework for computational persuasion with applications in behaviour change

    Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing are heavily dependent on persuasion. But many other activities involve persuasion, such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing the behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
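
    The Python sketch below illustrates, in a deliberately simplified form, the kind of loop an APS might run: consult a user model, choose the next argument according to some strategy, and stop once the persuadee appears convinced. It is only a hedged illustration of the general idea, not the formal dialogue models or probabilistic user models studied in computational persuasion; the beliefs, the update rule and the threshold are all invented for the example.

    # Hypothetical user model: the persuadee's prior degree of belief in each argument.
    user_belief = {
        "smoking doubles your risk of heart disease": 0.7,
        "quitting saves you a substantial amount of money": 0.9,
        "your family worries about your health": 0.5,
    }

    def choose_argument(beliefs, already_used):
        """A very simple strategy: present the unused argument the persuadee
        is most likely to find believable."""
        candidates = {a: p for a, p in beliefs.items() if a not in already_used}
        return max(candidates, key=candidates.get) if candidates else None

    def run_dialogue(beliefs, persuasion_threshold=0.8, max_turns=3):
        used = set()
        conviction = 0.0
        for _ in range(max_turns):
            arg = choose_argument(beliefs, used)
            if arg is None:
                break
            used.add(arg)
            # Toy update rule: conviction grows with each believable argument presented.
            conviction += (1 - conviction) * beliefs[arg]
            print(f"APS: {arg}  (estimated conviction: {conviction:.2f})")
            if conviction >= persuasion_threshold:
                return "persuaded"
        return "not persuaded"

    print(run_dialogue(user_belief))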

    Deception


    Resilience, reliability, and coordination in autonomous multi-agent systems

    Acknowledgements: The research reported in this paper was funded and supported by various grants over the years: Robotics and AI in Nuclear (RAIN) Hub (EP/R026084/1); Future AI and Robotics for Space (FAIR-SPACE) Hub (EP/R026092/1); Offshore Robotics for Certification of Assets (ORCA) Hub (EP/R026173/1); the Royal Academy of Engineering under the Chair in Emerging Technologies scheme; Trustworthy Autonomous Systems “Verifiability Node” (EP/V026801); Scrutable Autonomous Systems (EP/J012084/1); Supporting Security Policy with Effective Digital Intervention (EP/P011829/1); The International Technology Alliance in Network and Information Sciences.