
    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches to non-monotonic reasoning in Artificial Intelligence (AI), and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This mirrors human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three AI approaches for implementing non-monotonic models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence. Hence, reasoning about them is likely suitable for modelling with non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
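
    To make the defeasible-argumentation idea concrete, here is a minimal, hypothetical sketch of Dung-style abstract argumentation under grounded semantics: a conclusion supported by argument a is first defeated by b, then reinstated when new evidence c defeats b. The arguments and attack relation are invented for illustration and are not taken from the thesis's knowledge bases.

```python
# Minimal sketch of defeasible inference via Dung-style abstract argumentation.
# Arguments and attacks below are hypothetical placeholders, not the thesis's models.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension: repeatedly accept arguments whose
    attackers are all already defeated, then defeat everything they attack."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:                     # all attackers are out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# New evidence "c" attacks "b", which previously defeated "a", so the conclusion
# supported by "a" is reinstated (non-monotonic retraction of b's defeat).
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(grounded_extension(args, atts))   # prints {'a', 'c'} (set order may vary)
```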

    A model to support collective reasoning: Formalization, analysis and computational assessment

    Inspired by e-participation systems, in this paper we propose a new model to represent human debates, together with methods to obtain collective conclusions from them. This model overcomes drawbacks of existing approaches by allowing users to introduce new pieces of information into the discussion, to relate them to existing pieces, and to express their opinion on the pieces proposed by other users. In addition, our model does not assume that users' opinions are rational in order to extract information from them, an assumption that significantly limits current approaches. Instead, we define a weaker notion of rationality that characterises coherent opinions, and we consider different scenarios based on the coherence of individual opinions and the level of consensus that users have on the debate structure. Considering these two factors, we analyse the outcomes of different opinion aggregation functions that compute a collective decision based on the individual opinions and the debate structure. In particular, we demonstrate that aggregated opinions can be coherent even if there is a lack of consensus and individual opinions are not coherent. We conclude our analysis with a computational evaluation demonstrating that collective opinions can be computed efficiently for real-sized debates.
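
    The aggregation idea can be illustrated with a small, simplified sketch: individual opinions over the pieces of a debate are combined by majority, and a weak coherence check rejects collective outcomes that accept two mutually conflicting pieces. The aggregation rule and coherence notion below are crude stand-ins for the paper's formal definitions, and the statements and opinions are made up.

```python
# Illustrative sketch (not the paper's exact formalism): majority aggregation of
# individual opinions over a debate graph, plus a simple coherence check that
# forbids simultaneously accepting two pieces where one attacks the other.
from collections import Counter

def aggregate_majority(opinions, statements):
    """opinions: list of dicts mapping statement -> +1 (agree) / -1 (disagree).
    Returns the set of statements with strictly positive net support."""
    tally = Counter()
    for op in opinions:
        for s, v in op.items():
            tally[s] += v
    return {s for s in statements if tally[s] > 0}

def is_coherent(accepted, attacks):
    """Coherent here means: no accepted statement attacks another accepted one."""
    return not any(x in accepted and y in accepted for (x, y) in attacks)

statements = {"s1", "s2", "s3"}
attacks = {("s2", "s1")}            # s2 contradicts s1
opinions = [
    {"s1": +1, "s2": -1, "s3": +1},
    {"s1": +1, "s2": +1},
    {"s2": -1, "s3": +1},
]
collective = aggregate_majority(opinions, statements)
print(collective, is_coherent(collective, attacks))   # {'s1', 's3'} True (set order may vary)
```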

    An Empirical Evaluation of the Inferential Capacity of Defeasible Argumentation, Non-monotonic Fuzzy Reasoning and Expert Systems

    Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on comparing distinct techniques, and in particular on examining their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. An experiment was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common metrics of evaluation in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.
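
    As a rough illustration of the kind of fuzzy-reasoning baseline compared here, the sketch below maps two hypothetical workload features (effort and time pressure) to a workload score using triangular memberships, a tiny rule base and weighted-average defuzzification. The feature names, memberships and rules are assumptions for illustration only, not the knowledge bases elicited from the experts.

```python
# Toy Mamdani-style fuzzy inference for a workload score in [0, 1].
# Features, memberships and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def low(x):  return tri(x, -0.01, 0.0, 0.5)
def high(x): return tri(x, 0.5, 1.0, 1.01)

def workload(effort, time_pressure):
    # Each rule: (firing strength with min as AND/max as OR, proposed output level).
    rules = [
        (min(high(effort), high(time_pressure)), 0.9),  # both high  -> high workload
        (min(low(effort),  low(time_pressure)),  0.1),  # both low   -> low workload
        (max(high(effort), high(time_pressure)), 0.6),  # either high -> moderate-high
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5                    # weighted-average defuzzification

print(round(workload(0.8, 0.3), 3))   # 0.6
```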

    A Cooperative Approach for Composite Ontology Matching

    Ontologies have proven to be an essential element in a range of applications in which knowledge plays a key role. Resolving the semantic heterogeneity problem is crucial to allow interoperability between ontology-based systems. This makes automatic ontology matching, as an anticipated solution to semantic heterogeneity, an important research issue. Many different approaches to the matching problem have emerged from the literature. An important issue in ontology matching is to find effective ways of choosing among many techniques and their variations, and then combining their results. An innovative and promising option is to formalize the combination of matching techniques using agent-based approaches, such as cooperative negotiation and argumentation. In this thesis, a formalization of the ontology matching problem following an agent-based approach is proposed and evaluated using state-of-the-art data sets. The results show that the consensus obtained by negotiation and argumentation is not necessarily an improvement over every individual result, but represents intermediate values that are close to those of the best matcher. As the best matcher may vary with the specific characteristics of each data set, cooperative approaches offer an advantage.
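
    The combination step can be pictured with a toy sketch that averages the similarity scores several matchers assign to candidate correspondences and keeps those reaching a threshold. This crude consensus rule stands in for the negotiation and argumentation mechanisms the thesis actually formalizes; the matcher names, entity pairs and scores are invented.

```python
# Illustrative sketch of combining several matchers' similarity scores into one
# alignment; a simple averaging consensus stands in for the thesis's agent-based
# negotiation/argumentation machinery.

def combine(matcher_scores, threshold=0.6):
    """matcher_scores: {matcher_name: {(entity1, entity2): similarity}}.
    A correspondence is kept when the average score over the matchers that
    rated it reaches the threshold."""
    candidates = {c for scores in matcher_scores.values() for c in scores}
    alignment = {}
    for c in candidates:
        votes = [s[c] for s in matcher_scores.values() if c in s]
        avg = sum(votes) / len(votes)
        if avg >= threshold:
            alignment[c] = round(avg, 2)
    return alignment

scores = {
    "string_matcher":   {("Person", "Human"): 0.4, ("Author", "Writer"): 0.7},
    "semantic_matcher": {("Person", "Human"): 0.9, ("Author", "Writer"): 0.8},
}
print(combine(scores))   # e.g. {('Person', 'Human'): 0.65, ('Author', 'Writer'): 0.75} (order may vary)
```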

    Comparing and Extending the Use of Defeasible Argumentation with Quantitative Data in Real-World Contexts

    Dealing with uncertain, contradicting, and ambiguous information is still a central issue in Artificial Intelligence (AI). As a result, many formalisms have been proposed or adapted so as to consider non-monotonicity. A non-monotonic formalism is one that allows the retraction of previous conclusions or claims, from premises, in light of new evidence, offering some desirable flexibility when dealing with uncertainty. Among the possible options, knowledge-based non-monotonic reasoning approaches have seen increasing use in practice. Nonetheless, only a limited number of works have performed any sort of comparison among them. This research article focuses on evaluating the inferential capacity of defeasible argumentation, a formalism particularly envisioned for modelling non-monotonic reasoning. In addition, fuzzy reasoning and expert systems, extended to handle non-monotonic reasoning, are selected and employed as baselines due to their wide and accepted use within the AI community. Computational trust was selected as the domain of application of such models. Trust is an ill-defined construct; hence, reasoning applied to the inference of trust can be seen as non-monotonic. Inference models were designed to assign trust scalars to editors of the Wikipedia project. Scalars assigned to recognised trustworthy editors provided the basis for the analysis of the models’ inferential capacity according to evaluation metrics from the domain of computational trust. In particular, argument-based models demonstrated greater robustness than those built upon the baselines, regardless of the knowledge bases or datasets employed. This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches. It provides publicly available implementations of the designed inference models, which may be a useful aid to scholars interested in performing non-monotonic reasoning activities. It adds to previous works by empirically enhancing the generalisability of defeasible argumentation as a compelling approach to reason with quantitative data and uncertain knowledge.
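
    A hypothetical sketch of how an argument-based trust model might map accepted arguments to a trust scalar for an editor: arguments whose conditions hold and that survive their attackers contribute their trust values, and the score is their mean. The feature names, rules, attack relation and averaging step are illustrative assumptions, simpler than the argumentation semantics used in the article.

```python
# Hypothetical sketch: from accepted arguments to a trust scalar in [0, 1].
# Rules, features and attacks are invented, not the article's knowledge bases.

def trust_score(editor, rules, attacks):
    """rules: {name: (condition_fn, trust_value)}; attacks: {(winner, loser)}.
    An argument is activated when its condition holds; an activated argument is
    defeated when some activated attacker targets it. The score is the mean
    trust value of the undefeated (accepted) arguments."""
    active = {n for n, (cond, _) in rules.items() if cond(editor)}
    defeated = {loser for (winner, loser) in attacks
                if winner in active and loser in active}
    accepted = active - defeated
    values = [rules[n][1] for n in accepted]
    return sum(values) / len(values) if values else 0.5

rules = {
    "many_edits":    (lambda e: e["edits"] > 1000,    0.8),
    "recent_blocks": (lambda e: e["blocks"] > 0,      0.2),
    "veteran":       (lambda e: e["age_days"] > 365,  0.7),
}
attacks = {("recent_blocks", "many_edits")}   # a block undercuts the edit-count argument
editor = {"edits": 5000, "blocks": 0, "age_days": 900}
print(trust_score(editor, rules, attacks))    # 0.75
```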

    Computational Persuasion using Chatbots based on Crowdsourced Argument Graphs & Concerns

    As computing becomes involved in every sphere of life, so too has persuasion become a target for computer-based solutions. Conversational agents, also known as chatbots, are versatile tools that have the potential to be used as agents in dialogical argumentation systems, where the chatbot acts as the persuader and the human agent as the persuadee, thereby offering a cost-effective and scalable alternative to in-person consultations. To allow the user to type his or her argument in free-text input (as opposed to selecting arguments from a menu), the chatbot needs to be able to (1) “understand” the concern the user is raising in their argument and (2) give an appropriate counterargument that addresses that concern. In this thesis I describe (1) how to acquire arguments for the construction of the chatbot’s knowledge base with the help of crowdsourcing, (2) how to automatically identify the concerns that arguments address, and (3) how to construct the chatbot’s knowledge base in the form of an argument graph that can be used during persuasive dialogues with users. I evaluated my methods in four case studies covering several domains (physical activity, meat consumption, UK University Fees and COVID-19 vaccination). In each case study I implemented a chatbot that engaged in argumentative dialogues with participants and measured the participants’ change of stance before and after chatting with the bot. In all four case studies the chatbot showed statistically significant success in persuading people either to consider changing their behaviour or to change their stance.
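
    The concern-identification and counterargument-selection steps can be sketched, very roughly, as follows: keyword matching stands in for the concern classifier, and a two-entry lookup table stands in for the crowdsourced argument graph. All concern names, keywords and argument texts are made up for illustration and do not come from the thesis.

```python
# Simplified stand-in for the chatbot's response step: identify the concern a
# user's argument raises, then return a counterargument addressing that concern.

CONCERN_KEYWORDS = {
    "side_effects": ["side effect", "reaction", "unsafe"],
    "cost":         ["expensive", "cost", "afford"],
}

# Counterarguments indexed by the concern they address (toy argument graph).
COUNTERARGUMENTS = {
    "side_effects": "Serious reactions are rare and are monitored by regulators.",
    "cost":         "The vaccination is offered free of charge.",
}

def identify_concern(user_argument: str) -> str | None:
    """Return the first concern whose keywords appear in the user's argument."""
    text = user_argument.lower()
    for concern, keywords in CONCERN_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return concern
    return None

def respond(user_argument: str) -> str:
    concern = identify_concern(user_argument)
    if concern is None:
        return "Could you tell me more about what worries you?"
    return COUNTERARGUMENTS[concern]

print(respond("I'm worried the vaccine might have bad side effects."))
```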

    Improving Practical Reasoning and Argumentation

    This thesis justifies the need for, and develops, a new integrated model of practical reasoning and argumentation. After framing the work in terms of what is reasonable rather than what is rational (chapter 1), I apply the model for practical argumentation analysis and evaluation provided by Fairclough and Fairclough (2012) to a paradigm case of unreasonable individual practical argumentation provided by mass murderer Anders Behring Breivik (chapter 2). The application shows that, by following the model, Breivik is able with relative ease to conclude that his reasoning to mass murder is reasonable, which is understood to be an unacceptable result. The causes that allow the model to reach such a conclusion are identified as conceptual confusions ingrained in the model, a tension in how values function within the model, and a lack of creativity on Breivik's part. Distinguishing dialectical from dialogical, and reasoning from argumentation, for individual and multiple participants, chapter 3 addresses these conceptual confusions and helps lay the foundation for the design of a new integrated model of practical reasoning and argumentation (chapter 4). After laying out the theoretical aspects of the new model, I then use it to re-test Breivik's reasoning in light of a developed discussion regarding the motivation for the new place and role of moral considerations (chapter 5). The application of the new model shows ways in which Breivik could have concluded that his practical argumentation was unreasonable, and the model is thus argued to improve upon the Fairclough and Fairclough model. It is acknowledged, however, that since the model cannot guarantee a reasonable conclusion, improving the critical creative capacity of the individual using it is also of paramount importance (chapter 6). The thesis concludes by discussing the contemporary importance of improving practical reasoning and by pointing to areas for further research (chapter 7).