
    A Comparative Study of Defeasible Argumentation and Non-monotonic Fuzzy Reasoning for Elderly Survival Prediction Using Biomarkers

    Computational argumentation has been gaining momentum as a solid theoretical research discipline for inference under uncertainty with incomplete and contradictory knowledge. However, its practical counterpart is underdeveloped, with few studies investigating its impact in real-world settings and with real knowledge. In this study, computational argumentation is compared against non-monotonic fuzzy reasoning and evaluated in the domain of biological markers for the prediction of mortality in an elderly population. Different non-monotonic argument-based models and fuzzy reasoning models have been designed using an extensive knowledge base gathered from an expert in the field. An analysis of the true positive and false positive rates of the inferences of such models has been performed. Findings indicate a superior inferential capacity of the designed argument-based models.

    A Qualitative Investigation of the Degree of Explainability of Defeasible Argumentation and Non-monotonic Fuzzy Reasoning

    Defeasible argumentation has advanced as a solid theoretical research discipline for inference under uncertainty. Scholars have predominantly focused on the construction of argument-based models for demonstrating non-monotonic reasoning, adopting the notions of arguments and conflicts. However, they have only marginally examined the degree of explainability that this approach can offer when explaining inferences to humans in real-world applications. Model explanations are extremely important in areas such as medical diagnosis because they can increase human trust in automatic inferences. In this research, the inferential processes of defeasible argumentation and non-monotonic fuzzy reasoning are meticulously described, exploited and qualitatively compared. A number of properties have been selected for this comparison, including understandability, simulatability, algorithmic transparency, post-hoc interpretability, computational complexity and extensibility. Findings show how defeasible argumentation can lead to the construction of inferential non-monotonic models with a higher degree of explainability than those built with fuzzy reasoning.

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This pattern resembles human reasoning, which draws conclusions in the absence of information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three AI approaches for implementing non-monotonic models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications is their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence, so the reasoning applied to them is likely suitable for modelling with non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
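    The retraction of a conclusion in the light of new information, as described above, can be sketched with Dung's grounded semantics, one of the standard argumentation semantics. The following is a minimal, hypothetical Python sketch (not code from the thesis): a conclusion accepted from an initial argument graph is withdrawn once a new counter-argument arrives.

```python
# Minimal sketch of non-monotonic retraction via Dung's grounded semantics.
# An argument is accepted (IN) once all of its attackers are rejected (OUT);
# it is rejected once some accepted argument attacks it.

def grounded_extension(arguments, attacks):
    IN, OUT = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in IN or a in OUT:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= OUT:        # every attacker already rejected
                IN.add(a)
                changed = True
            elif attackers & IN:        # attacked by an accepted argument
                OUT.add(a)
                changed = True
    return IN

# Initial knowledge: argument B attacks A; B itself is unattacked.
print(sorted(grounded_extension({"A", "B"}, {("B", "A")})))  # ['B']

# New evidence arrives as argument C attacking B: B's acceptance is retracted
# and A becomes acceptable again -- the hallmark of non-monotonic reasoning.
print(sorted(grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")})))  # ['A', 'C']
```

    The fixpoint loop mirrors the usual characteristic-function construction of the grounded extension; richer semantics differ only in how the accepted set is chosen.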

    ArgFrame: A Multi-Layer, Web, Argument-Based Framework for Quantitative Reasoning

    Multiple systems have been proposed to perform computational argumentation activities, but there is a lack of options for dealing with quantitative inferences. ArgFrame, a multi-layer, web-based, argument-based framework, is proposed as a tool to perform automated reasoning with numerical data. It uses Boolean logic for the creation of if-then rules and attacking rules. In turn, these rules/arguments can be activated or not by input data, have their attacks resolved (following a Dung or rank-based semantics), and finally be aggregated in different ways in order to produce a prediction (a number). The back-end of the framework is implemented in PHP, and a JavaScript interface is provided for creating arguments and attacks among arguments, and for performing case-by-case analyses.
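    The pipeline just described (rule activation on input data, attack resolution under a semantics, aggregation into a number) can be illustrated with a small, hypothetical Python sketch; ArgFrame itself is implemented in PHP and JavaScript, and the rule names, attack, and trust values below are invented for illustration only.

```python
# Hypothetical sketch of an ArgFrame-style pipeline: fire if-then rules on a
# case, resolve attacks among the fired rules with a grounded-style fixpoint,
# then aggregate the surviving conclusions into a single number (here, by
# averaging).

def infer(rules, attacks, case):
    # 1. Activation: keep the rules whose boolean condition holds for the case.
    fired = {name: value for name, (cond, value) in rules.items() if cond(case)}
    # 2. Resolution: accept a fired rule once all of its fired attackers are
    #    rejected; reject it once an accepted fired rule attacks it.
    IN, OUT = set(), set()
    changed = True
    while changed:
        changed = False
        for a in fired:
            if a in IN or a in OUT:
                continue
            attackers = {x for (x, t) in attacks if t == a and x in fired}
            if attackers <= OUT:
                IN.add(a)
                changed = True
            elif attackers & IN:
                OUT.add(a)
                changed = True
    # 3. Aggregation: average the values carried by the accepted rules.
    values = [fired[a] for a in IN]
    return sum(values) / len(values) if values else None

# Invented trust-style rules: a condition on the case, plus a trust value.
rules = {
    "many_edits":   (lambda c: c["edits"] > 100, 0.9),
    "recent_block": (lambda c: c["blocked"], 0.1),
}
attacks = {("recent_block", "many_edits")}

print(infer(rules, attacks, {"edits": 500, "blocked": False}))  # 0.9
print(infer(rules, attacks, {"edits": 500, "blocked": True}))   # 0.1
```

    In the second case both rules fire, the attack from "recent_block" defeats "many_edits", and only the attacker's value survives aggregation.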

    Exploring the potential of defeasible argumentation for quantitative inferences in real-world contexts: An assessment of computational trust

    Argumentation has recently shown appealing properties for inference under uncertainty and conflicting knowledge. However, few studies have examined its capacity to exploit real-world knowledge bases for performing quantitative, case-by-case inferences. This study analyses the inferential capacity of a set of argument-based models, designed by a human reasoner, for the problem of trust assessment. Specifically, these models are exploited using data from Wikipedia and are aimed at inferring the trustworthiness of its editors. A comparison against non-deductive approaches revealed that these models were superior according to the values inferred for recognised trustworthy editors. This research contributes to the field of argumentation by employing a replicable modular design suitable for modelling reasoning under uncertainty in distinct real-world domains.

    Inferential Models of Mental Workload with Defeasible Argumentation and Non-monotonic Fuzzy Reasoning: a Comparative Study

    Inferences through knowledge-driven approaches have been researched extensively in the field of Artificial Intelligence. Among such approaches, argumentation theory has recently shown appealing properties for inference under uncertainty and conflicting evidence. Nonetheless, few studies examine its inferential capacity against other quantitative theories of reasoning under uncertainty using real-world knowledge bases. This study focuses on a comparison between argumentation theory and non-monotonic fuzzy reasoning when applied to modelling the construct of human mental workload (MWL). Different argument-based and non-monotonic fuzzy reasoning models, aimed at inferring the MWL imposed by a selection of learning tasks in a third-level educational context, have been designed. These models are built upon knowledge bases that contain uncertain and conflicting evidence provided by human experts. An analysis of the convergent and face validity of such models has been performed. Results suggest a superior inferential capacity of argument-based models over fuzzy reasoning-based models.

    Comparing and Extending the Use of Defeasible Argumentation with Quantitative Data in Real-World Contexts

    Dealing with uncertain, contradicting, and ambiguous information is still a central issue in Artificial Intelligence (AI). As a result, many formalisms have been proposed or adapted to account for non-monotonicity. A non-monotonic formalism is one that allows the retraction of previous conclusions or claims, drawn from premises, in light of new evidence, offering desirable flexibility when dealing with uncertainty. Among the possible options, knowledge-based, non-monotonic reasoning approaches have seen increasing use in practice. Nonetheless, only a limited number of works have performed any sort of comparison among them. This research article focuses on evaluating the inferential capacity of defeasible argumentation, a formalism particularly envisioned for modelling non-monotonic reasoning. In addition, fuzzy reasoning and expert systems, extended to handle non-monotonic reasoning, are selected and employed as baselines, due to their vast and accepted use within the AI community. Computational trust was selected as the domain of application for such models. Trust is an ill-defined construct; hence, reasoning applied to its inference can be seen as non-monotonic. Inference models were designed to assign trust scalars to editors of the Wikipedia project. Scalars assigned to recognised trustworthy editors provided the basis for the analysis of the models' inferential capacity according to evaluation metrics from the domain of computational trust. In particular, argument-based models demonstrated more robustness than those built upon the baselines, regardless of the knowledge bases or datasets employed. This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches. It provides publicly available implementations of the designed inference models, which may aid scholars interested in performing non-monotonic reasoning activities. It adds to previous works by empirically enhancing the generalisability of defeasible argumentation as a compelling approach for reasoning with quantitative data and uncertain knowledge.

    Comparing Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning Methods for a Computational Trust Problem with Wikipedia

    Computational trust is an ever more present issue with the surge in autonomous agent development. When trust is represented as a defeasible phenomenon, problems associated with computational trust may be tackled by appropriate reasoning methods. This paper compares two such methods, Defeasible Argumentation and Non-Monotonic Fuzzy Logic, to assess which is more effective at solving a computational trust problem centred around Wikipedia editors. Through the application of these methods with real data and a set of knowledge bases, it was found that the Fuzzy Logic approach was statistically significantly better than the Argumentation approach in its inferential capacity.

    An Empirical Evaluation of the Inferential Capacity of Defeasible Argumentation, Non-monotonic Fuzzy Reasoning and Expert Systems

    Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists comparing distinct techniques, and in particular examining their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. Experimental work was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models by employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common metrics of evaluation in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.

    Examining the Modelling Capabilities of Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning

    Knowledge-representation and reasoning methods have been extensively researched within Artificial Intelligence. Among these, argumentation has emerged as an ideal paradigm for inference under uncertainty with conflicting knowledge. Its value has been predominantly demonstrated via analyses of the topological structure of graphs of arguments and of its formal properties. However, limited research exists on the examination and comparison of its inferential capacity in real-world modelling tasks against other knowledge-representation and non-monotonic reasoning methods. This study focuses on a novel comparison between defeasible argumentation and non-monotonic fuzzy reasoning when applied to representing and assessing the ill-defined construct of human mental workload. Different argument-based and non-monotonic fuzzy reasoning models have been designed considering knowledge bases of incremental complexity containing uncertain and conflicting information provided by a human reasoner. Findings showed that their inferences have a moderate convergent and face validity when compared, respectively, to those of an existing baseline instrument for mental workload assessment and to a perception of mental workload self-reported by human participants. This confirmed that these models also reasonably represent the construct under consideration. Furthermore, argument-based models had, on average, a lower mean squared error against the self-reported perception of mental workload when compared to the fuzzy reasoning models and the baseline instrument. The contribution of this research is to provide scholars interested in formalisms for knowledge representation and non-monotonic reasoning with a novel approach for empirically comparing their inferential capacity.