
    Formalising Human Mental Workload as a Defeasible Computational Concept

    Human mental workload has gained importance in the last few decades as a fundamental design concept in human-computer interaction. It can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. For people interacting with interfaces, computers and technological devices in general, the construct plays an important role. At low levels, people often feel annoyed and frustrated while processing information; at higher levels, mental workload becomes critical and dangerous, as it leads to confusion, decreases the performance of information processing and increases the chance of errors and mistakes. It is extensively documented that both mental overload and underload negatively affect performance. Hence, designers and practitioners who are ultimately interested in system or human performance need answers about operator workload at all stages of system design and operation. At an early system design phase, designers require an explicit model to predict the mental workload imposed by their technologies on end-users so that alternative system designs can be evaluated. However, human mental workload is a multifaceted and complex construct mainly applied in the cognitive sciences. A plethora of ad-hoc definitions can be found in the literature. Generally, it is not an elementary property; rather, it emerges from the interaction between the requirements of a task, the circumstances under which it is performed and the skills, behaviours and perceptions of the operator. Although measuring mental workload has advantages in interaction and interface design, its formalisation as an operational and computational construct has not been sufficiently addressed. 
Many researchers agree that too many ad-hoc models are present in the literature and that they are applied subjectively by mental workload designers, thereby limiting their application in different contexts and making comparisons across different models difficult. This thesis introduces a novel computational framework for representing and assessing human mental workload based on defeasible reasoning. The starting point is the investigation of the nature of human mental workload, which appears to be a defeasible phenomenon. A defeasible concept is one built upon a set of arguments that can be defeated by adding additional arguments. The word ‘defeasible’ is inherited from defeasible reasoning, a form of reasoning built upon reasons that can be defeated. It is also known as non-monotonic reasoning because of the technical property (non-monotonicity) of the logical formalisms aimed at modelling defeasible reasoning activity. Here, a conclusion or claim, derived from the application of previous knowledge, can be retracted in the light of new evidence. Formally, state-of-the-art defeasible reasoning models are implemented employing argumentation theory, a multi-disciplinary paradigm that incorporates elements of philosophy, psychology and sociology. It systematically studies how arguments can be built, sustained or discarded in a reasoning process, and it investigates the validity of their conclusions. Since mental workload can be seen as a defeasible phenomenon, formal defeasible argumentation theory may have a positive impact on its representation and assessment. Mental workload can be captured, analysed and measured in ways that increase its understanding, allowing its use in practical activities. The research question investigated here is whether defeasible argumentation theory can enhance the representation of the construct of mental workload and improve the quality of its assessment in the field of human-computer interaction. 
In order to answer this question, the recurrent knowledge and evidence employed in state-of-the-art mental workload measurement techniques were first reviewed, along with their defeasible and non-monotonic properties. Secondly, an investigation of state-of-the-art computational techniques for implementing defeasible reasoning was carried out. This allowed the design of a modular framework for mental workload representation and assessment. The proposed solution was evaluated by comparing the sensitivity, diagnosticity and validity of the assessments produced by two instances of the framework against those produced by two well-known subjective mental workload assessment techniques (the NASA Task Load Index and the Workload Profile) in the context of human-web interaction. In detail, through an empirical user study, it was first demonstrated how these two state-of-the-art techniques can be translated into two particular instances of the framework while maintaining the same validity. In other words, the indexes of mental workload inferred by the two original instruments and those generated by their corresponding translations (instances of the framework) showed a positive and nearly perfect statistical correlation. Additionally, a new defeasible instance built with the framework showed better sensitivity and a higher diagnosticity capacity than the two selected state-of-the-art techniques. The former showed a higher convergent validity with the latter techniques, but a better concurrent validity with performance measures: the new defeasible instance generated indexes of mental workload that correlated better with the objective time for task completion than those of the two selected instruments. 
These findings support the research question, demonstrating how defeasible argumentation theory can be successfully adopted to support the representation of mental workload and to enhance the quality of its assessments. The main contribution of this thesis is the presentation of a methodology, developed as a formal modular framework, to represent mental workload as a defeasible computational concept and to assess it as a usable numerical index. This research contributes to the body of knowledge by providing a modular framework, built upon defeasible reasoning and formalised through argumentation theory, in which workload can be optimally measured, analysed, explained and applied in different contexts.
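The defeasible mechanism at the heart of this abstract can be illustrated with a minimal Dung-style argumentation framework. This is an illustrative sketch, not code from the thesis: the argument names and attacks are invented, and the grounded extension (the most sceptical acceptance semantics) is used to show how a previously accepted conclusion is retracted once a defeating argument is added.

```python
# Minimal Dung-style abstract argumentation framework (illustrative).
# Arguments are plain strings; attacks are (attacker, target) pairs.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension: iterate the characteristic
    function F(S) = {a | S defends a} from the empty set to a fixpoint."""
    def defends(s, a):
        # 'a' is defended by 's' if every attacker of 'a' is itself
        # attacked by some member of 's'.
        return all(any((c, b) in attacks for c in s)
                   for (b, target) in attacks if target == a)
    extension = set()
    while True:
        nxt = {a for a in arguments if defends(extension, a)}
        if nxt == extension:
            return extension
        extension = nxt

# With no counter-argument, 'high_effort' is accepted...
args = {"high_effort", "low_performance"}
atk = set()
assert "high_effort" in grounded_extension(args, atk)

# ...but adding a defeating argument retracts it (non-monotonicity).
args.add("task_was_practised")
atk.add(("task_was_practised", "high_effort"))
assert "high_effort" not in grounded_extension(args, atk)
```

Being the smallest complete extension, the grounded semantics accepts only what survives every attack; richer semantics (preferred, stable) also appear in the argumentation literature.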

    A defeasible reasoning framework for human mental workload representation and assessment

    Human mental workload (MWL) has gained importance in the last few decades as a fundamental design concept. It is a multifaceted, complex construct mainly applied in the cognitive sciences and has been defined in many different ways. Although measuring MWL has potential advantages in interaction and interface design, its formalisation as an operational and computational construct has not been sufficiently addressed. This research contributes to the body of knowledge by providing an extensible framework, built upon defeasible reasoning and implemented with argumentation theory (AT), in which MWL can be better defined, measured, analysed, explained and applied in different human–computer interactive contexts. User studies have demonstrated how a particular instance of this framework outperformed state-of-the-art subjective MWL assessment techniques in terms of sensitivity, diagnosticity and validity. This in turn encourages further application of defeasible AT for enhancing the representation of MWL and improving the quality of its assessment.

    Examining the Modelling Capabilities of Defeasible Argumentation and non-Monotonic Fuzzy Reasoning

    Knowledge-representation and reasoning methods have been extensively researched within Artificial Intelligence. Among these, argumentation has emerged as an ideal paradigm for inference under uncertainty with conflicting knowledge. Its value has been predominantly demonstrated via analyses of the topological structure of graphs of arguments and of its formal properties. However, limited research exists on the examination and comparison of its inferential capacity in real-world modelling tasks and against other knowledge-representation and non-monotonic reasoning methods. This study focuses on a novel comparison between defeasible argumentation and non-monotonic fuzzy reasoning when applied to the representation of the ill-defined construct of human mental workload and its assessment. Different argument-based and non-monotonic fuzzy reasoning models were designed, considering knowledge bases of incremental complexity containing uncertain and conflicting information provided by a human reasoner. Findings showed that their inferences have moderate convergent and face validity when compared, respectively, to those of an existing baseline instrument for mental workload assessment and to a perception of mental workload self-reported by human participants. This confirmed that these models also reasonably represent the construct under consideration. Furthermore, argument-based models had, on average, a lower mean squared error against the self-reported perception of mental workload when compared to fuzzy reasoning models and the baseline instrument. The contribution of this research is to provide scholars interested in formalisms for knowledge representation and non-monotonic reasoning with a novel approach for empirically comparing their inferential capacity.
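To make the fuzzy side of such a comparison concrete, the sketch below shows a minimal Mamdani-style fuzzy inference over two workload attributes. The attribute names, membership shapes and rule outputs are assumptions for illustration, and the non-monotonic extensions used in the study are not reproduced here.

```python
# Illustrative Mamdani-style fuzzy inference of a mental workload index
# from two 0-100 attribute ratings (assumed attributes, not the study's).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_workload(effort, frustration):
    """Fire two fuzzy rules and defuzzify with a firing-strength-weighted
    average of each rule's output level."""
    low_e, high_e = tri(effort, -50, 0, 50), tri(effort, 50, 100, 150)
    low_f, high_f = tri(frustration, -50, 0, 50), tri(frustration, 50, 100, 150)
    rules = [
        (min(low_e, low_f), 20.0),   # IF effort low AND frustration low THEN MWL low
        (max(high_e, high_f), 80.0), # IF effort high OR frustration high THEN MWL high
    ]
    strength = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / strength if strength else 50.0

assert infer_workload(0, 0) == 20.0      # clearly underloaded
assert infer_workload(100, 100) == 80.0  # clearly overloaded
```

Conflicting evidence (for example, high effort but low frustration) fires both rules at once, and the weighted average blends them numerically rather than defeating one rule outright, which is one practical difference from the argument-based models compared in the study.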

    Analysing online user activity to implicitly infer the mental workload of web-based tasks using defeasible reasoning

    Mental workload can be considered the amount of cognitive load or effort used over time to complete a task in a complex system. Determining the limits of mental workload can assist in optimising designs and in identifying whether user performance is affected by a design. Mental workload has also been presented as a defeasible concept, where one reason can defeat another, and a 5-layer schema for representing domain knowledge to infer mental workload using defeasible reasoning has compared favourably to state-of-the-art inference techniques. Other previous work investigated using records of user activity to measure mental workload at scale in web-based tasks. For this research, a solution design and an experiment were put together to analyse user activity from a web-based task and determine whether mental workload can be inferred implicitly using defeasible reasoning. While there was one promising result, only a weak correlation between the inferred values and reference Workload Profile values was found.

    An Empirical Evaluation of the Inferential Capacity of Defeasible Argumentation, Non-monotonic Fuzzy Reasoning and Expert Systems

    Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and also employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on the comparison of distinct techniques and, in particular, on the examination of their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. An experimental work was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common metrics of evaluation in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, that of argument-based models was lower, showing a superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on the comparison of distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on the examination of their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This mirrors human reasoning, which draws conclusions in the absence of information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three approaches in AI for implementing non-monotonic reasoning models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence. Hence, reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. 
In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.

    Inferential Models of Mental Workload with Defeasible Argumentation and Non-monotonic Fuzzy Reasoning: a Comparative Study

    Inferences through knowledge-driven approaches have been researched extensively in the field of Artificial Intelligence. Among such approaches, argumentation theory has recently shown appealing properties for inference under uncertainty and conflicting evidence. Nonetheless, there is a lack of studies examining its inferential capacity against other quantitative theories of reasoning under uncertainty with real-world knowledge bases. This study focuses on a comparison between argumentation theory and non-monotonic fuzzy reasoning when applied to modelling the construct of human mental workload (MWL). Different argument-based and non-monotonic fuzzy reasoning models, aimed at inferring the MWL imposed by a selection of learning tasks in a third-level educational context, have been designed. These models are built upon knowledge bases that contain uncertain and conflicting evidence provided by human experts. An analysis of the convergent and face validity of such models has been performed. Results suggest a superior inferential capacity of argument-based models over fuzzy reasoning-based models.

    Argumentation for Knowledge Representation, Conflict Resolution, Defeasible Inference and Its Integration with Machine Learning

    Modern machine learning is devoted to the construction of algorithms and computational procedures that can automatically improve with experience and learn from data. Defeasible argumentation has emerged as a sub-topic of artificial intelligence aimed at formalising common-sense qualitative reasoning. The former is an inductive approach to inference while the latter is deductive, each having advantages and limitations. A great challenge for theoretical and applied research in AI is their integration. The first aim of this chapter is to provide readers with an informal introduction to the basic notions of defeasible and non-monotonic reasoning. It then describes argumentation theory, a paradigm for implementing defeasible reasoning in practice, as well as the common multi-layer schema upon which argument-based systems are usually built. The second aim is to describe a selection of argument-based applications in the medical and health-care sectors, informed by the multi-layer schema. A summary of the features that emerge from the applications under review shows why defeasible argumentation is attractive for knowledge representation, conflict resolution and inference under uncertainty. Open problems and challenges in the field of argumentation are subsequently described, followed by a future outlook in which three points of integration with machine learning are proposed.
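The multi-layer schema on which argument-based systems are usually built is commonly presented as a pipeline from argument construction to a final accrued inference. The sketch below mirrors that layering under stated assumptions: the class names, example rules and the crude "unattacked" acceptability criterion are illustrative, not the chapter's own formalism.

```python
# Illustrative pipeline over the common multi-layer schema of
# argument-based systems (all names and rules are assumptions).
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    name: str
    premise: str        # key in the evidence that must hold
    conclusion: float   # numerical inference the argument supports

def build_arguments(evidence, rules):
    """Layer 1: instantiate the arguments whose premises hold."""
    return [a for a in rules if evidence.get(a.premise, False)]

def find_attacks(arguments, conflicts):
    """Layers 2-3: derive attacks from declared conflicts between arguments."""
    return {(x, y) for x in arguments for y in arguments
            if (x.name, y.name) in conflicts}

def accepted(arguments, attacks):
    """Layer 4: crude acceptability - keep arguments with no attacker
    (a stand-in for a full argumentation semantics)."""
    attacked = {y for (_, y) in attacks}
    return [a for a in arguments if a not in attacked]

def accrue(accepted_args):
    """Layer 5: accrue the accepted conclusions into a single index."""
    return (sum(a.conclusion for a in accepted_args) / len(accepted_args)
            if accepted_args else None)

rules = [Argument("high_effort", "effort_is_high", 80.0),
         Argument("practised", "task_was_practised", 30.0)]
conflicts = {("practised", "high_effort")}   # practice defeats the effort argument
evidence = {"effort_is_high": True, "task_was_practised": True}

args = build_arguments(evidence, rules)
index = accrue(accepted(args, find_attacks(args, conflicts)))
assert index == 30.0   # the defeated 'high_effort' argument is discarded
```

Each layer can be swapped independently (for example, replacing the acceptability criterion with grounded or preferred semantics), which is part of what makes the schema attractive for the applications reviewed in the chapter.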

    Formalising Human Mental Workload as Non-Monotonic Concept for Adaptive and Personalised Web-Design

    Web design has been evolving, with Web-based systems becoming more complex and structured due to the delivery of personalised information adapted to end-users. Although the information presented can be useful and well formatted, people have little mental capacity available for dealing with unusable systems. Subjective mental workload assessment tools are usually adopted to measure the impact of Web-tasks upon end-users, thanks to their ease of use, and are aimed at supporting design practices. The NASA Task Load Index subjective procedure has been taken as a reference technique for measuring mental workload, but it has its background in aircraft cockpits and in supervisory and process control environments. We argue that the tool is not fully appropriate for dealing with Web-information tasks, which are characterised by a wide spectrum of contexts of use, cognitive factors and individual user differences such as skill, background, emotional state and motivation. Furthermore, in this model, inputs are averaged without considering their mutual interactions and relations. We propose to see human mental workload as a non-monotonic concept and to model it via argumentation theory. The evaluation strategy includes comparisons with the NASA-TLX in terms of statistical correlation, sensitivity, diagnosticity, selectivity and reliability.

    An Investigation of Argumentation Theory for the Prediction of Survival in Elderly Using Biomarkers

    Research on the discovery, classification and validation of biological markers, or biomarkers, has grown extensively in recent decades. Newfound and correctly validated biomarkers have great potential as prognostic and diagnostic indicators, but present a complex relationship with pertinent endpoints such as survival or other disease manifestations. This research proposes the use of computational argumentation theory as a starting point for the resolution of this problem in cases where a large amount of data is unavailable. A knowledge base containing 51 different biomarkers and their associations with mortality risks in the elderly was provided by a clinician. It was applied to the construction of several argument-based models capable of inferring survival or non-survival. The prediction accuracy and sensitivity of these models were investigated, showing that they are in line with inductive classification using decision trees with limited data.