Approximate reasoning using terminological models
Term Subsumption Systems (TSSs) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language with well-defined semantics, and that incorporates a reasoning mechanism able to deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with uncertainty in knowledge bases. The objective of this research is to address the issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed through justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSSs. Also, because definitional knowledge is explicit and separate from the heuristic knowledge used for plausible inferences, the maintainability of expert systems could be improved.
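The core deduction the abstract mentions can be made concrete with a toy sketch. This is not the LOOM/CLASP API; it is a minimal illustration in which a concept is modelled as a set of defining features, and a more general concept subsumes a more specific one when the specific concept carries every feature of the general one.

```python
# Toy model of concept subsumption (illustrative only, not LOOM/CLASP):
# a concept is a frozenset of defining features; concept A subsumes
# concept B when every defining feature of A also holds of B.

def subsumes(general: frozenset, specific: frozenset) -> bool:
    """A more general concept imposes fewer (or equal) defining constraints."""
    return general <= specific

ANIMAL = frozenset({"animate"})
DOG = frozenset({"animate", "canine"})

print(subsumes(ANIMAL, DOG))  # True: every dog is an animal
print(subsumes(DOG, ANIMAL))  # False: not every animal is a dog
```

Real TSSs compute subsumption over structured concept definitions rather than flat feature sets, but the subset test captures the basic "more constraints means more specific" intuition.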
Extending uncertainty formalisms to linear constraints and other complex formalisms
Linear constraints occur naturally in many reasoning problems, and the information they represent is often uncertain. There is a difficulty in applying AI uncertainty formalisms to this situation, as their representation of the underlying logic, either as a mutually exclusive and exhaustive set of possibilities, or with a propositional or a predicate logic, is inappropriate (or at least unhelpful). To overcome this difficulty, we express reasoning with linear constraints as a logic, and develop the formalisms based on this different underlying logic. We focus in particular on a possibilistic logic representation of uncertain linear constraints, a lattice-valued possibilistic logic, an assumption-based reasoning formalism, and a Dempster-Shafer representation, proving some fundamental results for these extended systems. Our results on extending uncertainty formalisms also apply to a very general class of underlying monotonic logics.
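A standard inference pattern in possibilistic logic, which the paper's framework builds on, is that a conclusion classically entailed by a set of weighted premises inherits a necessity degree at least equal to the minimum of the premise weights. A minimal sketch of that rule (my own toy, not the paper's formalism):

```python
# Possibilistic "weakest-link" rule: if premises with necessity degrees
# w1, ..., wn classically entail a conclusion C, then N(C) >= min(wi).

def combined_necessity(premise_weights: list[float]) -> float:
    """Lower bound on the necessity degree of an entailed conclusion."""
    return min(premise_weights)

# Example with linear constraints: (x <= 3, N=0.8) and (y <= x, N=0.6)
# classically entail y <= 3, so N(y <= 3) >= 0.6.
print(combined_necessity([0.8, 0.6]))  # 0.6
```

The point of the paper is that the classical-entailment step here can be replaced by entailment over linear constraints (or any monotonic logic), while the min-combination of degrees carries over unchanged.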
AI & Law, logic and argument schemes
This paper reviews the history of AI & Law research from the perspective of argument schemes. It starts with the observation that logic, although well applicable to legal reasoning even when there is uncertainty, vagueness and disagreement, is too abstract to give a fully satisfactory classification of legal argument types. It therefore needs to be supplemented with an argument-scheme approach, which classifies arguments not according to their logical form but according to their content, in particular, according to the roles that the various elements of an argument can play. This approach is then applied to legal reasoning, to identify some of the main legal argument schemes. It is also argued that much AI & Law research in fact employs the argument-scheme approach, although it is usually not presented as such. Finally, it is argued that the argument-scheme approach, and the way it has been employed in AI & Law, respects some of the main lessons to be learnt from Toulmin's The Uses of Argument.
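The contrast between logical form and content-based classification can be sketched as a data structure: an argument scheme bundles premise roles, a presumptive conclusion, and critical questions that can defeat it. The scheme below (argument from expert opinion) is a common textbook example; the field names are illustrative, not taken from the paper.

```python
# Hypothetical representation of an argument scheme: content-based roles
# (premise slots, presumptive conclusion, critical questions), not a
# logical form. Names are illustrative, not from the reviewed paper.

from dataclasses import dataclass, field

@dataclass
class ArgumentScheme:
    name: str
    premises: list[str]
    conclusion: str
    critical_questions: list[str] = field(default_factory=list)

expert_opinion = ArgumentScheme(
    name="Argument from Expert Opinion",
    premises=["E is an expert in domain D", "E asserts that P (where P is in D)"],
    conclusion="P is presumably true",
    critical_questions=[
        "Is E a credible expert in D?",
        "Is E biased?",
        "Is P consistent with what other experts assert?",
    ],
)
print(expert_opinion.name)
```

The critical questions are what make the scheme defeasible: an affirmative answer to any of them undercuts the presumptive conclusion without contesting the premises' logical form.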
Effects of Logic-Style Explanations and Uncertainty on Users’ Decisions
The spread of innovative Artificial Intelligence (AI) algorithms assists many individuals in their daily decision-making tasks, but also in sensitive domains such as disease diagnosis and credit risk. However, a great majority of these algorithms are of a black-box nature, creating the need to make them more transparent and interpretable, along with the establishment of guidelines to help users manage these systems.
The eXplainable Artificial Intelligence (XAI) community has investigated numerous factors influencing subjective and objective metrics in the user-AI team, such as the effects of presenting AI-related information and explanations to users. Nevertheless, some factors that influence the effectiveness of explanations are still under-explored in the literature, such as user uncertainty, AI uncertainty, AI correctness, and different explanation styles.
The main goal of this thesis is to investigate the interactions between different aspects of decision-making, focusing in particular on the effects of AI and user uncertainty, AI correctness, and the explanation reasoning style (inductive, abductive, and deductive) on different data types and domains considering classification tasks. We set up three user evaluations on images, text, and time series data to analyse these factors on users' task performance, agreement with the AI suggestion, and the user’s reliance on the XAI interface elements (instance, AI prediction, and explanation).
The results for the image and text data show that user uncertainty and AI correctness on predictions significantly affected users' classification decisions on the analysed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this effect was more pronounced for text. Furthermore, the inductive-style explanations led to over-reliance on AI advice in both domains: they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles had complex effects depending on the domain and the AI uncertainty levels.
In contrast, the time series results show that specific explanation styles (abductive and deductive) improve the user's task performance in the case of high AI confidence, compared to inductive explanations. In other words, these explanation styles elicited correct decisions (both positive and negative) when the system was certain. In that condition, the agreement between the user's decision and the AI prediction confirms this finding, with a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI.
The last part of the thesis focuses on the work done with "CRS4 - Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna" on the implementation of the RIALE (Remote Intelligent Access to Lab Experiment) platform. The work aims to help students explore a DNA-sequencing experiment enriched with an AI tagging tool, which detects the objects used in the laboratory and the current phase of the experiment. Further, the interface includes an interactive timeline that enables students to explore the AI predictions of the video experiment's steps, and an XAI panel that provides explanations of the AI decisions, presented with abductive reasoning, on three levels (globally, by phase, and by frame).
We evaluated the interface with students, considering subjective cognitive effort, ease of use, the supporting information of the interface, general usability, and an interview on specific aspects of the application. The user evaluation results showed that students were positively satisfied with the interface and in favour of following didactic lessons using this tool.
Explainable AI: The new 42?
Explainable AI is not a new field. Since at least the early exploitation of C.S. Peirce's abductive reasoning in expert systems of the 1980s, there have been reasoning architectures to support an explanation function for complex AI systems, including applications in medical diagnosis, complex multi-component design, and reasoning about the real world. So explainability is at least as old as early AI, and a natural consequence of the design of AI systems. While early expert systems consisted of handcrafted knowledge bases that enabled reasoning over narrowly well-defined domains (e.g., INTERNIST, MYCIN), such systems had no learning capabilities and only primitive uncertainty handling. But the evolution of formal reasoning architectures to incorporate principled probabilistic reasoning helped address the capture and use of uncertain knowledge.
The recent and relatively rapid success of AI/machine learning solutions arises from neural network architectures. A new generation of neural methods now scales to exploit the practical applicability of statistical and algebraic learning approaches in arbitrarily high-dimensional spaces. But despite their huge successes, largely in problems which can be cast as classification problems, their effectiveness is still limited by their un-debuggability and their inability to "explain" their decisions in a human-understandable and reconstructable way. So while AlphaGo or DeepStack can crush the best humans at Go or Poker, neither program has any internal model of its task; its representations defy interpretation by humans, there is no mechanism to explain their actions and behaviour, and, furthermore, there is no obvious instructional value: these high-performance systems cannot help humans improve. Even when we understand the underlying mathematical scaffolding of current machine learning architectures, it is often impossible to get insight into the internal working of the models; we need explicit modeling and reasoning tools to explain how and why a result was achieved. We also know that a significant challenge for future AI is contextual adaptation, i.e., systems that incrementally help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.