
    A distance measure of interval-valued belief structures

    Interval-valued belief structures generalize belief function theory by extending basic belief assignments from crisp numbers to interval numbers. Distance measures have long been essential tools in belief function theory, with applications such as conflicting-evidence combination, clustering analysis, and belief function approximation. Researchers have paid much attention to the topic and proposed many kinds of distance measures. However, few works have addressed distance measures for interval-valued belief structures. In this paper, we propose a method to measure the distance between interval belief functions. The method is based on an interval-valued one-dimensional Hausdorff distance and the Jaccard similarity coefficient. We prove that the measure satisfies non-negativity, non-degeneracy, symmetry and the triangle inequality. Numerical examples illustrate the validity of the proposed distance.
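    The two ingredients named in the abstract are standard and easy to state. A minimal sketch follows; the paper's exact way of combining these quantities over the focal elements of two belief structures is not reproduced here.

```python
def hausdorff_1d(x, y):
    """One-dimensional Hausdorff distance between intervals x=(a1,b1), y=(a2,b2):
    max(|a1 - a2|, |b1 - b2|)."""
    (a1, b1), (a2, b2) = x, y
    return max(abs(a1 - a2), abs(b1 - b2))

def jaccard(x, y):
    """Jaccard similarity of two intervals: |x intersect y| / |x union y|."""
    (a1, b1), (a2, b2) = x, y
    inter = max(0.0, min(b1, b2) - max(a1, a2))
    union = (b1 - a1) + (b2 - a2) - inter
    return inter / union if union > 0 else 1.0  # two identical degenerate points
```

Intuitively, the Hausdorff term measures how far apart the interval endpoints are, while the Jaccard term rewards overlap; a distance built from both can distinguish intervals that are close but disjoint from intervals that overlap.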

    An online belief rule-based group clinical decision support system

    Around ten percent of patients admitted to National Health Service (NHS) hospitals have experienced a patient safety incident, and an important reason for the high rate of patient safety incidents is medical errors. Research shows that appropriate increase in the use of clinical decision support systems (CDSSs) could help to reduce medical errors and result in substantial improvement in patient safety. However, several barriers continue to impede the effective implementation of CDSSs in clinical settings, among which representation of and reasoning about medical knowledge, particularly under uncertainty, are areas that require refined methodologies and techniques. In particular, the knowledge base in a CDSS needs to be updated automatically based on accumulated clinical cases to provide evidence-based clinical decision support. In this research, we employed the recently developed belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) for the design and development of an online belief rule-based group CDSS prototype. In the system, a belief rule base (BRB) was used to model uncertain clinical domain knowledge, the evidential reasoning (ER) approach was employed to build the inference engine, a BRB training module was developed for learning the BRB from accumulated clinical cases, and an online discussion forum together with an ER-based group preference aggregation tool were developed to provide online clinical group decision support. We used a set of simulated patients with cardiac chest pain provided by our research collaborators in Manchester Royal Infirmary to validate the developed online belief rule-based CDSS prototype. The results show that the prototype can provide reliable diagnosis recommendations and that the diagnostic performance of the system can be improved significantly after training the BRB using accumulated clinical cases.
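    The combination step in the ER approach is related to Dempster's rule from belief function theory. As a rough illustration of this style of evidence combination (not the ER algorithm itself, and with hypothetical mass values), two mass functions over a frame of discernment can be combined as follows:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets. Conflicting mass (empty intersections) is
    discarded and the remainder renormalised."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

The ER approach used in the thesis is a distinct, weight-normalised scheme designed to avoid some counter-intuitive behaviours of plain Dempster combination under high conflict, but the underlying idea of intersecting focal elements and renormalising is the same.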

    Second CLIPS Conference Proceedings, volume 1

    Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging expert systems.

    Biomedical applications of belief networks

    Biomedicine is an area in which computers have long been expected to play a significant role. Although many of the early claims have proved unrealistic, computers are gradually becoming accepted in the biomedical, clinical and research environment. Within these application areas, expert systems appear to have met with the most resistance, especially when applied to image interpretation. In order to improve the acceptance of computerised decision support systems it is necessary to provide the information needed to make rational judgements concerning the inferences the system has made. This entails an explanation of what inferences were made, how the inferences were made and how the results of the inference are to be interpreted. Furthermore there must be a consistent approach to the combining of information from low level computational processes through to high level expert analyses. Until recently ad hoc formalisms were seen as the only tractable approach to reasoning under uncertainty. A review of some of these formalisms suggests that they are less than ideal for the purposes of decision making. Belief networks provide a tractable way of utilising probability theory as an inference formalism, combining the theoretical consistency of probability for inference and decision making with the ability to use the knowledge of domain experts. The potential of belief networks in biomedical applications has already been recognised, and there has been substantial research into the use of belief networks for medical diagnosis and methods for handling large, interconnected networks. In this thesis the use of belief networks is extended to include detailed image model matching to show how, in principle, feature measurement can be undertaken in a fully probabilistic way.
The belief networks employed are usually cyclic and have strong influences between adjacent nodes, so new techniques for probabilistic updating based on a model of the matching process have been developed. An object-orientated inference shell called FLAPNet has been implemented and used to apply the belief network formalism to two application domains. The first application is model-based matching in fetal ultrasound images. The imaging modality and biological variation in the subject make model matching a highly uncertain process. A dynamic, deformable model, similar to active contour models, is used. A belief network combines constraints derived from local evidence in the image with global constraints derived from trained models, to control the iterative refinement of an initial model cue. In the second application a belief network is used for the incremental aggregation of evidence occurring during the classification of objects on a cervical smear slide as part of an automated pre-screening system. A belief network provides both an explicit domain model and a mechanism for the incremental aggregation of evidence, two attributes important in pre-screening systems. Overall it is argued that belief networks combine the necessary quantitative features required of a decision support system with desirable qualitative features that will lead to improved acceptability of expert systems in the biomedical domain.
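    As a toy illustration of the kind of probabilistic updating a belief network performs, consider a single hidden cause with two conditionally independent observations. The network structure and all numbers below are hypothetical, not taken from the thesis:

```python
# Hypothetical two-symptom diagnostic network: disease -> symptom1, symptom2.
p_d = 0.01                       # prior P(disease)
p_s1 = {True: 0.9, False: 0.1}   # P(symptom1 present | disease)
p_s2 = {True: 0.7, False: 0.2}   # P(symptom2 present | disease)

def posterior(s1_obs, s2_obs):
    """P(disease | symptom observations), by enumeration over the
    single hidden node -- the simplest case of belief propagation."""
    def joint(d):
        prior = p_d if d else 1 - p_d
        l1 = p_s1[d] if s1_obs else 1 - p_s1[d]
        l2 = p_s2[d] if s2_obs else 1 - p_s2[d]
        return prior * l1 * l2
    num = joint(True)
    return num / (num + joint(False))
```

Real medical networks, and the cyclic image-matching networks described above, require far more sophisticated updating schemes, but the principle of combining a prior with independent pieces of evidence is the same.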

    Bayesian probability encoding in medical decision analysis

    Ph.D. (Doctor of Philosophy)

    Rethinking Causality, Complexity and Evidence for the Unique Patient

    This open access book is a unique resource for health professionals who are interested in understanding the philosophical foundations of their daily practice. It provides tools for untangling the motivations and rationality behind the way medicine and healthcare is studied, evaluated and practiced. In particular, it illustrates the impact that thinking about causation, complexity and evidence has on the clinical encounter. The book shows how medicine is grounded in philosophical assumptions that could at least be challenged. By engaging with ideas that have shaped the medical profession, clinicians are empowered to actively take part in setting the premises for their own practice and knowledge development. Written in an engaging and accessible style, with contributions from experienced clinicians, this book presents a new philosophical framework that takes causal complexity, individual variation and medical uniqueness as default expectations for health and illness.

    Rationality, pragmatics, and sources

    This thesis contributes to the Great Rationality Debate in cognitive science. It introduces and explores a triangular scheme for understanding the relationship between rationality and two key abilities: pragmatics – roughly, inferring implicit intended utterance meanings – and learning from sources. The thesis argues that these three components – rationality, pragmatics, and sources – should be considered together: that each one informs the others. The thesis makes this case through literature review and theoretical work (principally, in Chapters 1 and 8) and through a series of empirical chapters focusing on different parts of the triangular scheme. Chapters 2 to 4 address the relationship between pragmatics and sources, focusing on how people change their beliefs when they read a conditional with a partially reliable source. The data bear on theories of the conditional and on the literature assessing people’s rationality with conditionals. Chapter 5 addresses the relationship between rationality and pragmatics, focusing on conditionals ‘in action’ in a framing effect known as goal framing. The data suggest a complex relationship between pragmatics and utilities, and support a new approach to goal framing. Chapter 6 addresses the relationship between rationality and sources, using normative Bayesian models to explore how people respond to simple claims from sources of different reliabilities. The data support a two-way relationship between claims and source information and, perhaps most strikingly, suggest that people readily treat sources as ‘anti-reliable’: as negatively correlated with the truth. Chapter 7 extends these experiments to test the theory that speakers can guard against reputational damage using hedging. The data do not support this theory, and raise questions about whether trust and vigilance against deception are prerequisites for pragmatics. 
Lastly, Chapter 8 synthesizes the results; argues for new ways of understanding the relationships between rationality, pragmatics, and sources; and relates the findings to emerging formal methods in psychology.
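    The "anti-reliable" finding described above can be illustrated with a minimal Bayesian model (a sketch under simple assumptions, not the thesis's actual model): suppose a source asserts H with probability r when H is true and with probability 1 - r when H is false. For r < 0.5 the source is negatively correlated with the truth, and its assertion of H should lower belief in H.

```python
def posterior_after_assertion(prior, reliability):
    """P(H | source asserts H) under a simple reliability model:
    the source asserts H with prob `reliability` when H is true
    and with prob 1 - reliability when H is false."""
    num = reliability * prior
    den = num + (1 - reliability) * (1 - prior)
    return num / den
```

With a flat prior of 0.5, a reliability of 0.8 raises belief to 0.8, while an anti-reliable source with reliability 0.2 drives belief down to 0.2: the rational response to such a source is to believe the opposite of what it claims.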

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This pattern resembles human reasoning, which draws conclusions in the absence of information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three AI approaches to implementing non-monotonic models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence, so reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability.
In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while presenting robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
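    Dung-style abstract argumentation, which underlies defeasible argumentation models of the kind studied here, admits a compact sketch. The function below computes the grounded (most skeptical) extension by iterating the characteristic function to its least fixpoint; it is a generic illustration of conflict resolution among attacking arguments, not the thesis's system:

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework.
    args: set of argument ids; attacks: set of (attacker, target) pairs.
    An argument is acceptable w.r.t. a set S if S attacks all its attackers;
    iterating this from the empty set yields the grounded extension."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    ext = set()
    while True:
        new = {a for a in args
               if all(any((d, b) in attacks for d in ext)
                      for b in attackers[a])}
        if new == ext:
            return ext
        ext = new
```

For the chain A attacks B, B attacks C, the grounded extension accepts A and C: A is unattacked, A defeats B, and B's defeat reinstates C. Under mutual attack with no outside defender, the grounded semantics skeptically accepts nothing.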