
    Explaining Inferences in Bayesian Networks

    While a Bayesian network (BN) can achieve accurate predictions even with erroneous or incomplete evidence, explaining its inferences remains a challenge. Existing approaches fall short because they do not exploit variable interactions and cannot account for compensations during inference. This paper proposes the Explaining BN Inferences (EBI) procedure for explaining how variables interact to reach conclusions. EBI explains the value of a target node in terms of the influential nodes in the target’s Markov blanket under specific contexts, where the Markov blanket comprises the target’s parents, children, and the children’s other parents. Working back from the target node, EBI shows the derivation of each intermediate variable, and finally explains how missing and erroneous evidence values are compensated. We validated EBI on a variety of problem domains, including mushroom classification, water purification and web page recommendation. The experiments show that EBI generates high-quality, concise and comprehensible explanations for BN inferences, in particular the underlying compensation mechanism that enables BNs to outperform alternative prediction systems such as decision trees.
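    The Markov-blanket definition used by EBI (parents, children, and the children's other parents) can be illustrated with a minimal sketch; the toy network and function below are illustrative, not the paper's implementation.

```python
# Minimal sketch of Markov-blanket extraction, the node set EBI
# builds explanations from. The graph below is a made-up example.

def markov_blanket(dag, target):
    """Return the Markov blanket of `target` in a DAG given as
    {node: list of parents}: the target's parents, its children,
    and the children's other parents (co-parents)."""
    parents = set(dag[target])
    children = {n for n, ps in dag.items() if target in ps}
    co_parents = {p for c in children for p in dag[c]} - {target}
    return parents | children | co_parents

# Toy network: Season -> Rain -> WetGrass <- Sprinkler <- Season
dag = {
    "Season": [],
    "Rain": ["Season"],
    "Sprinkler": ["Season"],
    "WetGrass": ["Rain", "Sprinkler"],
}
print(sorted(markov_blanket(dag, "Rain")))
# parents: Season; children: WetGrass; co-parent: Sprinkler
```

    Conditioned on these nodes, the target is independent of the rest of the network, which is why an explanation restricted to the blanket can still be complete.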

    User-centered Development of a Clinical Decision Support System

    Scientific progress is offering increasingly better ways to tailor a patient’s treatment to the patient’s needs, i.e., better support for optimal clinical decision-making can be offered. Choosing the appropriate treatment for a patient depends on numerous factors, including pathology results, tumor stage, and genetic and molecular characteristics. Bayesian networks are a type of probabilistic artificial intelligence that would in principle be suitable for supporting complex clinical decision-making. However, most clinicians have no experience with these networks. This paper describes an approach to developing a clinical decision support system based on Bayesian networks that does not require in-depth knowledge of the underlying computational model for its use. It is developed as a therapy-oriented approach with a focus on usability and explainability. The approach features the computation and presentation of individualized treatment recommendations, comparison of treatments and patient cases, as well as explanations and visualizations providing additional information on the current patient case.

    Quantitative analysis of breast cancer diagnosis using a probabilistic modelling approach

    Background: Breast cancer is the most prevalent cancer in women in most countries of the world. Many computer-aided diagnostic methods have been proposed, but there are few studies on quantitative discovery of probabilistic dependencies among breast cancer data features and identification of the contribution of each feature to breast cancer diagnosis. Methods: This study aims to fill this void by utilizing a Bayesian network (BN) modelling approach. A K2 learning algorithm and statistical computation methods are used to construct the BN structure and assess the obtained BN model. The data used in this study were collected from a clinical ultrasound dataset derived from a local Chinese hospital and a fine-needle aspiration cytology (FNAC) dataset from the UCI machine learning repository. Results: Our study suggested that, in terms of ultrasound data, cell shape is the most significant feature for breast cancer diagnosis, and the resistance index presents a strong probabilistic dependency on blood signals. With respect to FNAC data, bare nuclei are the most important feature discriminating malignant from benign breast tumours, and uniformity of cell size and uniformity of cell shape are tightly interdependent. Contributions: The BN modelling approach can support clinicians in making diagnostic decisions based on the significant features identified by the model, especially when some other features are missing for specific patients. The approach is also applicable to other healthcare data analytics and data modelling for disease diagnosis.
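    The K2 algorithm mentioned above greedily adds parents to each node whenever doing so raises the Cooper–Herskovits (K2) score of the data. A hedged sketch of that scoring step follows; the tiny dataset, variable names ("shape", "malignant") and cardinalities are illustrative assumptions, not the study's data.

```python
# Sketch of the Cooper-Herskovits (K2) log-score used to rank
# candidate parent sets during BN structure learning.
from collections import defaultdict
from math import lgamma

def k2_log_score(data, child, parents, card):
    """Log K2 score of `child` given `parents`.
    data: list of dicts mapping variable -> discrete state (int)
    card: dict mapping variable -> number of states."""
    counts = defaultdict(lambda: defaultdict(int))
    for row in data:
        j = tuple(row[p] for p in parents)  # parent configuration
        counts[j][row[child]] += 1
    r = card[child]
    score = 0.0
    for njk in counts.values():
        nij = sum(njk.values())
        score += lgamma(r) - lgamma(nij + r)               # (r-1)! / (N_ij+r-1)!
        score += sum(lgamma(n + 1) for n in njk.values())  # prod_k N_ijk!
    return score

# Two binary features: does `shape` predict `malignant` better than nothing?
data = [{"shape": s, "malignant": m}
        for s, m in [(0, 0)] * 8 + [(1, 1)] * 7 + [(0, 1)] * 1]
card = {"shape": 2, "malignant": 2}
with_parent = k2_log_score(data, "malignant", ["shape"], card)
no_parent = k2_log_score(data, "malignant", [], card)
print(with_parent > no_parent)  # the edge shape -> malignant wins
```

    K2 repeats this comparison for each node, adding the best-scoring parent until no candidate improves the score or a parent limit is reached.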

    Classification of Explainable Artificial Intelligence Methods through Their Output Formats

    Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output format. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and by new regulations.

    Explaining inference on a population of independent agents using Bayesian networks

    The main goal of this research is to design, implement, and evaluate a novel explanation method, the hierarchical explanation method (HEM), for explaining Bayesian network (BN) inference when the network models a population of conditionally independent agents, each of which is modeled as a subnetwork. For example, consider disease-outbreak detection in which the agents are patients who are modeled as independent, conditioned on the factors that cause disease spread. Given evidence about these patients, such as their symptoms, suppose that the BN system infers that a respiratory anthrax outbreak is highly likely. A public-health official who received such a report would generally want to know why anthrax is being given a high posterior probability. The HEM explains such inferences. The explanation approach is applicable in general to inference on BNs that model conditionally independent agents; it complements previous approaches for explaining inference on BNs that model a single agent (e.g., for explaining the diagnostic inference for a single patient using a BN that models just that patient). The hypotheses that were tested are: (1) the proposed explanation method provides information that helps a user to understand how and why the inference results have been obtained, and (2) the proposed explanation method helps to improve the quality of the inferences that users draw from evidence.
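    The conditional-independence assumption described above is what makes population-level inference tractable: given the outbreak variable, each patient contributes an independent likelihood factor to the posterior. A minimal sketch of that factorization follows; all probabilities and the single-symptom model are made-up illustrative numbers, not the paper's network.

```python
# Sketch of posterior inference over a population of conditionally
# independent agents: P(Outbreak | evidence) is proportional to
# P(Outbreak) times a product of per-patient likelihood factors.

def outbreak_posterior(prior, p_sym_outbreak, p_sym_none, patients):
    """patients: list of booleans, True if that patient shows the symptom."""
    like_out, like_none = prior, 1.0 - prior
    for has_symptom in patients:
        like_out *= p_sym_outbreak if has_symptom else 1 - p_sym_outbreak
        like_none *= p_sym_none if has_symptom else 1 - p_sym_none
    return like_out / (like_out + like_none)  # normalize over the two states

# 10 patients, 8 with respiratory symptoms: even a 1% prior on an
# outbreak is overwhelmed by the accumulated per-patient evidence.
post = outbreak_posterior(prior=0.01,
                          p_sym_outbreak=0.7,
                          p_sym_none=0.1,
                          patients=[True] * 8 + [False] * 2)
print(round(post, 4))
```

    An explanation method such as HEM can then attribute the high posterior to the individual factors in this product, e.g., pointing at the symptomatic patients that moved the posterior the most.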

    ICE-B 2010: Proceedings of the International Conference on e-Business

    The International Conference on e-Business, ICE-B 2010, aims at bringing together researchers and practitioners who are interested in e-Business technology and its current applications. The mentioned technology relates not only to lower-level technological issues, such as technology platforms and web services, but also to higher-level issues, such as context awareness and enterprise models, as well as the peculiarities of the different possible applications of such technology. These are all areas of theoretical and practical importance within the broad scope of e-Business, whose growing importance can be seen from the increasing interest of the IT research community. The areas of the current conference are: (i) e-Business applications; (ii) Enterprise engineering; (iii) Mobility; (iv) Business collaboration and e-Services; (v) Technology platforms. Contributions vary from research-driven to more practically oriented, reflecting innovative results in the mentioned areas. ICE-B 2010 received 66 submissions, of which 9% were accepted as full papers. Additionally, 27% were presented as short papers and 17% as posters. All papers presented at the conference venue were included in the SciTePress Digital Library. Revised best papers are published by Springer-Verlag in a CCIS Series book.