138 research outputs found

    A Neutrosophic Clinical Decision-Making System for Cardiovascular Diseases Risk Analysis

    Get PDF
    Cardiovascular diseases are the leading cause of death worldwide. Early diagnosis of heart disease allows treatment to begin in time and can reduce this large number of deaths. Many decision-making systems have been developed, but they are too complex for medical professionals. To address these issues, we develop an explainable neutrosophic clinical decision-making system for the timely diagnosis of cardiovascular disease risk. We make our system transparent and easy to understand with the help of explainable artificial intelligence techniques so that medical professionals can easily adopt it. Our system takes thirty-five symptoms as input parameters: gender, age, genetic disposition, smoking, blood pressure, cholesterol, diabetes, body mass index, depression, unhealthy diet, metabolic disorder, physical inactivity, pre-eclampsia, rheumatoid arthritis, coffee consumption, pregnancy, rubella, drugs, tobacco, alcohol, heart defect, previous surgery/injury, thyroid, sleep apnea, atrial fibrillation, heart history, infection, homocysteine level, pericardial cysts, Marfan syndrome, syphilis, inflammation, clots, cancer, and electrolyte imbalance. From these it estimates the risk of coronary artery disease, cardiomyopathy, congenital heart disease, heart attack, heart arrhythmia, peripheral artery disease, aortic disease, pericardial disease, deep vein thrombosis, heart valve disease, and heart failure. The system has five main modules: neutrosophication, knowledge base, inference engine, de-neutrosophication, and explainability. To demonstrate the complete working of our system, we design an algorithm and calculate its time complexity. We also present a new de-neutrosophication formula and compare our results with those of existing methods.
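    The abstract does not reproduce the paper's membership functions, rule base, or new de-neutrosophication formula, so the sketch below only illustrates the general pipeline using a standard single-valued neutrosophic triple (truth, indeterminacy, falsity) and one score function that is common in the neutrosophic literature; all thresholds, rules, and function names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    """Single-valued neutrosophic triple: truth, indeterminacy, falsity in [0, 1]."""
    t: float
    i: float
    f: float

def neutrosophicate(reading: float, low: float, high: float) -> NeutrosophicValue:
    # Hypothetical membership mapping from a raw symptom reading to a
    # neutrosophic triple; the paper's actual membership functions are
    # not specified in the abstract.
    t = min(max((reading - low) / (high - low), 0.0), 1.0)
    i = 0.5 * (1.0 - abs(2.0 * t - 1.0))   # most indeterminate mid-range
    return NeutrosophicValue(t=t, i=i, f=1.0 - t)

def de_neutrosophicate(v: NeutrosophicValue) -> float:
    # One common score function from the neutrosophic literature,
    # NOT the paper's new formula: s = (2 + T - I - F) / 3.
    return (2.0 + v.t - v.i - v.f) / 3.0

def disease_risk(symptom_values: list[NeutrosophicValue]) -> float:
    # Toy inference rule: a disease's risk is the minimum score over
    # its triggering symptoms (a stand-in for the real rule base).
    return min(de_neutrosophicate(v) for v in symptom_values)

bp = neutrosophicate(150, low=90, high=180)     # blood pressure reading
chol = neutrosophicate(240, low=150, high=300)  # cholesterol reading
print(f"coronary artery disease risk ~ {disease_risk([bp, chol]):.2f}")
```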

    Comparison of nomogram with machine learning techniques for prediction of overall survival in patients with tongue cancer

    Get PDF
    Background: The prediction of overall survival in tongue cancer is important for planning personalized care and patient counselling. Objectives: This study compares the performance of a nomogram with a machine learning model to predict overall survival in tongue cancer. The nomogram and machine learning model were built using a large data set from the Surveillance, Epidemiology, and End Results (SEER) program database. The comparison is necessary to provide clinicians with a comprehensive, practical, and maximally accurate assistive system to predict overall survival in this patient population. Methods: The data set comprised the records of 7596 tongue cancer patients. The machine learning algorithms considered were logistic regression, support vector machine, Bayes point machine, boosted decision tree, decision forest, and decision jungle. These algorithms were evaluated mainly in terms of the area under the receiver operating characteristic (ROC) curve (AUC) and accuracy. The performance of the best-performing algorithm was compared with the nomogram for predicting overall survival in tongue cancer patients. Results: The boosted decision-tree algorithm outperformed the other algorithms. When compared with the nomogram on external validation data, the boosted decision tree produced an accuracy of 88.7% while the nomogram showed an accuracy of 60.4%. In addition, patient age, T stage, radiotherapy, and surgical resection were found to be the most prominent features, with significant influence on the machine learning model's prediction of overall survival. Conclusion: The machine learning model provides more personalized and reliable prognostic information for tongue cancer than the nomogram. However, the transparency the nomogram offers in estimating patients' outcomes inspires more confidence and strengthens the principle of shared decision making between patient and clinician. Therefore, a combined nomogram-machine learning (NomoML) predictive model may help to improve care, provide information to patients, and facilitate clinicians in making tongue cancer management decisions.
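    The abstract does not state the implementation platform, so the sketch below reproduces only the evaluation idea: training a gradient-boosted tree ensemble and a logistic regression and comparing them by AUC and accuracy, using scikit-learn with synthetic data as a stand-in for the SEER records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the SEER tongue-cancer records (7596 patients,
# features such as age, T stage, radiotherapy, surgical resection).
X, y = make_classification(n_samples=7596, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "boosted decision tree": GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]   # probability of the positive class
    print(f"{name}: AUC={roc_auc_score(y_te, proba):.3f}, "
          f"accuracy={accuracy_score(y_te, model.predict(X_te)):.3f}")
```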

    Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification

    Get PDF
    Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly used to make important predictions in critical environments, and the danger is that they make and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI) methods that separate explanations from machine learning models have emerged, but they have shortcomings in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement on the importance of endowing deep learning models with explanatory capabilities so that they can themselves provide an answer to why a particular prediction was made. First, we address the problem of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present Greybox XAI, a framework that composes a DNN and a transparent model through the use of a symbolic Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output similar to the KB used by the transparent model. Once the two models are trained independently, they are used compositionally to form an explainable predictive model. We show that this new architecture is accurate and explainable on several datasets.
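    A minimal sketch of the compositional greybox idea under stated assumptions: an opaque neural model learns to map inputs to KB-style symbolic attributes, and a transparent logistic regression maps those attributes to the class, so the final decision can be read off the attribute weights. The toy data, the attribute semantics, and the use of an MLP in place of the paper's encoder-decoder are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: 500 flattened "images", 5 binary KB attributes per image,
# and a class label that is fully determined by those attributes.
X_img = rng.normal(size=(500, 64))
W = rng.normal(size=(64, 5))
attributes = (X_img @ W > 0).astype(float)       # symbolic KB ground truth
y = (attributes.sum(axis=1) >= 3).astype(int)    # class derived from attributes

# Stage 1 (opaque): a neural model learns image -> KB attributes,
# standing in for the paper's encoder-decoder on RGB images.
perception = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X_img, attributes)

# Stage 2 (transparent): logistic regression over KB attributes; its
# per-attribute coefficients make the final prediction inspectable.
reasoner = LogisticRegression().fit(attributes, y)

# Composed greybox prediction: image -> predicted attributes -> class.
pred_attrs = np.clip(perception.predict(X_img), 0.0, 1.0)
print("composed accuracy:", (reasoner.predict(pred_attrs) == y).mean())
print("attribute weights:", reasoner.coef_.round(2))
```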

    Explainable temporal data mining techniques to support the prediction task in Medicine

    Get PDF
    In the last decades, the increasing amount of data available in all fields has raised the need to discover new knowledge and to explain the hidden information found. On one hand, the rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, results to users. In the biomedical informatics and computer science communities, there is considerable discussion about the "un-explainable" nature of artificial intelligence, where algorithms and systems often leave users, and even developers, in the dark with respect to how results were obtained. Especially in the biomedical context, the need to explain an artificial intelligence system's results is legitimized by the importance of patient safety. On the other hand, current database systems enable us to store huge quantities of data. Their analysis through data mining techniques provides the possibility to extract relevant knowledge and useful hidden information. Relationships and patterns within these data could provide new medical knowledge. The analysis of such healthcare/medical data collections could greatly help to observe the health conditions of the population and to extract useful information that can be exploited in the assessment of healthcare/medical processes. In particular, the prediction of medical events is essential for preventing disease, understanding disease mechanisms, and increasing patient quality of care. In this context, an important aspect is to verify whether the database content supports the capability of predicting future events. In this thesis, we start by addressing the problem of explainability, discussing some of the most significant challenges that need to be addressed with scientific and engineering rigor in a variety of biomedical domains. We analyze the "temporal component" of explainability, focusing on different perspectives such as the use of temporal data, the temporal task, temporal reasoning, and the dynamics of explainability with respect to the user perspective and to knowledge. Starting from this panorama, we focus our attention on two different temporal data mining techniques. The first is based on trend abstractions: starting from the concept of Trend-Event Pattern and moving through the concept of prediction, we propose a new kind of predictive temporal pattern, namely Predictive Trend-Event Patterns (PTE-Ps). The framework aims to combine complex temporal features to extract a compact and non-redundant predictive set of patterns composed of such temporal features. The second is based on functional dependencies: we propose a methodology for deriving a new kind of approximate temporal functional dependency, called Approximate Predictive Functional Dependencies (APFDs), built on a three-window framework. We then discuss the concept of approximation and the data complexity of deriving an APFD, introduce two new error measures, and finally assess the quality of APFDs in terms of coverage and reliability. Exploiting these methodologies, we analyze intensive care unit data from the MIMIC dataset.
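    The PTE-P and APFD machinery is not detailed in the abstract; the snippet below sketches only the first building block, trend abstraction, segmenting a numeric series (such as a lab value over an ICU stay) into increasing/decreasing/steady intervals that downstream pattern mining could consume. The slope threshold and the interval labels are illustrative assumptions.

```python
def trend_abstraction(times, values, eps=0.1):
    """Abstract a numeric time series into (start, end, trend) intervals,
    where trend is 'I' (increasing), 'D' (decreasing), or 'S' (steady).
    The eps slope threshold is an illustrative choice."""
    intervals = []
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        slope = (v1 - v0) / (t1 - t0)
        trend = "I" if slope > eps else "D" if slope < -eps else "S"
        if intervals and intervals[-1][2] == trend:
            intervals[-1] = (intervals[-1][0], t1, trend)  # merge adjacent runs
        else:
            intervals.append((t0, t1, trend))
    return intervals

# Hourly creatinine-like values for one stay (toy numbers, not MIMIC data).
print(trend_abstraction([0, 1, 2, 3, 4, 5], [1.0, 1.3, 1.6, 1.6, 1.2, 0.9]))
# -> [(0, 2, 'I'), (2, 3, 'S'), (3, 5, 'D')]
```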

    Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine

    Full text link
    Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. (1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. (2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, in a classification task based on images acquired on different acquisition hardware. (3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the centralized learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
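    A minimal federated-averaging sketch of the parameter-exchange scheme described above, under the assumption of a simple logistic model: each site runs gradient steps on its own private data, and only the resulting weights travel to a coordinator that averages them. This is the generic FedAvg pattern, not any specific medical system.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One site's training round: logistic-regression gradient steps on
    local data. Only the resulting weights leave the site, never X or y."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three hospitals with private data drawn from the same underlying model.
w_true = np.array([1.5, -2.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ w_true + rng.normal(scale=0.1, size=200) > 0).astype(float)
    sites.append((X, y))

w_global = np.zeros(3)
for _ in range(20):                            # federated averaging rounds
    local_weights = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)  # server sees parameters only

print("recovered weights:", w_global.round(2))
```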

    Solving the Explainable AI Conundrum: How to Bridge the Gap Between Clinicians' Needs and Developers' Goals

    Full text link
    Explainable AI (XAI) is considered the number one solution for overcoming implementation hurdles of AI/ML in clinical practice. However, it is still unclear how clinicians and developers interpret XAI (differently) and whether building such systems is achievable or even desirable. This longitudinal multi-method study queries clinicians and developers (n=112) as they co-developed the DCIP, an ML-based prediction system for Delayed Cerebral Ischemia. The resulting framework reveals that ambidexterity between exploration and exploitation can help bridge opposing goals and requirements to improve the design and implementation of AI/ML in healthcare.