19,059 research outputs found

    Sensitivity of Machine Learning Approaches to Fake and Untrusted Data in Healthcare Domain

    Machine Learning models are susceptible to attacks such as noise injection, privacy invasion, replay, false data injection, and evasion, which affect their reliability and trustworthiness. Evasion attacks, which probe a trained model to identify and exploit its vulnerabilities, and poisoning attacks, which yield skewed models whose behavior can be steered by submitting specific inputs, remain a severe and open issue for critical domains and systems that rely on ML-based or other AI solutions, such as healthcare and justice. In this study, we performed a comprehensive analysis of the sensitivity of Artificial Intelligence approaches to corrupted data in order to evaluate their reliability and resilience. Such systems need to recognize what is wrong, determine how to overcome the resulting problems, and then leverage what they have learned to improve their robustness. The main research goal was to evaluate the sensitivity and responsiveness of Artificial Intelligence algorithms to poisoned signals by comparing several models fed with both trusted and corrupted data. A case study from the healthcare domain supports the analysis. The results of the experimental campaign were evaluated in terms of accuracy, specificity, sensitivity, F1-score, and ROC area.
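    A minimal sketch of the kind of sensitivity experiment the abstract describes: train the same classifier on trusted versus label-flipped (poisoned) training data and compare the reported metrics. The dataset (scikit-learn's breast-cancer data), model, and flip rates below are illustrative assumptions, not the paper's actual healthcare case study.

```python
# Minimal sketch of a label-flipping (poisoning) sensitivity experiment.
# Dataset, model, and flip rates are illustrative stand-ins only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)

def evaluate(flip_rate):
    """Train on labels flipped at `flip_rate` and score on the clean test set."""
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_poisoned), int(flip_rate * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]                    # flip 0 <-> 1
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_poisoned)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    return {"flip_rate": flip_rate,
            "accuracy": accuracy_score(y_te, pred),
            "sensitivity": recall_score(y_te, pred),         # true positive rate
            "specificity": tn / (tn + fp),                   # true negative rate
            "f1": f1_score(y_te, pred),
            "roc_auc": roc_auc_score(y_te, proba)}

for rate in (0.0, 0.1, 0.2, 0.4):                            # trusted vs. corrupted data
    print(evaluate(rate))
```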

    Dehumanization, Disability, and Eugenics

    This paper explores the relationship between eugenics, disability, and dehumanization, with a focus on forms of eugenics beyond Nazi eugenics.

    Machine Learning and Knowledge: Why Robustness Matters

    Trusting machine learning algorithms requires having confidence in their outputs. Confidence is typically interpreted in terms of model reliability, where a model is reliable if it produces a high proportion of correct outputs. However, model reliability does not address concerns about the robustness of machine learning models, such as models relying on the wrong features or variations in performance across contexts. I argue that the epistemic dimension of trust can instead be understood through the concept of knowledge, where the trustworthiness of an algorithm depends on whether its users are in a position to know that its outputs are correct. Knowledge requires beliefs to be formed for the right reasons and to be robust to error, so machine learning algorithms can only provide knowledge if they work well across counterfactual scenarios and if they make decisions based on the right features. This, I argue, can explain why we should care about model properties like interpretability, causal shortcut independence, and distribution shift robustness even if such properties are not required for model reliability.
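    To make the reliability/robustness distinction concrete, here is a hedged sketch (not from the paper) of how one might probe performance under a distribution shift: a model can look reliable on held-out in-distribution data while degrading on a shifted slice. The synthetic data and the median-split "shift" are illustrative assumptions only.

```python
# Hedged sketch: in-distribution reliability vs. robustness under a synthetic
# distribution shift. The median split on one feature is only an illustrative
# stand-in for a real covariate shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
shift_mask = X[:, 0] > np.median(X[:, 0])             # crude covariate shift
X_src, y_src = X[~shift_mask], y[~shift_mask]         # "training domain"
X_shift, y_shift = X[shift_mask], y[shift_mask]       # "deployment domain"

X_tr, X_id, y_tr, y_id = train_test_split(X_src, y_src, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

print("in-distribution accuracy:", accuracy_score(y_id, model.predict(X_id)))
print("shifted accuracy:        ", accuracy_score(y_shift, model.predict(X_shift)))
```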

    Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline

    From medical charts to national censuses, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, deriving accurate and informative insights requires more than the ability to execute machine learning models; a deeper understanding of the data on which the models are run is imperative for their success. While significant effort has been devoted to developing models able to process the volume of data obtained from millions of digitized patient records, volume represents only one aspect of the data. Drawing on an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter highlights these challenges and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion of the data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.
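    As an illustration of the preprocessing phase mentioned above, the sketch below builds a small pipeline that handles two common attributes of healthcare data: missing values and mixed numeric/categorical fields. The column names, toy records, and choice of scikit-learn components are assumptions made for the example, not material from the chapter.

```python
# Illustrative sketch of a preprocessing + modelling pipeline for messy,
# mixed-type clinical records. All column names and records are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

records = pd.DataFrame({
    "age": [67, 54, None, 73, 61, 49],
    "systolic_bp": [142, None, 128, 155, 133, 121],
    "sex": ["F", "M", "F", None, "M", "F"],
    "admission_type": ["emergency", "elective", "emergency",
                       "emergency", "elective", "elective"],
    "readmitted_30d": [1, 0, 0, 1, 0, 0],               # toy outcome label
})

numeric = ["age", "systolic_bp"]
categorical = ["sex", "admission_type"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

clf = Pipeline([("prep", preprocess), ("model", LogisticRegression())])
clf.fit(records[numeric + categorical], records["readmitted_30d"])
print(clf.predict_proba(records[numeric + categorical])[:, 1])
```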

    When and How to Fool Explainable Models (and Humans) with Adversarial Examples

    Reliable deployment of machine learning models such as neural networks continues to be challenging due to several limitations. Some of the main shortcomings are the lack of interpretability and the lack of robustness against adversarial examples or out-of-distribution inputs. In this paper, we explore the possibilities and limits of adversarial attacks on explainable machine learning models. First, we extend the notion of adversarial examples to fit explainable machine learning scenarios, in which the inputs, the output classifications, and the explanations of the model's decisions are assessed by humans. Next, we propose a comprehensive framework to study whether (and how) adversarial examples can be generated for explainable models under human assessment, introducing novel attack paradigms. In particular, our framework considers a wide range of relevant (yet often ignored) factors, such as the type of problem, the user's expertise, or the objective of the explanations, in order to identify the attack strategies that should be adopted in each scenario to successfully deceive the model (and the human). These contributions are intended to serve as a basis for a more rigorous and realistic study of adversarial examples in the field of explainable machine learning.
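    For context, the sketch below shows the classic evasion attack that this line of work builds on: a gradient-sign (FGSM-style) perturbation that flips a classifier's prediction. It only attacks the prediction of a plain logistic-regression model; the paper's setting, where the explanation and a human assessor must also be fooled, is not reproduced here. The dataset and epsilon are illustrative assumptions.

```python
# Minimal sketch of an FGSM-style evasion attack on logistic regression.
# Only the prediction is attacked; explanations and human assessment are out
# of scope for this illustration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=0.5):
    """Perturb x by eps in the sign of the loss gradient for `label`."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))        # predicted P(y = 1)
    grad = (p - label) * w                        # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad)

x0, y0 = X[0], y[0]
x_adv = fgsm(x0, y0)
print("clean prediction:      ", model.predict([x0])[0], "true label:", y0)
print("adversarial prediction:", model.predict([x_adv])[0])
```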

    Diagnostic error increases mortality and length of hospital stay in patients presenting through the emergency room

    Background: Diagnostic errors occur frequently, especially in the emergency room. Estimates of the consequences of diagnostic error vary widely, and little is known about the factors predicting error. Our objectives thus were to determine the rate of discrepancy between diagnoses at hospital admission and discharge in patients presenting through the emergency room, the discrepancies’ consequences, and factors predicting them. Methods: Prospective observational clinical study combined with a survey in a university-affiliated tertiary care hospital. Patients’ hospital discharge diagnosis was compared with the diagnosis at hospital admittance through the emergency room and classified as similar or discrepant according to a predefined scheme by two independent expert raters. Generalized linear mixed-effects models were used to estimate the effect of diagnostic discrepancy on mortality and length of hospital stay and to determine whether characteristics of patients, diagnosing physicians, and context predicted diagnostic discrepancy. Results: 755 consecutive patients (322 [42.7%] female; mean age 65.14 years) were included. The discharge diagnosis differed substantially from the admittance diagnosis in 12.3% of cases. Diagnostic discrepancy was associated with a longer hospital stay (mean 10.29 vs. 6.90 days; Cohen’s d 0.47; 95% confidence interval 0.26 to 0.70; P = 0.002) and increased patient mortality (8 [8.60%] vs. 25 [3.78%]; OR 2.40; 95% CI 1.05 to 5.5; P = 0.038). A factor available at admittance that predicted diagnostic discrepancy was the diagnosing physician’s assessment that the patient presented atypically for the diagnosis assigned (OR 3.04; 95% CI 1.33–6.96; P = 0.009). Conclusions: Diagnostic discrepancies are a relevant healthcare problem in patients admitted through the emergency room because they occur in every ninth patient and are associated with increased in-hospital mortality. Discrepancies are not readily predictable by fixed patient or physician characteristics; attention should focus on context.
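    A hedged sketch of the kind of analysis reported above: estimating the odds ratio of in-hospital mortality for discrepant versus concordant admission diagnoses. The data are synthetic, and a plain logistic regression stands in for the study's generalized linear mixed-effects models (no physician-level random effects); none of the numbers reproduce the study's results.

```python
# Synthetic illustration only: odds ratio of mortality for diagnostic
# discrepancy via logistic regression (the study used mixed-effects models).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 755
df = pd.DataFrame({
    "discrepant": rng.binomial(1, 0.123, n),        # ~12.3% discrepancy rate, as in the study
    "age": rng.normal(65, 15, n),
})
# Synthetic outcome: low baseline mortality, higher odds when discrepant.
logit_p = -3.2 + 0.9 * df["discrepant"] + 0.01 * (df["age"] - 65)
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("died ~ discrepant + age", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```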

    Domain-independent exception handling services that increase robustness in open multi-agent systems

    Title from cover. "May 2000." Includes bibliographical references (p. 17-23). By Mark Klein and Chrysanthos Dellarocas.