Toward Transparent Sequence Models with Model-Based Tree Markov Model
In this study, we address the interpretability issue in complex, black-box
Machine Learning models applied to sequence data. We introduce the Model-Based
Tree Hidden Semi-Markov Model (MOB-HSMM), an inherently interpretable model
aimed at detecting high-mortality-risk events and discovering hidden patterns
associated with mortality risk in Intensive Care Units (ICUs). This model
leverages knowledge distilled from Deep Neural Networks (DNNs) to enhance
predictive performance while offering clear explanations. Our experimental
results show that Model-Based trees (MOB trees) improve when an LSTM is
employed to learn sequential patterns, which are then transferred to the MOB
trees. Integrating MOB trees with the Hidden Semi-Markov Model (HSMM) in the
MOB-HSMM makes it possible to uncover potential and explainable sequences from
the available information.
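The distillation step described in the abstract — a sequence model supervising a simpler, interpretable tree — can be sketched in miniature. This is an illustrative toy only, assuming nothing about the authors' actual MOB-HSMM pipeline: a smooth "teacher" score stands in for the LSTM, and a one-split threshold "student" stands in for a MOB tree. All data, names, and functions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature, noisy hard labels.
X = rng.normal(size=(200, 1))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

def teacher(x):
    """Hypothetical teacher: a smooth sigmoid risk score (stand-in for the LSTM)."""
    return 1.0 / (1.0 + np.exp(-4.0 * x[:, 0]))

soft = teacher(X)  # soft targets used for distillation

def stump_loss(t):
    """Squared error of a one-threshold student fit to the soft targets."""
    right = X[:, 0] > t
    if right.all() or not right.any():
        return np.inf  # degenerate split: all points on one side
    pred = np.where(right, soft[right].mean(), soft[~right].mean())
    return float(np.mean((pred - soft) ** 2))

# Distillation: pick the split that best matches the teacher's soft scores,
# rather than fitting the noisy hard labels directly.
thresholds = np.linspace(-2.0, 2.0, 81)
best_t = min(thresholds, key=stump_loss)
acc = float(np.mean((X[:, 0] > best_t) == (y == 1)))
```

The design point carried over from the abstract is only this: the student is trained against the teacher's soft predictions, so the smooth sequential knowledge is transferred into a structure a human can read.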
Predicting the Effectiveness of Medical Interventions
This dissertation explores several conceptual and methodological features of medical science that influence our ability to accurately predict medical effectiveness. Making reliable predictions about the effectiveness of medical treatments is crucial to mitigating death and disease and improving individual and population health, yet generating such predictions is fraught with difficulties. Each chapter deals with a unique challenge to predictions of medical effectiveness.
In Chapter 1, I describe and analyze the principles underlying three prominent approaches to physical disease classification—the etiological, symptom-based, and pathophysiological models—and suggest a broadly pragmatic approach whereby appropriate classifications depend on the goal in question. In line with this, I argue that particular features of the pathophysiological model, such as its focus on disease mechanisms, make it most relevant for predicting medical effectiveness.
Chapter 2 explores the debate between those who argue that statistical evidence is sufficient for inferring medical effectiveness and those who argue that we require both statistical and mechanistic evidence. I focus on the question of how mechanistic and statistical evidence can be integrated. I highlight some of the challenges facing formal techniques, such as Bayesian networks, and use Toulmin’s model of argumentation to offer a complementary model of evidence amalgamation, which allows for the systematic integration of statistical and mechanistic evidence.
In Chapter 3, I focus on p-hacking, an application of analytic techniques that may lead to exaggerated experimental results. I use philosophical tools from decision theory to illustrate how severe the effects of p-hacking can be. While it is typically considered epistemically questionable and practically harmful, I appeal to the argument from inductive risk to defend the view that there are some contexts in which p-hacking may be warranted.
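How severe p-hacking can be is easy to quantify in one standard case: an analyst who runs several analyses and reports only the smallest p-value. This simulation is my illustration, not material from the dissertation; it uses the textbook fact that under the null hypothesis a p-value is uniform on (0, 1).

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, k, trials = 0.05, 20, 10_000

# Under the null, each test's p-value is Uniform(0, 1).  A "p-hacker"
# runs k analyses and reports only the smallest p-value.
p = rng.uniform(size=(trials, k))
fpr_single = float(np.mean(p[:, 0] < alpha))        # honest single test
fpr_hacked = float(np.mean(p.min(axis=1) < alpha))  # best of k analyses

# Analytic inflation: P(min p < alpha) = 1 - (1 - alpha)^k
expected = 1 - (1 - alpha) ** k
```

With alpha = 0.05 and k = 20 the analytic false-positive rate rises from 5% to roughly 64% — the kind of magnitude that makes the inductive-risk question in the chapter bite.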
Chapter 4 draws attention to a particular set of biases plaguing medical research: meta-biases. I argue that biases of this type, such as publication bias and sponsorship bias, lead to exaggerated clinical trial results. I then offer a framework, the bias dynamics model, that corrects for the influence of meta-biases on estimations of medical effectiveness.
In Chapter 5, I argue against the prominent view that AI models are not explainable by showing how four familiar accounts of scientific explanation can be applied to neural networks. The confusion about explaining AI models is due to the conflation of ‘explainability’, ‘understandability’, and ‘interpretability’. To remedy this, I offer a novel account of AI-interpretability, according to which an interpretation is something one does to an explanation with the explicit aim of producing another, more understandable, explanation.
The Oppenheimer Memorial Trust
Department of History and Philosophy of Science, Cambridge University
Is Machine Learning Unsafe and Irresponsible in Social Sciences? Paradoxes and Reconsidering from Recidivism Prediction Tasks
The paper addresses some fundamental and hotly debated issues in high-stakes
event prediction underpinning the computational approach to the social
sciences. We question several prevalent views against machine learning and
outline a new paradigm that highlights its promise and promotes the
integration of computational methods with conventional social science
approaches.
Secure and robust machine learning for healthcare: A survey
Recent years have witnessed widespread adoption of machine learning (ML)/deep learning (DL) techniques due to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which are traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results showing that ML/DL models are vulnerable to adversarial attacks. In this paper, we present an overview of various application areas in healthcare that leverage such techniques, from a security and privacy point of view, and present the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into the current research challenges and promising directions for future research.
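The adversarial vulnerability the survey refers to can be shown in its simplest form. The sketch below is not from the survey: it uses a hypothetical linear risk model, and for a linear score the gradient with respect to the input is just the weight vector, so an FGSM-style attack perturbs the input along the sign of the weights. Every name and number here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear "diagnostic" model: risk = sigmoid(w . x + b).
w = rng.normal(size=16)
b = 0.0

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model scores as low risk (placed just on the
# negative side of the decision boundary, plus a little noise).
x = -0.2 * w / np.linalg.norm(w) + 0.01 * rng.normal(size=16)
clean = float(predict(x))

# FGSM-style perturbation: step along sign(gradient of the score w.r.t. x),
# which for this linear model is simply sign(w).
eps = 0.15
x_adv = x + eps * np.sign(w)
attacked = float(predict(x_adv))
```

A small, bounded change to every input coordinate (here at most 0.15 per feature) is enough to push the score across the decision boundary — the same mechanism, at toy scale, that makes deployed medical ML models a security concern.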