Measurement of the top quark pair production cross section in proton-antiproton collisions at √s = 1.96 TeV: hadronic top decays with the D0 detector
Of the six quarks in the standard model the top quark is by far the heaviest: 35 times more massive than its partner, the bottom quark, and more than 130 times heavier than the average of the other five quarks. Its correspondingly large decay width means it tends to decay before forming a bound state. Of all quarks, therefore, the top is the least affected by quark confinement, behaving almost as a free quark. Since in the standard model top quarks couple almost exclusively to bottom quarks (t → Wb), top quark decays provide a window on the standard model through the direct measurement of the Cabibbo-Kobayashi-Maskawa quark mixing matrix element Vtb. Conversely, any deficit of top quark decays into W bosons could imply the existence of decay channels beyond the standard model, for example charged Higgs bosons as expected in two-Higgs-doublet models: t → H+b. This thesis sets out to measure the top-antitop quark pair production cross section at a center-of-mass energy of √s = 1.96 TeV in the fully hadronic decay channel. The analysis is performed on 1 fb⁻¹ of Tevatron Run IIa data taken with the D0 detector between July 2002 and February 2006. A neural network is used to identify jets from b-quarks, and a likelihood ratio method is used to separate signal from background. To avoid reliance on possibly imperfect Monte Carlo models of the QCD background, the background was modelled using a dedicated data sample. The tt̄ signal was modelled using the Alpgen and Pythia Monte Carlo event generators. The generated signal sample was passed through the full, Geant-based D0 detector simulation and reconstructed using the default D0 reconstruction software.
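The likelihood-ratio discriminant mentioned in the abstract is a standard separation technique: each event's observables are scored under signal and background probability densities, and the normalized ratio serves as the separating variable. Below is a minimal Python sketch of the general method, not the D0 analysis code; the toy Gaussian densities and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio_discriminant(x, pdf_sig, pdf_bkg):
    """Generic likelihood-ratio discriminant D = L_sig / (L_sig + L_bkg).

    D approaches 1 for signal-like events and 0 for background-like
    events, so cutting on D separates the two samples.
    """
    ls = pdf_sig(x)  # signal likelihood per event
    lb = pdf_bkg(x)  # background likelihood per event
    return ls / (ls + lb)

# Toy example with one Gaussian-distributed observable per event
# (the shapes are illustrative, not the analysis's actual densities).
pdf_sig = lambda x: norm.pdf(x, loc=1.0, scale=0.5)
pdf_bkg = lambda x: norm.pdf(x, loc=0.0, scale=1.0)

events = np.array([-0.5, 0.2, 0.9, 1.4])
print(likelihood_ratio_discriminant(events, pdf_sig, pdf_bkg))
```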
Multimodal Machine Learning for 30-Days Post-Operative Mortality Prediction of Elderly Hip Fracture Patients
Interpreting and Correcting Medical Image Classification with PIP-Net
Part-prototype models are explainable-by-design image classifiers, and a promising alternative to black box AI. This paper explores the applicability and potential of interpretable machine learning, in particular PIP-Net, for automated diagnosis support on real-world medical imaging data. PIP-Net learns human-understandable prototypical image parts and we evaluate its accuracy and interpretability for fracture detection and skin cancer diagnosis. We find that PIP-Net's decision making process is in line with medical classification standards, while only provided with image-level class labels. Because of PIP-Net's unsupervised pretraining of prototypes, data quality problems such as undesired text in an X-ray or labelling errors can be easily identified. Additionally, we are the first to show that humans can manually correct the reasoning of PIP-Net by directly disabling undesired prototypes. We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.
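The prototype-disabling correction described above can be pictured with a small sketch. Assuming a PIP-Net-style head, where a vector of prototype presence scores feeds a linear layer mapping prototypes to classes, disabling a prototype amounts to zeroing its outgoing weights. The module names and sizes below are illustrative assumptions, not the released PIP-Net API.

```python
import torch
import torch.nn as nn

# Assumed PIP-Net-style head: prototype presence scores feed a linear
# layer whose weight[class, prototype] links each prototype to each class.
num_prototypes, num_classes = 512, 2
classification_layer = nn.Linear(num_prototypes, num_classes, bias=False)

def disable_prototype(layer: nn.Linear, prototype_idx: int) -> None:
    """Remove a prototype's influence by zeroing its outgoing class weights."""
    with torch.no_grad():
        layer.weight[:, prototype_idx] = 0.0

# E.g. a prototype found to fire on embedded X-ray text rather than anatomy:
disable_prototype(classification_layer, prototype_idx=42)

scores = torch.rand(1, num_prototypes)  # dummy prototype presence scores
logits = classification_layer(scores)   # prototype 42 no longer contributes
```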
Feature Importance to Explain Multimodal Prediction Models: A Clinical Use Case
Surgery to treat elderly hip fracture patients may cause complications that can lead to early mortality. An early warning system for complications could prompt clinicians to monitor high-risk patients more closely and address potential complications early, or inform the patient. In this work, we develop a multimodal deep-learning model for post-operative mortality prediction using pre-operative and per-operative data from elderly hip fracture patients. Specifically, the pre-operative data include static patient data and hip and chest images taken before surgery; the per-operative data include vital signals and medications administered during surgery. We extract features from the image modalities using ResNet and from the vital signals using an LSTM. Explainable model outcomes are essential for clinical applicability; therefore, we compute Shapley values to explain the predictions of our multimodal black-box model. We find that i) Shapley values can be used to estimate the relative contribution of each modality both locally and globally, and ii) a modified version of the chain rule can be used to propagate Shapley values through a sequence of models, supporting interpretable local explanations. Our findings imply that a multimodal combination of black-box models can be explained by propagating Shapley values through the model sequence.
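Finding i) above, estimating per-modality contributions from feature-level Shapley values, can be sketched as follows. This is a minimal illustration assuming a simple fused-feature classifier and the shap library's KernelExplainer; the dummy model, feature counts, and modality boundaries are assumptions, and the paper's chain-rule propagation through the ResNet/LSTM extractors (finding ii) is not reproduced here.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # fused feature vector per patient (assumed)
y = (X[:, 0] + X[:, 5] > 0).astype(int)   # dummy mortality label
fusion_model = LogisticRegression().fit(X, y)

# Assumed modality boundaries: features 0-3 static data, 4-6 image
# features, 7-9 vital-signal features.
modalities = {"static": range(0, 4), "image": range(4, 7), "vitals": range(7, 10)}

# Kernel SHAP on the positive-class probability of the fusion model.
f = lambda x: fusion_model.predict_proba(x)[:, 1]
explainer = shap.KernelExplainer(f, shap.sample(X, 50))
phi = explainer.shap_values(X[:5])        # Shapley values, shape (5, 10)

# Aggregate absolute Shapley values per modality to rank contributions.
for name, idx in modalities.items():
    print(f"{name:7s} total |phi| = {np.abs(phi[:, list(idx)]).sum():.3f}")
```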
Radiology report generation for proximal femur fractures using deep classification and language generation models
- …