129 research outputs found
Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction
Undersampling is a common method in Magnetic Resonance Imaging (MRI) to
reduce the number of sampled data points in k-space, shortening acquisition
times at the cost of decreased image quality. A popular approach is to employ
undersampling patterns following various strategies, e.g., variable density
sampling or radial trajectories. In this work, we propose a method that
directly learns the undersampling masks from data points, thereby also
providing task- and domain-specific patterns. To solve the resulting discrete
optimization problem, we propose a general optimization routine called ProM: A
fully probabilistic, differentiable, versatile, and model-free framework for
mask optimization that enforces acceleration factors through a convex
constraint. Analyzing knee, brain, and cardiac MRI datasets with our method, we
discover that different anatomic regions reveal distinct optimal undersampling
masks, demonstrating the benefits of using custom masks, tailored for a
downstream task. For example, ProM can create undersampling masks that maximize
performance in downstream tasks like segmentation with networks trained on
fully-sampled MRIs. Even with extreme acceleration factors, ProM yields
reasonable performance while being more versatile than existing methods, paving
the way for data-driven all-purpose mask generation.
Comment: accepted at WACV 202
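The abstract above describes a differentiable, probabilistic treatment of a discrete mask-selection problem under an acceleration-factor constraint. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' ProM implementation: the relaxed Bernoulli sampling, the renormalization used as a surrogate for the convex constraint, and the toy objective are all assumptions.

```python
# Minimal sketch of probabilistic undersampling-mask learning with an
# acceleration-factor budget, loosely inspired by the idea described above.
# Not the authors' ProM implementation.
import torch

def sample_relaxed_mask(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Draw a differentiable, Bernoulli-like soft mask via logistic noise."""
    u = torch.rand_like(logits).clamp_(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)
    return torch.sigmoid((logits + noise) / tau)

def enforce_budget(probs: torch.Tensor, acceleration: float) -> torch.Tensor:
    """Rescale inclusion probabilities so the expected number of sampled
    k-space lines matches the acceleration factor."""
    budget = probs.numel() / acceleration
    return (probs * budget / probs.sum()).clamp(max=1.0)

def toy_objective(mask: torch.Tensor) -> torch.Tensor:
    # Stand-in for a reconstruction or downstream-task loss on undersampled data.
    target = torch.linspace(1.0, 0.0, mask.numel())
    return ((mask - target) ** 2).mean()

# Learn a 1D line-selection mask over 320 k-space lines at 8x acceleration.
logits = torch.zeros(320, requires_grad=True)
opt = torch.optim.Adam([logits], lr=1e-2)
for _ in range(200):
    probs = enforce_budget(torch.sigmoid(logits), acceleration=8.0)
    mask = sample_relaxed_mask(torch.logit(probs.clamp(1e-6, 1 - 1e-6)))
    loss = toy_objective(mask)
    opt.zero_grad(); loss.backward(); opt.step()
print(probs.detach().topk(5))   # most strongly selected k-space lines
```

In a realistic setting the toy objective would be replaced by a reconstruction or downstream-task loss evaluated on undersampled MRI data.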
MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging -- design, implementation and application on the example of DCE-MRI
Many medical imaging techniques utilize fitting approaches for quantitative
parameter estimation and analysis. Common examples are pharmacokinetic modeling
in DCE MRI/CT, ADC calculations and IVIM modeling in diffusion-weighted MRI and
Z-spectra analysis in chemical exchange saturation transfer MRI. Most available
software tools are limited to a special purpose and do not allow for custom
extensions or further development. Furthermore, they are mostly designed as
stand-alone solutions based on external frameworks and thus cannot be easily
incorporated natively into an existing analysis workflow. We present a framework for
medical image fitting tasks that is included in MITK, following a rigorous
open-source, well-integrated and operating-system-independent policy. From a
software-engineering perspective, the local models, the fitting infrastructure
and the results representation are abstracted and thus can be easily adapted to any model
fitting task on image data, independent of image modality or model. Several
ready-to-use libraries for model fitting and use-cases, including fit
evaluation and visualization, were implemented. Their embedding into MITK
allows for easy data loading, pre- and post-processing and thus a natural
inclusion of model fitting into an overarching workflow. As an example, we
present a comprehensive set of plug-ins for the analysis of DCE MRI data, which
we validated on existing and novel digital phantoms, yielding competitive
deviations between fit and ground truth. Providing a very flexible environment,
our software mainly addresses developers of medical imaging software that
includes model fitting algorithms and tools. Additionally, the framework is of
high interest to users in the domain of perfusion MRI, as it offers
feature-rich, freely available, validated tools to perform pharmacokinetic
analysis on DCE MRI data, with both interactive and automatized batch
processing workflows.
Comment: 31 pages, 11 figures; URL: http://mitk.org/wiki/MITK-ModelFi
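As a rough illustration of the kind of voxel-wise fitting task such a framework abstracts, the following Python sketch fits a standard Tofts model to a simulated DCE-MRI concentration curve with nonlinear least squares (scipy). It is not MITK code; the arterial input function, noise level, and parameter ranges are made-up assumptions.

```python
# Illustrative Tofts-model NLLS fit on a toy DCE-MRI concentration curve.
# All numbers (AIF shape, noise, true parameters) are assumptions.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 60)                      # acquisition times [s]
aif = 5.0 * (t / 60.0) * np.exp(-t / 80.0)       # toy arterial input function

def tofts(t, ktrans, kep):
    """C_t(t) = Ktrans * integral of Cp(tau) * exp(-kep * (t - tau)) dtau."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

# Simulate a noisy tissue curve and recover the parameters.
true_ktrans, true_kep = 0.12, 0.01
c_t = tofts(t, true_ktrans, true_kep) + np.random.normal(0, 0.01, t.size)
(ktrans_fit, kep_fit), _ = curve_fit(tofts, t, c_t, p0=[0.05, 0.005],
                                     bounds=([0, 0], [5, 1]))
print(f"Ktrans = {ktrans_fit:.3f}, kep = {kep_fit:.4f}")
```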
Unreading Race: Purging Protected Features from Chest X-ray Embeddings
Purpose: To analyze and remove protected feature effects in chest radiograph
embeddings of deep learning models.
Materials and Methods: Orthogonalization is used to remove the
influence of protected features (e.g., age, sex, race) from chest radiograph
embeddings, ensuring feature-independent results. To validate the efficacy of
the approach, we retrospectively study the MIMIC and CheXpert datasets using
three pre-trained models, namely a supervised contrastive, a self-supervised
contrastive, and a baseline classifier model. Our statistical analysis involves
comparing the original versus the orthogonalized embeddings by estimating
protected feature influences and evaluating the ability to predict race, age,
or sex using the two types of embeddings.
Results: Our experiments reveal a significant influence of protected features
on predictions of pathologies. Applying orthogonalization removes these feature
effects. Apart from removing any influence on pathology classification, while
maintaining competitive predictive performance, orthogonalized embeddings
further make it infeasible to directly predict protected attributes and
mitigate subgroup disparities.
Conclusion: The presented work demonstrates the successful application and
evaluation of the orthogonalization technique in the domain of chest X-ray
classification.
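As a minimal sketch of the underlying technique, orthogonalization against protected attributes can be expressed as a linear projection of the embedding columns onto the complement of the attribute design matrix. The snippet below illustrates this general idea on toy data; it is not the paper's exact pipeline.

```python
# Remove the linear influence of protected attributes from embeddings by
# residualizing against a design matrix (intercept + attributes).
import numpy as np

def orthogonalize(embeddings: np.ndarray, protected: np.ndarray) -> np.ndarray:
    """embeddings: (n_samples, n_dims); protected: (n_samples, n_attrs),
    including an intercept column. Returns the residualized embeddings."""
    beta, *_ = np.linalg.lstsq(protected, embeddings, rcond=None)
    return embeddings - protected @ beta

# Toy example: 1000 samples, 128-dim embeddings, intercept + age + sex.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 128))
Z = np.column_stack([np.ones(1000), rng.normal(size=1000), rng.integers(0, 2, 1000)])
E_orth = orthogonalize(E, Z)
# After orthogonalization, each embedding dimension is uncorrelated with Z.
print(np.abs(Z.T @ E_orth).max())   # close to numerical zero
```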
A knee cannot have lung disease: out-of-distribution detection with in-distribution voting using the medical example of chest X-ray classification
Deep learning models are being applied to more and more use cases with
astonishing success stories, but how do they perform in the real world? To test
a model, a specific cleaned data set is assembled. However, when deployed in
the real world, the model will face unexpected, out-of-distribution (OOD) data.
In this work, we show that the so-called "radiologist-level" CheXnet model
fails to recognize all OOD images and classifies them as having lung disease.
To address this issue, we propose in-distribution voting, a novel method to
classify out-of-distribution images for multi-label classification. Using
independent class-wise in-distribution (ID) predictors trained on ID and OOD
data, we achieve, on average, 99 % ID classification specificity and 98 %
sensitivity, significantly improving end-to-end performance compared to
previous works on the ChestX-ray14 dataset. Our method surpasses other
output-based OOD detectors even when trained solely with ImageNet as OOD data
and tested with X-ray OOD images.
Comment: Code available at
https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-diseas
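The decision rule below is a hedged sketch of what class-wise in-distribution voting could look like: per-class ID scores are thresholded and the image is kept as in-distribution if enough classes vote for it. The thresholds, the minimum-vote rule, and the score source are illustrative assumptions rather than the paper's exact method.

```python
# Toy in-distribution voting for a multi-label chest X-ray classifier.
import numpy as np

def in_distribution_vote(id_scores: np.ndarray,
                         thresholds: np.ndarray,
                         min_votes: int = 1) -> bool:
    """id_scores: per-class in-distribution scores in [0, 1] for one image;
    thresholds: per-class decision thresholds calibrated on validation data."""
    votes = id_scores >= thresholds
    return int(votes.sum()) >= min_votes

# Example with 14 classes and made-up scores.
rng = np.random.default_rng(1)
scores = rng.uniform(size=14)
thresholds = np.full(14, 0.8)
if in_distribution_vote(scores, thresholds):
    print("treat as in-distribution -> run the multi-label pathology classifier")
else:
    print("flag as out-of-distribution")
```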
Exploring the Impact of Image Resolution on Chest X-ray Classification Performance
Deep learning models for image classification have often been trained at
reduced image resolutions for computational reasons.
This study investigates the effect of image resolution on chest X-ray
classification performance, using the ChestX-ray14 dataset.
The results show that higher image resolutions yield the best overall
classification performance, with only a slight decline in performance at
somewhat lower resolutions for most of the pathological classes.
A comparison of saliency-map-generated bounding boxes revealed that commonly
used resolutions are insufficient for finding most pathologies.
WindowNet: Learnable Windows for Chest X-ray Classification
Chest X-ray (CXR) images are commonly compressed to a lower resolution and
bit depth to reduce their size, potentially altering subtle diagnostic
features.
Radiologists use windowing operations to enhance image contrast, but the
impact of such operations on CXR classification performance is unclear.
In this study, we show that windowing can improve CXR classification
performance, and propose WindowNet, a model that learns optimal window
settings.
We first investigate the impact of bit-depth on classification performance
and find that a higher bit-depth (12-bit) leads to improved performance.
We then evaluate different windowing settings and show that training with a
distinct window generally improves pathology-wise classification performance.
Finally, we propose and evaluate WindowNet, a model that learns optimal
window settings, and show that it significantly improves performance compared
to the baseline model without windowing.
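A learnable windowing layer can be sketched as a small module with trainable window centers and widths that produces several contrast-adjusted channels from a raw high-bit-depth image. The sigmoid formulation, the number of windows, and the initial values below are assumptions for illustration, not the WindowNet architecture itself.

```python
# Sketch of a learnable windowing front-end for high-bit-depth chest X-rays.
import torch
import torch.nn as nn

class LearnableWindows(nn.Module):
    def __init__(self, n_windows: int = 3, max_value: float = 4095.0):
        super().__init__()
        # Initialize centers/widths spread over the 12-bit intensity range.
        self.center = nn.Parameter(torch.linspace(0.25, 0.75, n_windows) * max_value)
        self.width = nn.Parameter(torch.full((n_windows,), max_value / 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) raw intensities -> (batch, n_windows, H, W).
        # Soft windowing: a sigmoid centred at `center` whose slope is set by
        # `width`, so gradients flow into both parameters.
        center = self.center.view(1, -1, 1, 1)
        scale = 4.0 / self.width.view(1, -1, 1, 1)
        return torch.sigmoid((x - center) * scale)

windows = LearnableWindows()
image = torch.randint(0, 4096, (2, 1, 64, 64)).float()   # toy 12-bit CXR batch
channels = windows(image)                                 # feed into a CNN backbone
print(channels.shape)                                     # torch.Size([2, 3, 64, 64])
```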
Bayesian pharmacokinetic modeling of dynamic contrast-enhanced magnetic resonance imaging: validation and application
Tracer-kinetic analysis of dynamic contrast-enhanced magnetic resonance imaging data is commonly performed with the well-known Tofts model and nonlinear least squares (NLLS) regression. This approach yields point estimates of the model parameters; the uncertainty of these estimates can be assessed, e.g., by an additional bootstrapping analysis. Here, we present a Bayesian probabilistic modeling approach for tracer-kinetic analysis with the Tofts model, which yields posterior probability distributions of perfusion parameters and therefore promises a robust and information-enriched alternative built on a framework of probability distributions. In this manuscript, we use the Quantitative Imaging Biomarkers Alliance (QIBA) Tofts phantom to evaluate the Bayesian Tofts model (BTM) against a bootstrapped NLLS approach. Furthermore, we demonstrate how Bayesian posterior probability distributions can be employed to assess treatment response in a breast cancer DCE-MRI dataset using Cohen's d. The accuracy and precision of the BTM posterior distributions were validated and found to be in good agreement with the NLLS approach, and the assessment of therapy response with respect to uncertainty in parameter estimates was found to be excellent. In conclusion, the Bayesian modeling approach provides an elegant means to determine uncertainty via posterior distributions within a single step and provides honest information about changes in parameter estimates.
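To make the contrast with NLLS point estimates concrete, the sketch below evaluates a simple grid-based posterior over (Ktrans, kep) for a toy Tofts curve. The uniform grid prior, noise level, and arterial input function are assumptions chosen for illustration rather than the paper's inference scheme.

```python
# Grid-based posterior over Tofts parameters with a Gaussian likelihood.
import numpy as np

t = np.linspace(0, 300, 60)
aif = 5.0 * (t / 60.0) * np.exp(-t / 80.0)            # toy arterial input function
dt = t[1] - t[0]

def tofts(ktrans, kep):
    return ktrans * np.convolve(aif, np.exp(-kep * t))[: t.size] * dt

sigma = 0.05                                           # assumed noise std
observed = tofts(0.12, 0.01) + np.random.normal(0, sigma, t.size)

ktrans_grid = np.linspace(0.01, 0.3, 80)
kep_grid = np.linspace(0.002, 0.03, 80)
log_post = np.array([[-0.5 * np.sum((observed - tofts(kt, ke)) ** 2) / sigma**2
                      for ke in kep_grid] for kt in ktrans_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mean and a crude 95% credible interval for Ktrans.
ktrans_marginal = post.sum(axis=1)
mean_ktrans = np.sum(ktrans_grid * ktrans_marginal)
cdf = np.cumsum(ktrans_marginal)
ci = (ktrans_grid[np.searchsorted(cdf, 0.025)], ktrans_grid[np.searchsorted(cdf, 0.975)])
print(mean_ktrans, ci)
```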
Radiation dose and image quality of high-pitch emergency abdominal CT in obese patients using third-generation dual-source CT (DSCT)
In this third-generation dual-source CT (DSCT) study, we retrospectively investigated radiation dose and image quality of portal-venous high-pitch emergency CT in 60 patients (28 female, mean age 56 years) with a body mass index (BMI) ≥ 30 kg/m². Patients were dichotomized into group A (median BMI 31.5 kg/m²; n = 33) and group B (median BMI 36.8 kg/m²; n = 27). Volumetric CT dose index (CTDIvol), size-specific dose estimate (SSDE), dose-length product (DLP) and effective dose (ED) were assessed. Contrast-to-noise ratio (CNR) and the dose-independent figure-of-merit (FOM) CNR were calculated. Subjective image quality was assessed using a five-point scale. Mean values of CTDIvol, SSDE as well as normalized DLP and ED were 7.6 ± 1.8 mGy, 8.0 ± 1.8 mGy, 304 ± 74 mGy·cm and 5.2 ± 1.3 mSv for group A, and 12.6 ± 3.7 mGy, 11.0 ± 2.6 mGy, 521 ± 157 mGy·cm and 8.9 ± 2.7 mSv for group B (p 36.8 kg/m²
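For readers unfamiliar with the reported metrics, the snippet below spells out the usual definitions of effective dose from DLP, CNR, and the dose-independent figure of merit. The region-specific conversion coefficient and all example numbers are illustrative assumptions, not values from the study.

```python
# Typical textbook definitions of the dose and image-quality metrics above.
def effective_dose(dlp_mgy_cm: float, k: float = 0.015) -> float:
    """ED [mSv] = DLP [mGy*cm] * region-specific conversion coefficient k
    (k ~ 0.015 for the abdomen is an assumed, commonly cited value)."""
    return dlp_mgy_cm * k

def cnr(hu_roi: float, hu_reference: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio between a target ROI and a reference ROI."""
    return abs(hu_roi - hu_reference) / noise_sd

def figure_of_merit_cnr(cnr_value: float, ed_msv: float) -> float:
    """Dose-independent figure of merit: FOM = CNR^2 / ED."""
    return cnr_value ** 2 / ed_msv

ed = effective_dose(dlp_mgy_cm=500.0)                         # -> 7.5 mSv
quality = cnr(hu_roi=110.0, hu_reference=55.0, noise_sd=12.0)
print(ed, quality, figure_of_merit_cnr(quality, ed))
```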