A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality
Quality control (QC) of medical images is essential to ensure that downstream
analyses such as segmentation can be performed successfully. Currently, QC is
predominantly performed visually at significant time and operator cost. We aim
to automate the process by formulating a probabilistic network that estimates
uncertainty through a heteroscedastic noise model, hence providing a proxy
measure of task-specific image quality that is learnt directly from the data.
By augmenting the training data with different types of simulated k-space
artefacts, we propose a novel cascading CNN architecture based on a
student-teacher framework to decouple sources of uncertainty related to
different k-space augmentations in an entirely self-supervised manner. This
enables us to predict separate uncertainty quantities for the different types
of data degradation. While the uncertainty measures reflect the presence and
severity of image artefacts, the network also provides the segmentation
predictions given the quality of the data. We show that models trained with
simulated artefacts provide informative measures of uncertainty on real-world
images, and we validate our uncertainty predictions on problematic images
identified by human raters.
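The abstract does not spell out the form of the heteroscedastic noise model. As a rough illustration of the general technique (learned loss attenuation in the style of Kendall and Gal), the sketch below has the network predict a per-voxel log-variance alongside its segmentation logits; all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def heteroscedastic_ce_loss(logits, log_var, target, num_samples=10):
    """Learned loss attenuation for segmentation (a sketch, not the paper's code).

    logits:  (B, C, H, W) mean class scores predicted by the network.
    log_var: (B, 1, H, W) predicted log-variance of the logit noise.
    target:  (B, H, W)    integer class labels.
    """
    std = torch.exp(0.5 * log_var)                # per-voxel noise scale
    loss = 0.0
    for _ in range(num_samples):                  # Monte Carlo samples of noisy logits
        noisy_logits = logits + std * torch.randn_like(logits)
        loss = loss + F.cross_entropy(noisy_logits, target)
    return loss / num_samples
```

Voxels the model considers unreliable (for example, those corrupted by simulated k-space artefacts) can attenuate their loss by predicting a larger variance, so the log-variance map acts as the task-specific quality proxy described above. The paper's student-teacher cascade would decouple separate maps per artefact type; this sketch collapses them into one.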
Simulation of Brain Resection for Cavity Segmentation Using Self-Supervised and Semi-Supervised Learning
Resective surgery may be curative for drug-resistant focal epilepsy, but only
40% to 70% of patients achieve seizure freedom after surgery. Retrospective
quantitative analysis could elucidate patterns in resected structures and
patient outcomes to improve resective surgery. However, the resection cavity
must first be segmented on the postoperative MR image. Convolutional neural
networks (CNNs) are the state-of-the-art image segmentation technique, but
require large amounts of annotated data for training. Annotation of medical
images is a time-consuming process that requires highly trained raters and
often suffers from high inter-rater variability. Self-supervised learning can be
used to generate training instances from unlabeled data. We developed an
algorithm to simulate resections on preoperative MR images. We curated a new
dataset, EPISURG, comprising 431 postoperative and 269 preoperative MR images
from 431 patients who underwent resective surgery. In addition to EPISURG, we
used three public datasets comprising 1813 preoperative MR images for training.
We trained a 3D CNN on artificially resected images created on the fly during
training, using images from 1) EPISURG, 2) public datasets and 3) both. To
evaluate the trained models, we calculated the Dice score (DSC) between model
segmentations and 200 manual annotations performed by three human raters. The
model trained on data with manual annotations obtained a median (interquartile
range) DSC of 65.3 (30.6). The DSC of our best-performing model, trained with
no manual annotations, was 81.7 (14.2). For comparison, inter-rater agreement
between human annotators was 84.0 (9.9). We demonstrate a training method for
CNNs using simulated resection cavities that can accurately segment real
resection cavities without manual annotations.
Comment: 13 pages, 6 figures, accepted at the International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020.
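The resection-simulation algorithm itself is not detailed in the abstract. The toy sketch below only illustrates the self-supervised recipe it implies: carve a synthetic cavity out of a preoperative volume and use the cavity mask as the training label. The spherical geometry and noisy fill are placeholder assumptions, not the authors' method.

```python
import numpy as np

def simulate_resection(image, radius_range=(10, 30), rng=None):
    """Create an (image, label) pair by removing a synthetic cavity.

    image: 3D NumPy array holding a preoperative MR volume.
    Returns a "postoperative" image and a binary cavity mask.
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = np.array(image.shape)
    center = rng.integers(shape // 4, 3 * shape // 4)   # keep the cavity roughly central
    radius = rng.uniform(*radius_range)

    grid = np.indices(image.shape)                       # voxel coordinates
    dist = np.sqrt(((grid - center[:, None, None, None]) ** 2).sum(axis=0))
    cavity = dist < radius                               # binary cavity mask (the label)

    resected = image.astype(np.float32)
    fill = rng.normal(image.min(), image.std() * 0.05, size=int(cavity.sum()))
    resected[cavity] = fill                              # crude CSF-like fill
    return resected, cavity.astype(np.uint8)
```

In the paper the artificial resections are generated on the fly during training so that every iteration sees a different cavity; a sphere with a noisy fill is only a stand-in for whatever geometric and intensity model is actually used.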
TorchIO: a Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning
Processing of medical images such as MRI or CT presents unique challenges
compared to RGB images typically used in computer vision. These include a lack
of labels for large datasets, high computational costs, and the need to handle
metadata describing the physical properties of voxels. Data augmentation is used to
artificially increase the size of the training datasets. Training with image
patches decreases the need for computational power. Spatial metadata needs to
be carefully taken into account in order to ensure a correct alignment of
volumes.
We present TorchIO, an open-source Python library to enable efficient
loading, preprocessing, augmentation and patch-based sampling of medical images
for deep learning. TorchIO follows the style of PyTorch and integrates standard
medical image processing libraries to efficiently process images during
training of neural networks. TorchIO transforms can be composed, reproduced,
traced and extended. We provide multiple generic preprocessing and augmentation
operations as well as simulation of MRI-specific artifacts.
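As a concrete illustration of the composable transforms, artifact simulation and patch-based sampling described above, here is a minimal sketch of a typical pipeline. Class names follow the public TorchIO documentation, but the exact signatures should be treated as assumptions that may vary between versions, and the file paths are placeholders.

```python
import torch
import torchio as tio

# A Subject groups images that share the same physical space.
subject = tio.Subject(
    t1=tio.ScalarImage('t1.nii.gz'),     # placeholder paths
    seg=tio.LabelMap('seg.nii.gz'),
)

# Preprocessing and MRI-specific augmentations, composed like PyTorch transforms.
transform = tio.Compose([
    tio.RescaleIntensity(out_min_max=(0, 1)),
    tio.RandomAffine(),
    tio.RandomMotion(),                  # simulates k-space motion artifacts
    tio.RandomBiasField(),               # simulates bias field inhomogeneity
])

dataset = tio.SubjectsDataset([subject], transform=transform)

# Patch-based sampling reduces the memory needed per training step.
sampler = tio.UniformSampler(patch_size=64)
queue = tio.Queue(dataset, max_length=100, samples_per_volume=10, sampler=sampler)
loader = torch.utils.data.DataLoader(queue, batch_size=4)

for batch in loader:
    inputs = batch['t1'][tio.DATA]       # image patches as PyTorch tensors
    targets = batch['seg'][tio.DATA]     # corresponding label patches
```

The same random transform parameters are applied to every image in a subject, which is how the spatial metadata mentioned above is used to keep volumes and labels aligned.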
Source code, comprehensive tutorials and extensive documentation for TorchIO
can be found at https://github.com/fepegar/torchio. The package can be
installed from the Python Package Index by running 'pip install torchio'. It
includes a command-line interface which allows users to apply transforms to
image files without using Python. Additionally, we provide a graphical
interface within a TorchIO extension in 3D Slicer to visualize the effects of
transforms.
TorchIO was developed to help researchers standardize medical image
processing pipelines and allow them to focus on their deep learning experiments.
It encourages open science, as it supports reproducibility and is version
controlled so that the software can be cited precisely. Due to its modularity,
the library is compatible with other frameworks for deep learning with medical
images.
Comment: Submitted to Computer Methods and Programs in Biomedicine. 27 pages,
7 figures. Documentation for TorchIO can be found at http://torchio.rtfd.io