Deep learning for organ segmentation in radiotherapy: federated learning, contour propagation, and domain adaptation
External radiotherapy treats cancer by pointing a source of radiation (either photons or protons) at a patient who is lying on a couch. While it is used in more than half of all cancer patients, this treatment suffers from two major shortcomings. First, the target sometimes receives less radiation dose than prescribed, and healthy organs receive more of it. Although some dose to healthy organs is inevitable (since the beam must enter the body), part of it is due to poor management of anatomical variations during treatment. As a consequence, the tumor can fail to be controlled (possibly leading to decreased quality of life or even death) and secondary cancers can be induced in the healthy organs. Second, the slowness of treatment planning escalates healthcare costs and reduces doctors' face-to-face time with their patients.

Coupled with steady improvement in the quality of the medical images used for treatment planning and monitoring, deep learning promises to offer fast and personalized treatment for all cancer patients sent to radiotherapy. Over the past few years, computation capabilities, as well as digitization and labeling of images, have been increasing rapidly. Deep learning, a brain-inspired statistical model, now has the potential to identify targets and healthy organs on medical images with unprecedented speed and accuracy. This thesis focuses on three aspects: slice interpolation, CBCT transfer, and multi-centric data gathering.

The treatment planning image (called computed tomography, or CT) is volumetric, i.e., it consists of a stack of slices (2D images) of the patient's body. The current radiotherapy workflow requires contouring the target and healthy organs on all slices manually, a time-consuming process. While commercial suites propose fully automated contouring with deep learning, their use for contour propagation remains unexplored. In this thesis, we propose a semi-automated approach to propagate the contours from one slice to another. The medical doctor therefore needs to contour only a few slices of the CT, and those contours are automatically propagated to the other slices. This accelerates treatment planning (while maintaining acceptable accuracy) by allowing neural networks to propagate knowledge efficiently.

In radiotherapy, the dose is not delivered at once but in several small doses called fractions. The poorly measured anatomical variation between fractions (e.g., due to bladder and rectal filling and voiding) hampers dose conformity. This can be mitigated with the Cone Beam CT (CBCT), an image acquired before each fraction which can be considered a low-contrast CT. Today, targets and organs at risk can be identified on this image with registration, a model making assumptions about the nature of the anatomical variations between CT and CBCT. However, this method fails when these assumptions are not met (e.g., in the case of large deformations). In contrast, deep learning makes few assumptions. Instead, it is a flexible model that is calibrated on large databases. More specifically, it requires annotated CBCTs for training, and those labels are time-consuming to produce. Fortunately, large databases of contoured CTs exist, since contouring CTs has been part of the workflow for decades. To leverage such databases, we propose cross-domain data augmentation, a method for training neural networks to identify targets and healthy organs on CBCT using many annotated CTs and only a few annotated CBCTs.
Since contouring a few CBCTs may already be challenging for some hospitals, we investigate two other methods, domain adversarial networks and intensity-based data augmentation, that do not require any annotations for the CBCTs. All these methods rely on the principle of sharing information between the two image modalities (CT and CBCT).

Finally, training and validating deep neural networks often requires large, multi-centric databases. These are difficult to collect due to technical and legal challenges, as well as inadequate incentives for hospitals to collaborate. To address these issues, we apply TCLearn, a federated Byzantine agreement framework, to our use case. This framework is shown to share knowledge between hospitals efficiently.

(FSA - Sciences de l'ingénieur) -- UCL, 202
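TCLearn couples federated learning with a Byzantine agreement step that validates candidate models before they are accepted. As a rough illustration of the underlying knowledge-sharing idea only (not TCLearn's actual API), a single federated-averaging round could be sketched in PyTorch as follows; all names and the weighting scheme are hypothetical.

```python
import copy
import torch
import torch.nn as nn

def fedavg(global_model: nn.Module, local_states: list, weights: list) -> nn.Module:
    """Weighted average of client parameters into the global model (plain FedAvg).

    TCLearn additionally validates candidate models through Byzantine
    agreement before accepting them; that step is omitted here.
    """
    total = float(sum(weights))
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(
            (w / total) * state[key].float() for state, w in zip(local_states, weights)
        )
    global_model.load_state_dict(new_state)
    return global_model

# Hypothetical round: three hospitals fine-tune private copies of the model
# on their own scans; only the resulting parameters leave each hospital.
global_model = nn.Linear(4, 2)  # stand-in for a segmentation network
local_states = [copy.deepcopy(global_model.state_dict()) for _ in range(3)]
weights = [120, 80, 40]  # e.g., number of annotated scans per hospital
global_model = fedavg(global_model, local_states, weights)
```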
Improving 3D U-Net robustness against JPEG 2000 compression for male pelvic organ segmentation
Organ segmentation is an essential process in medical imaging for treatment planning and monitoring. In the case of large deformations, classical segmentation algorithms based on image processing and atlases fail. In such situations, deep learning provides better solutions. However, training deep learning networks requires a large quantity of images. The availability of these training data demands substantial data transfer and storage, for which image compression is mandatory. However, the data distortions caused by compression can influence deep learning performance. In this work, we study the impact of JPEG 2000 compression on 2D and 3D U-Net segmentation of the male pelvic organs. We show that using a fine-tuned 3D U-Net would allow patient scans to be compressed twice as much as with a 2D U-Net for the same segmentation performance.
Improved 3D U-Net robustness against JPEG 2000 compression for male pelvic organ segmentation in radiotherapy
Purpose: Automation of organ segmentation, via convolutional neural networks (CNNs), is key to facilitating the work of medical practitioners by ensuring that the adequate radiation dose is delivered to the target area while avoiding harmful exposure of healthy organs. The issue with CNNs is that they require large amounts of data transfer and storage, which makes the use of image compression a necessity. Compression will affect image quality, which in turn affects the segmentation process. We address the dilemma involved in handling large amounts of data while preserving segmentation accuracy. Approach: We analyze and improve 2D and 3D U-Net robustness against JPEG 2000 compression for male pelvic organ segmentation. We conduct three experiments on 56 cone beam computed tomography (CBCT) and 74 CT scans targeting bladder and rectum segmentation. The two objectives of the experiments are to compare the compression robustness of the 2D versus the 3D U-Net and to improve the 3D U-Net's compression tolerance via fine-tuning. Results: We show that a 3D U-Net is 50% more robust to compression than a 2D U-Net. Moreover, by fine-tuning the 3D U-Net, we can double its compression tolerance compared to a 2D U-Net. Furthermore, we determine that fine-tuning the network to a compression ratio of 64:1 ensures its flexibility to be used at compression ratios equal to or lower than that. Conclusions: We reduce the potential risk involved in using image compression on automated organ segmentation. We demonstrate that a 3D U-Net can be fine-tuned to handle high compression ratios while preserving segmentation accuracy.
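The Results above follow a compress-decompress-segment protocol. A minimal sketch of that loop, assuming the glymur JPEG 2000 bindings and a hypothetical `segment` function standing in for the trained 3D U-Net (here a mere threshold), compresses each axial slice at a fixed ratio and scores the outcome with the Dice similarity coefficient. It illustrates the protocol, not the paper's code.

```python
import numpy as np
import glymur  # JPEG 2000 bindings (assumed installed, with OpenJPEG)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)

def jp2k_roundtrip(volume: np.ndarray, ratio: int) -> np.ndarray:
    """Lossy JPEG 2000 round-trip, slice by slice, at a given compression ratio."""
    out = np.empty_like(volume)
    for i, axial in enumerate(volume):
        glymur.Jp2k("tmp.jp2", data=axial.astype(np.uint16), cratios=[ratio])
        out[i] = glymur.Jp2k("tmp.jp2")[:]
    return out

def segment(volume: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the (fine-tuned) 3D U-Net segmenter."""
    return volume > volume.mean()  # placeholder thresholding, not a real model

scan = np.random.randint(0, 2000, size=(8, 64, 64)).astype(np.uint16)  # fake volume
reference = segment(scan)
for ratio in (1, 4, 16, 64, 128):  # 64:1 is the fine-tuning point discussed above
    degraded = jp2k_roundtrip(scan, ratio)
    print(f"{ratio}:1  DSC = {dice(segment(degraded), reference):.3f}")
```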
Cross-domain data augmentation for deep-learning-based male pelvic organ segmentation in cone beam CT
For prostate cancer patients, large organ deformations occurring between radiotherapy treatment sessions create uncertainty about the doses delivered to the tumor and surrounding healthy organs. Segmenting those regions on cone beam CT (CBCT) scans acquired on treatment day would reduce such uncertainties. In this work, a 3D U-Net deep-learning architecture was trained to segment the bladder, rectum, and prostate on CBCT scans. Due to the scarcity of contoured CBCT scans, the training set was augmented with CT scans already contoured in the current clinical workflow. Our network was then tested on 63 CBCT scans. The Dice similarity coefficient (DSC) increases significantly with the number of CBCT and CT scans in the training set, reaching 0.874, 0.814, and 0.758 for the bladder, rectum, and prostate, respectively. This is about 10% better than conventional approaches based on deformable image registration between planning CT and treatment CBCT scans, except for the prostate. Interestingly, adding 74 CT scans to the CBCT training set made it possible to maintain high DSCs while halving the number of CBCT scans. Hence, our work shows that although CBCT scans include artifacts, cross-domain augmentation of the training set is effective and can rely on the large datasets available for planning CT scans.
Cross-Domain Data Augmentation for Deep-Learning-Based Male Pelvic Organ Segmentation in Cone Beam CT
For prostate cancer patients, large organ deformations occurring between radiotherapy treatment sessions create uncertainty about the doses delivered to the tumor and surrounding healthy organs. Segmenting those regions on cone beam CT (CBCT) scans acquired on treatment day would reduce such uncertainties. In this work, a 3D U-Net deep-learning architecture was trained to segment the bladder, rectum, and prostate on CBCT scans. Due to the scarcity of contoured CBCT scans, the training set was augmented with CT scans already contoured in the current clinical workflow. Our network was then tested on 63 CBCT scans. The Dice similarity coefficient (DSC) increased significantly with the number of CBCT and CT scans in the training set, reaching 0.874 ± 0.096, 0.814 ± 0.055, and 0.758 ± 0.101 for the bladder, rectum, and prostate, respectively. This was about 10% better than conventional approaches based on deformable image registration between planning CT and treatment CBCT scans, except for the prostate. Interestingly, adding 74 CT scans to the CBCT training set made it possible to maintain high DSCs while halving the number of CBCT scans. Hence, our work showed that although CBCT scans included artifacts, cross-domain augmentation of the training set was effective and could rely on the large datasets available for planning CT scans.
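Mechanically, the cross-domain augmentation amounts to pooling the two annotated sets into one training set, so that the scarce CBCT contours are supplemented by the plentiful CT ones. A minimal PyTorch sketch, with random tensors standing in for real (scan, mask) pairs and illustrative patch shapes:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholders for real data: many annotated CTs, fewer annotated CBCTs.
# Shapes are illustrative only (3D patches with one channel).
ct_scans, ct_masks = torch.randn(74, 1, 32, 64, 64), torch.zeros(74, 1, 32, 64, 64)
cbct_scans, cbct_masks = torch.randn(63, 1, 32, 64, 64), torch.zeros(63, 1, 32, 64, 64)

ct_set = TensorDataset(ct_scans, ct_masks)
cbct_set = TensorDataset(cbct_scans, cbct_masks)

# Cross-domain augmentation: the 3D U-Net sees both modalities during
# training, so the abundant CT contours supplement the scarce CBCT ones.
train_set = ConcatDataset([cbct_set, ct_set])
loader = DataLoader(train_set, batch_size=2, shuffle=True)
```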
Domain adversarial networks and intensity-based data augmentation for male pelvic organ segmentation in cone beam CT
In radiation therapy, a CT image is used to manually delineate the organs and plan the treatment. During the treatment, a cone beam CT (CBCT) is often acquired to monitor the anatomical modifications. For this purpose, automatic organ segmentation on CBCT is a crucial step. However, manual segmentations on CBCT are scarce, and models trained with CT data do not generalize well to CBCT images. We investigate adversarial networks and intensity-based data augmentation, two strategies leveraging large databases of annotated CTs to train neural networks for segmentation on CBCT. Adversarial networks consist of a 3D U-Net segmenter and a domain classifier. The proposed framework is aimed at encouraging the learning of filters that produce more accurate segmentations on CBCT. Intensity-based data augmentation consists of modifying the training CT images to reduce the gap between the CT and CBCT distributions. The proposed adversarial networks reach DSCs of 0.787, 0.447, and 0.660 for the bladder, rectum, and prostate, respectively, an improvement over the DSCs of 0.749, 0.179, and 0.629 for "source only" training. Our brightness-based data augmentation reaches DSCs of 0.837, 0.701, and 0.734, which outperforms the Morphons registration algorithm for the bladder (0.813) and rectum (0.653) while performing similarly on the prostate (0.731). The proposed adversarial training framework can be used for any segmentation application where training and test distributions differ. Our intensity-based data augmentation can be used for CBCT segmentation to help achieve the prescribed dose on the target and lower the dose delivered to healthy organs.
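Domain adversarial networks of this kind are commonly built around a gradient reversal layer between the segmenter's features and the domain classifier, and intensity-based augmentation around random brightness and contrast perturbations of the CT patches. The PyTorch sketch below illustrates both ingredients under those common formulations; the layer sizes and perturbation ranges are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scaled, sign-flipped gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam: float = 1.0):
    return GradReverse.apply(x, lam)

# Domain classifier head: predicts CT vs CBCT from segmenter features.
# The reversed gradient pushes the segmenter toward domain-invariant filters.
domain_head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 2))

features = torch.randn(2, 16, 8, 16, 16, requires_grad=True)  # stand-in U-Net features
domain_logits = domain_head(grad_reverse(features))

def intensity_augment(ct: torch.Tensor) -> torch.Tensor:
    """Random brightness/contrast shift on a CT patch (illustrative ranges)."""
    gain = 1.0 + 0.2 * (2 * torch.rand(1) - 1)  # contrast factor in [0.8, 1.2]
    bias = 0.1 * (2 * torch.rand(1) - 1)        # brightness offset in [-0.1, 0.1]
    return gain * ct + bias
```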
Using planning CTs to enhance CNN-based bladder segmentation on Cone Beam CT
For prostate cancer patients, large organ deformations occurring between the sessions of a fractionated radiotherapy treatment lead to uncertainties in the doses delivered to the tumour and the surrounding organs at risk. The segmentation of those structures in cone beam CT (CBCT) volumes acquired before every treatment session is desired to reduce those uncertainties. In this work, we perform a fully automatic bladder segmentation of CBCT volumes with U-Net, a 3D fully convolutional neural network (FCN). Since annotations are hard to collect for CBCT volumes, we consider augmenting the training dataset with annotated CT volumes and show that it improves the segmentation performance.

Our network is trained and tested on 48 annotated CBCT volumes using a 6-fold cross-validation scheme. The network reaches a mean Dice similarity coefficient (DSC) of 0.801 ± 0.137 with 32 training CBCT volumes. This result improves to 0.848 ± 0.085 when the training set is augmented with 64 CT volumes. The segmentation accuracy increases both with the number of CBCT and CT volumes in the training set. As a comparison, the state-of-the-art deformable image registration (DIR) contour propagation between planning CT and daily CBCT available in RayStation reaches a DSC of 0.744 ± 0.144 on the same dataset, which is below our FCN result.