
    Segmentation of head and neck tumours using modified U-net

    A new neural network for automatic head and neck cancer (HNC) segmentation from magnetic resonance imaging (MRI) is presented. The proposed neural network is based on U-net, which combines features from different resolutions to achieve end-to-end localisation and segmentation in medical images. In this work, dilated convolutions are introduced into U-net to enlarge the receptive field and extract multi-scale features. The network also uses Dice loss to reduce the imbalance between classes. The proposed algorithm is trained and tested on real MRI data. The cross-validation results show that the new network outperformed the original U-net by 5% (Dice score) on head and neck tumour segmentation.
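The Dice score and Dice loss referred to above can be illustrated with a minimal NumPy sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def soft_dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss (1 - soft Dice) on a predicted probability map.
    Less sensitive to foreground/background imbalance than voxel-wise
    cross-entropy, which is why it suits small tumour volumes."""
    intersection = (prob * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)

# Toy 2D "tumour" masks: the prediction partially overlaps the ground truth.
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1      # 16 foreground voxels
pred = np.zeros((8, 8)); pred[3:7, 3:7] = 1  # 16 voxels, 9 overlapping
print(round(dice_score(pred, gt), 4))         # → 0.5625
```

A Dice of 1 means perfect overlap; the 5% improvement quoted above is a difference in this score averaged over the test cases.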

    Fully automated clinical target volume segmentation for glioblastoma radiotherapy using a deep convolutional neural network

    Purpose: Target volume delineation is a crucial step prior to treatment planning in radiotherapy for glioblastoma. This step is performed manually, which is time-consuming and prone to intra- and inter-rater variability. Therefore, the purpose of this study was to evaluate a deep convolutional neural network (CNN) model for automatic segmentation of the clinical target volume (CTV) in glioblastoma patients. Material and methods: In this study, a modified Segmentation-Net (SegNet) model with deep supervision and a residual-based skip connection mechanism was trained on 259 glioblastoma patients from the Multimodal Brain Tumour Image Segmentation Benchmark (BraTS) 2019 Challenge dataset for segmentation of the gross tumour volume (GTV). The pre-trained CNN model was then fine-tuned on an independent clinical dataset (n = 37) to perform CTV segmentation. During fine-tuning, both CT and MRI scans were used simultaneously as input data to generate the CTV segmentation mask. The performance of the CNN model in terms of segmentation accuracy was evaluated on an independent clinical test dataset (n = 15) using the Dice Similarity Coefficient (DSC) and Hausdorff distance. The impact of the auto-segmented CTV definition on dosimetry was also analysed. Results: The proposed model achieved a DSC of 89.60 ± 3.56% and a Hausdorff distance of 1.49 ± 0.65 mm. A statistically significant difference between manually and automatically planned doses was found for the Dmin and Dmax of the CTV. Conclusions: The results of our study suggest that our CNN-based auto-contouring system can be used for segmentation of CTVs to facilitate the brain tumour radiotherapy workflow.
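The Hausdorff distance used for evaluation above measures the largest distance from a point on one contour to the nearest point on the other, so it punishes outlier errors that an overlap score can hide. A brute-force NumPy sketch for small binary masks (illustrative, not the authors' implementation):

```python
import numpy as np

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between two binary masks, computed
    brute-force over foreground voxel coordinates (fine for toy inputs;
    real pipelines use spatial data structures or distance transforms)."""
    a = np.argwhere(mask_a)  # (Na, ndim) foreground coordinates
    b = np.argwhere(mask_b)  # (Nb, ndim)
    # Pairwise Euclidean distances between the two point sets.
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    # Worst-case nearest-neighbour distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

gt = np.zeros((10, 10)); gt[2:5, 2:5] = 1
seg = np.zeros((10, 10)); seg[2:5, 3:6] = 1  # same square shifted one voxel
print(hausdorff_distance(gt, seg))            # → 1.0
```

Multiplying by the voxel spacing converts this voxel-unit result to millimetres, the unit in which the study reports its 1.49 ± 0.65 mm figure.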

    Intensity modulated radiation therapy and arc therapy: validation and evolution as applied to tumours of the head and neck, abdominal and pelvic regions

    Intensity-modulated radiotherapy (IMRT) allows better control over the dose distribution (DD) than more conventional irradiation techniques. With IMRT it is possible to achieve concave DDs and to conformally spare organs at risk. IMRT was applied clinically at UZG for a wide range of tumour sites. The application of IMRT for the irradiation of head and neck tumours (HNT) is the subject of the first part of this thesis. The planning strategy for re-irradiation and for irradiation of HNT arising from the pharynx and oral cavity is described, together with the first clinical results. IMRT for tumours of the nasal cavity and paranasal sinuses achieves local control (LC) and survival at least as good as conventional irradiation techniques, without radiation-induced blindness. IMRT thus yields a more favourable toxicity profile, but has not yet been shown to improve LC or survival. Most recurrences of HNT are seen in the region that received a high dose, indicating that this "high dose" is not sufficient to eradicate all clonogenic tumour cells. We initiated a study to test the feasibility of dose escalation guided by biological imaging. Besides the application and clinical validation of IMRT, the work for this thesis also comprised the development and clinical introduction of intensity-modulated arc therapy (IMAT). IMAT is a rotational form of IMRT (i.e. the gantry rotates during irradiation), in which intensity modulation is achieved by overlapping arcs. IMAT has some clear advantages over IMRT in certain situations: when the target volume lies concavely around a large-diameter organ at risk, IMAT effectively offers an infinite number of beam directions. 
A planning strategy for IMAT was developed, and class solutions for total abdominal irradiation and rectal irradiation were investigated and applied clinically.

    An Anatomy-aware Framework for Automatic Segmentation of Parotid Tumor from Multimodal MRI

    Magnetic Resonance Imaging (MRI) plays an important role in diagnosing parotid tumors, where accurate segmentation of tumors is highly desired for determining appropriate treatment plans and avoiding unnecessary surgery. However, the task remains nontrivial and challenging due to ambiguous boundaries and varying sizes of the tumor, as well as the presence of a large number of anatomical structures around the parotid gland that resemble the tumor. To overcome these problems, we propose a novel anatomy-aware framework for automatic segmentation of parotid tumors from multimodal MRI. First, a Transformer-based multimodal fusion network, PT-Net, is proposed in this paper. The encoder of PT-Net extracts and fuses contextual information from three MRI modalities from coarse to fine, to obtain cross-modality and multi-scale tumor information. The decoder stacks the feature maps of the different modalities and calibrates the multimodal information using a channel attention mechanism. Second, considering that the segmentation model is prone to being misled by similar anatomical structures into wrong predictions, we design an anatomy-aware loss. By calculating the distance between the activation regions of the predicted segmentation and the ground truth, our loss function forces the model to distinguish the tumor from similar anatomical structures and make correct predictions. Extensive experiments with MRI scans of parotid tumors showed that our PT-Net achieved higher segmentation accuracy than existing networks. The anatomy-aware loss outperformed state-of-the-art loss functions for parotid tumor segmentation. Our framework can potentially improve the quality of preoperative diagnosis and surgery planning of parotid tumors. Comment: under review.
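The abstract does not detail the decoder's channel attention mechanism; a generic squeeze-and-excitation-style recalibration, shown here as an assumed NumPy sketch with illustrative weights, captures the core idea of reweighting feature channels by a learned gate:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation-style channel recalibration.
    feats: (C, H, W) feature maps; w1: (C//r, C) and w2: (C, C//r) are
    the learned bottleneck weights (r is the reduction ratio)."""
    squeeze = feats.mean(axis=(1, 2))                     # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # per-channel gate in (0, 1)
    return feats * excite[:, None, None]                  # rescale each channel map

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))        # 8 channels of 4x4 features
w1 = rng.standard_normal((2, 8))              # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feats, w1, w2)
print(out.shape)  # → (8, 4, 4)
```

Stacked multimodal feature maps would enter as the channel axis here, so the gate learns which modality's features to emphasise at each decoder stage.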

    A generative adversarial network approach to synthetic-CT creation for MRI-based radiation therapy

    Master's thesis (mestrado integrado), Biomedical Engineering and Biophysics (Radiation in Diagnosis and Therapy), Universidade de Lisboa, Faculdade de Ciências, 2019. This project presents the application of a generative adversarial network (GAN) to the creation of synthetic computed tomography (sCT) scans from volumetric T1-weighted magnetic resonance imaging (MRI), for dose calculation in MRI-based radiotherapy workflows. A 3-dimensional GAN for MRI-to-CT synthesis was developed based on a 2-dimensional architecture for image-content transfer. Co-registered CT and T1-weighted MRI scans of the head region were used for training. Tuning of the network was performed with a 7-fold cross-validation method on 42 patients. A second dataset of 12 patients was used as the hold-out set for final validation. The performance of the GAN was assessed with image quality metrics, and a dosimetric evaluation was performed for 33 patients by comparing dose distributions calculated on true and synthetic CT, for photon and proton therapy plans. sCT generation time was <30 s per patient. The mean absolute error (MAE) between sCT and CT on the cross-validation dataset was 69 ± 10 HU, corresponding to a 20% decrease in error compared to training on the original 2D GAN. Quality metric results did not differ statistically for the hold-out dataset (p = 0.09). Higher errors were observed for air and bone voxels, and registration errors between CT and MRI decreased the performance of the algorithm. Dose deviations at the target were within 2% for the photon beams; for the proton plans, 21 patients showed dose deviations under 2%, while 12 had deviations between 2% and 8%. Pass rates (2%/2 mm) between dose distributions were higher than 98% and 94% for photon and proton plans, respectively. The results compare favorably with published algorithms and the method shows potential for MRI-guided clinical workflows. 
Special attention should be given when beams cross small structures and airways, and further adjustments to the algorithm should be made to increase performance for these regions.
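The 2%/2 mm pass rates quoted above come from a gamma analysis; as a simplified illustration of the idea, the sketch below applies only the dose-difference criterion (no spatial 2 mm search, global-maximum normalisation assumed, names illustrative):

```python
import numpy as np

def dose_pass_rate(dose_ref, dose_eval, percent=2.0):
    """Fraction of voxels where the evaluated dose lies within `percent` %
    of the global maximum reference dose. This is a dose-difference-only
    check; a full 2%/2 mm gamma analysis additionally searches a 2 mm
    spatial neighbourhood for an agreeing dose value."""
    tol = percent / 100.0 * dose_ref.max()
    return np.mean(np.abs(dose_eval - dose_ref) <= tol)

ref = np.linspace(0.0, 60.0, 100)  # toy 1D dose profile in Gy
ev = ref + 0.5                      # uniform 0.5 Gy offset
print(dose_pass_rate(ref, ev))      # tolerance = 1.2 Gy, so all voxels pass
```

Because the spatial search is omitted, this sketch is stricter than a true gamma test in steep dose gradients, where a small spatial shift can rescue a large local dose difference.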

    Evaluering av maskinlæringsmetoder for automatisk tumorsegmentering (Evaluation of machine learning methods for automatic tumour segmentation)

    The definition of target volumes and organs at risk (OARs) is a critical part of radiotherapy planning. In routine practice, this is typically done manually by clinical experts who contour the structures in medical images prior to dosimetric planning. This is a time-consuming and labor-intensive task. Moreover, manual contouring is inherently a subjective task and substantial contour variability can occur, potentially impacting on radiotherapy treatment and image-derived biomarkers. Automatic segmentation (auto-segmentation) of target volumes and OARs has the potential to save time and resources while reducing contouring variability. Recently, auto-segmentation of OARs using machine learning methods has been integrated into the clinical workflow by several institutions and such tools have been made commercially available by major vendors. The use of machine learning methods for auto-segmentation of target volumes including the gross tumor volume (GTV) is less mature at present but is the focus of extensive ongoing research. The primary aim of this thesis was to investigate the use of machine learning methods for auto-segmentation of the GTV in medical images. Manual GTV contours constituted the ground truth in the analyses. Volumetric overlap and distance-based metrics were used to quantify auto-segmentation performance. Four different image datasets were evaluated. The first dataset, analyzed in papers I–II, consisted of positron emission tomography (PET) and contrast-enhanced computed tomography (ceCT) images of 197 patients with head and neck cancer (HNC). The ceCT images of this dataset were also included in paper IV. Two datasets were analyzed separately in paper III, namely (i) PET, ceCT, and low-dose CT (ldCT) images of 86 patients with anal cancer (AC), and (ii) PET, ceCT, ldCT, and T2 and diffusion-weighted (T2W and DW, respectively) MR images of a subset (n = 36) of the aforementioned AC patients. 
The last dataset consisted of ceCT images of 36 canine patients with HNC and was analyzed in paper IV. In paper I, three approaches to auto-segmentation of the GTV in patients with HNC were evaluated and compared, namely conventional PET thresholding, classical machine learning algorithms, and deep learning using a 2-dimensional (2D) U-Net convolutional neural network (CNN). For the latter two approaches the effect of imaging modality on auto-segmentation performance was also assessed. Deep learning based on multimodality PET/ceCT image input resulted in superior agreement with the manual ground truth contours, as quantified by geometric overlap and distance-based performance evaluation metrics calculated on a per patient basis. Moreover, only deep learning provided adequate performance for segmentation based solely on ceCT images. For segmentation based on PET-only, all three approaches provided adequate segmentation performance, though deep learning ranked first, followed by classical machine learning, and PET thresholding. In paper II, deep learning-based auto-segmentation of the GTV in patients with HNC using a 2D U-Net architecture was evaluated more thoroughly by introducing new structure-based performance evaluation metrics and including qualitative expert evaluation of the resulting auto-segmentation quality. As in paper I, multimodal PET/ceCT image input provided superior segmentation performance, compared to the single modality CNN models. The structure-based metrics showed quantitatively that the PET signal was vital for the sensitivity of the CNN models, as the superior PET/ceCT-based model identified 86 % of all malignant GTV structures whereas the ceCT-based model only identified 53 % of these structures. Furthermore, the majority of the qualitatively evaluated auto-segmentations (~ 90 %) generated by the best PET/ceCT-based CNN were given a quality score corresponding to substantial clinical value. 
Based on papers I and II, deep learning with multimodality PET/ceCT image input would be the recommended approach for auto-segmentation of the GTV in human patients with HNC. In paper III, deep learning-based auto-segmentation of the GTV in patients with AC was evaluated for the first time, using a 2D U-Net architecture. Furthermore, an extensive comparison of the impact of different single modality and multimodality combinations of PET, ceCT, ldCT, T2W, and/or DW image input on quantitative auto-segmentation performance was conducted. For both the 86-patient and 36-patient datasets, the models based on PET/ceCT provided the highest mean overlap with the manual ground truth contours. For this task, however, comparable auto-segmentation quality was obtained for solely ceCT-based CNN models. The CNN model based solely on T2W images also obtained acceptable auto-segmentation performance and was ranked as the second-best single modality model for the 36-patient dataset. These results indicate that deep learning could prove a versatile future tool for auto-segmentation of the GTV in patients with AC. Paper IV investigated for the first time the applicability of deep learning-based auto-segmentation of the GTV in canine patients with HNC, using a 3-dimensional (3D) U-Net architecture and ceCT image input. A transfer learning approach where CNN models were pre-trained on the human HNC data and subsequently fine-tuned on canine data was compared to training models from scratch on canine data. These two approaches resulted in similar auto-segmentation performances, which on average was comparable to the overlap metrics obtained for ceCT-based auto-segmentation in human HNC patients. Auto-segmentation in canine HNC patients appeared particularly promising for nasal cavity tumors, as the average overlap with manual contours was 25 % higher for this subgroup, compared to the average for all included tumor sites. 
In conclusion, deep learning with CNNs provided high-quality GTV auto-segmentations for all datasets included in this thesis. In all cases, the best-performing deep learning models resulted in an average overlap with manual contours which was comparable to the reported interobserver agreements between human experts performing manual GTV contouring for the given cancer type and imaging modality. Based on these findings, further investigation of deep learning-based auto-segmentation of the GTV in the given diagnoses would be highly warranted.
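Paper I above compares conventional PET thresholding against learned methods. A minimal sketch of threshold-based auto-segmentation, assuming the common fixed-fraction-of-SUVmax rule (41% of SUVmax is a frequently used choice in the literature; the thesis abstract does not state which threshold was applied):

```python
import numpy as np

def pet_threshold_segmentation(suv, fraction=0.41):
    """Fixed-threshold auto-segmentation: every voxel whose standardised
    uptake value (SUV) reaches `fraction` of the image's SUVmax is
    labelled tumour. Simple and fast, but ignores anatomy and fails
    when uptake is heterogeneous, which is where learned models help."""
    return suv >= fraction * suv.max()

suv = np.zeros((6, 6))
suv[1:4, 1:4] = 5.0   # avid lesion
suv[2, 2] = 10.0      # hottest voxel, SUVmax = 10
mask = pet_threshold_segmentation(suv)
print(mask.sum())      # threshold is 4.1, so the 3x3 lesion block passes
```

The resulting binary mask can be scored directly against manual contours with the overlap and distance metrics described in the thesis.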

    Imaging of Tumour Microenvironment for the Planning of Oncological Therapies Using Positron Emission Tomography

    Tumour cells differ from normal tissue cells in several important ways. These differences, such as altered energy metabolism, result in a changed microenvironment in malignant tumours. Non-invasive imaging of the tumour microenvironment has recently been at the centre of intense research because of the important role this changed environment plays in both the development and the treatment of malignant tumours. In this respect, perhaps the most important characteristics of the tumour microenvironment are the lack of oxygen (hypoxia) and changes in blood flow (BF). The purpose of this thesis was to investigate the processes of energy metabolism, BF and oxygenation in head and neck cancer and pancreatic tumours, and to explore the possibilities of improving the methods for their quantification using positron emission tomography (PET). To this end, [18F]EF5, a new PET tracer for the detection of tumour hypoxia, was investigated. Favourable uptake properties of the tracer were observed. In addition, it was established that the uptake of this tracer does not correlate with the uptake of existing tracers for imaging of energy metabolism and BF; information about the presence of tissue hypoxia therefore cannot be obtained using tracers such as [18F]FDG or [15O]H2O. These results were complemented by a follow-up study which showed that the uptake of [18F]EF5 in head and neck tumours prior to treatment is also associated with overall survival, indicating that tumour hypoxia is a negative prognostic factor and might be associated with therapeutic resistance. The influence of energy metabolism and BF on the survival of patients with pancreatic cancer was investigated in the second study. The results indicate that the best predictor of survival in pancreatic cancer is the relationship between energy metabolism and BF. 
These results suggest that cells with high metabolic activity in hypoperfused tissue have the most aggressive phenotype.

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits give U-net very high utility within the medical imaging community and have led to its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been applied in other settings. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at the image modalities and application areas where U-net has been applied. Comment: 42 pages, in IEEE Access.
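The original U-net (Ronneberger et al., 2015) uses unpadded 3×3 convolutions, so each feature map shrinks as it passes through the network: a 572×572 input yields a 388×388 output at depth 4. A small sketch of that size arithmetic (plain Python, shapes only, no learned weights):

```python
def unet_output_size(input_size, depth=4):
    """Trace the spatial size of a square feature map through the original
    U-net: valid (unpadded) 3x3 convolutions, 2x2 max-pooling on the
    contracting path, and 2x2 up-convolutions on the expanding path."""
    size = input_size
    for _ in range(depth):                        # contracting path
        size -= 4                                 # two valid 3x3 convs: -2 each
        assert size % 2 == 0, "size must be even before 2x2 pooling"
        size //= 2                                # 2x2 max-pool halves the map
    size -= 4                                     # two bottleneck convs
    for _ in range(depth):                        # expanding path
        size *= 2                                 # 2x2 up-convolution doubles it
        size -= 4                                 # two valid 3x3 convs
    return size

print(unet_output_size(572))  # → 388
```

This shrinkage is why the original paper crops encoder feature maps before concatenating them onto the decoder, and why many later variants switch to padded convolutions so that input and output sizes match.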