
    Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation

    Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity of PET for tumor detection with the anatomical information of CT. Tumor segmentation is a critical element of PET-CT analysis, but at present there is no accurate automated segmentation method. Segmentation therefore tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information extracted separately from the PET and CT modalities, under the assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a multimodal spatial attention module (MSAM) that automatically learns to emphasize regions (spatial areas) related to tumors and to suppress normal regions with physiologically high uptake. The resulting spatial attention maps are subsequently employed to guide a convolutional neural network (CNN) toward segmentation of areas with higher tumor likelihood. Our MSAM can be applied to common backbone architectures and trained end-to-end. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of the MSAM for these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
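The core idea of the abstract above, a PET-derived spatial attention map that re-weights CT-derived features, can be sketched in a few lines. This is a hedged illustration only: the authors' MSAM is a learned, end-to-end module, whereas here the attention map is simply normalized PET uptake, and all array names and values are made up.

```python
import numpy as np

def pet_spatial_attention(pet, ct_features):
    """Illustrative sketch (not the authors' MSAM): a per-pixel attention
    map derived from PET uptake re-weights CT-derived features so that
    high-uptake, tumor-likely regions are emphasized."""
    pet = pet.astype(float)
    # Min-max normalize uptake to [0, 1] to form a crude attention map.
    attention = (pet - pet.min()) / (np.ptp(pet) + 1e-8)
    # Broadcast over the channel axis of an H x W x C feature map.
    return ct_features * attention[..., None]

pet = np.array([[0.0, 1.0],
                [4.0, 9.0]])              # toy uptake values
ct_features = np.ones((2, 2, 3))          # toy H x W x C CT feature map
fused = pet_spatial_attention(pet, ct_features)
# The highest-uptake pixel keeps (almost) full feature weight; the
# zero-uptake pixel is fully suppressed.
```

In the real module this map is produced by trainable convolutions and learned jointly with the segmentation backbone, rather than fixed by normalization.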

    Topology polymorphism graph for lung tumor segmentation in PET-CT images

    Accurate lung tumor segmentation is problematic when the tumor boundary or edge, which reflects the advancing edge of the tumor, is difficult to discern on chest CT or PET. We propose a ‘topo-poly’ graph model to improve identification of the tumor extent. Our model incorporates an intensity graph and a topology graph. The intensity graph provides the joint PET-CT foreground similarity to differentiate the tumor from surrounding tissues. The topology graph is defined on the basis of a contour tree to reflect the inclusion and exclusion relationships of regions. By taking different topology relations into account, the edges in our model exhibit topological polymorphism. These polymorphic edges in turn affect the energy cost of crossing different topology regions under a random walk framework, and hence contribute to appropriate tumor delineation. We validated our method on 40 patients with non-small cell lung cancer whose tumors were manually delineated by a clinical expert. The studies were separated into an ‘isolated’ group (n = 20), where the lung tumor was located in the lung parenchyma and away from associated structures/tissues in the thorax, and a ‘complex’ group (n = 20), where the tumor abutted or involved a variety of adjacent structures and had heterogeneous FDG uptake. The methods were validated using Dice’s similarity coefficient (DSC) to measure spatial volume overlap and the Hausdorff distance (HD), calculated as the maximum surface distance between the segmentation results and the manual delineations, to compare shape similarity. Our method achieved an average DSC of 0.881 ± 0.046 and HD of 5.311 ± 3.022 mm for the isolated cases, and a DSC of 0.870 ± 0.038 and HD of 9.370 ± 3.169 mm for the complex cases. Student’s t-test showed that our model outperformed the other methods (p-values < 0.05).
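The two validation metrics used above, DSC and Hausdorff distance, are standard and easy to sketch. The toy masks below are invented for demonstration; the brute-force Hausdorff computation is for illustration, not efficiency.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets: the maximum,
    over both directions, of each point's distance to the nearest point
    in the other set (brute-force pairwise distances)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

seg = np.array([[1, 1, 0],
                [1, 1, 0],
                [0, 0, 0]])   # toy segmentation result
ref = np.array([[1, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])   # toy manual delineation
dsc = dice(seg, ref)                        # 2*3 / (4+3)
hd = hausdorff(np.argwhere(seg), np.argwhere(ref))
```

On real volumes the Hausdorff distance is computed on surface voxels and scaled by voxel spacing to obtain millimetres, as reported in the abstract.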

    A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging


    Comparative Study With New Accuracy Metrics for Target Volume Contouring in PET Image Guided Radiation Therapy

    The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes, but the different approaches are not adequately compared and their accuracy is poorly evaluated due to the ill-definition of ground truth. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image-guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal the benefits of high levels of user interaction, as well as of simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm.
    For retrospective patient data and manual ground truth delineation, the authors wish to thank S. Suilamo, K. Lehtio, M. Mokka, and H. Minn at the Department of Oncology and Radiotherapy, Turku University Hospital, Finland. This study was funded by the Finnish Cancer Organisations.
    Shepherd, T.; Teräs, M.; Beichel, R. R.; Boellaard, R.; Bruynooghe, M.; Dicken, V.; Gooding, M. J.; et al. (2012). Comparative Study With New Accuracy Metrics for Target Volume Contouring in PET Image Guided Radiation Therapy. IEEE Transactions on Medical Imaging, 31(12), 2006-2024. doi:10.1109/TMI.2012.2202322
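Among the "established" fully automatic contouring methods the study compares are simple fixed-threshold approaches. The sketch below illustrates that method class only; it is not one of the paper's thirty specific methods, and the 40% fraction and SUV values are assumptions chosen for demonstration.

```python
import numpy as np

def threshold_contour(suv, fraction=0.40):
    """Fixed-threshold PET contouring: include every voxel whose uptake
    is at least a fixed fraction of SUVmax. A classic non-interactive
    baseline for target volume delineation (illustrative only)."""
    return suv >= fraction * suv.max()

suv = np.array([[0.5, 2.0, 8.0],
                [0.8, 5.0, 10.0],
                [0.2, 1.0, 4.0]])     # toy SUV slice
mask = threshold_contour(suv)          # voxels at or above 40% of SUVmax
```

The paper's point is precisely that such fully automatic rules can fail on realistic cases, which is why interactivity level and use of anatomical (CT) context are key grouping criteria in the comparison.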

    Random Walk and Graph Cut for Co-Segmentation of Lung Tumor on PET-CT Images


    Validation of 3'-deoxy-3'-[18F]-fluorothymidine positron emission tomography for image-guidance in biologically adaptive radiotherapy

    Accelerated tumor cell repopulation during radiation therapy is one of the leading causes of low survival rates in head-and-neck cancer patients. The therapeutic effectiveness of radiotherapy could be improved by selectively targeting proliferating tumor subvolumes with higher doses of radiation. Positron emission tomography (PET) imaging with 3'-deoxy-3'-[18F]-fluorothymidine (FLT) has shown great potential as a non-invasive approach to characterizing the proliferation status of tumors. This thesis focuses on histopathological validation of FLT PET imaging, specifically for image-guidance applications in biologically adaptive radiotherapy. The lack of experimental data supporting the use of FLT PET imaging for radiotherapy guidance is addressed by developing a novel methodology for histopathological validation of PET imaging. Using this new approach, the spatial concordance between the intratumoral pattern of FLT uptake and the spatial distribution of cell proliferation is demonstrated in animal tumors. First, a two-dimensional analysis is conducted comparing the microscopic FLT uptake as imaged with autoradiography and the distribution of active cell proliferation markers imaged with immunofluorescent microscopy. It was observed that when tumors present a pattern of cell proliferation that is highly dispersed throughout the tumor, even high-resolution imaging modalities such as autoradiography could not accurately determine the extent and spatial distribution of proliferative tumor subvolumes. While microscopic spatial coincidence between high FLT uptake regions and actively proliferative subvolumes was demonstrated in tumors with highly compartmentalized/aggregated features of cell proliferation, there were no conclusive results across the entire set of utilized tumor specimens. This emphasized the need to address the limited resolution of FLT PET when imaging microscopic patterns of cell proliferation. This issue was examined in the second part of the thesis, where the spatial concordance between volumes segmented on simulated FLT PET images and the three-dimensional spatial distribution of cell proliferation markers was analyzed.
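One simple way to quantify spatial concordance between a co-registered FLT-uptake map (e.g., from autoradiography) and a proliferation-marker density map is a voxel-wise Pearson correlation. This is a hedged sketch under that assumption; the thesis' actual concordance analysis is more elaborate, and the values below are invented.

```python
import numpy as np

# Toy co-registered 1-D profiles: FLT uptake vs. proliferation-marker
# density at the same spatial positions (made-up illustrative values).
uptake        = np.array([0.1, 0.3, 0.9, 0.8, 0.2, 0.7])
proliferation = np.array([0.2, 0.2, 1.0, 0.7, 0.1, 0.6])

# Pearson correlation as a crude spatial-concordance score: values near
# 1 indicate that high-uptake regions coincide with proliferative ones.
r = np.corrcoef(uptake, proliferation)[0, 1]
```

A correlation alone cannot capture the resolution mismatch discussed above: a dispersed proliferation pattern can correlate poorly with PET-scale uptake even when the biology is faithfully traced, which is why the thesis also compares segmented volumes rather than raw intensities.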

    International EANM-SNMMI-ISMRM consensus recommendation for PET/MRI in oncology

    The Society of Nuclear Medicine and Molecular Imaging (SNMMI) is an international scientific and professional organization founded in 1954 to promote the science, technology, and practical application of nuclear medicine. The European Association of Nuclear Medicine (EANM) is a professional non-profit medical association that facilitates communication worldwide between individuals pursuing clinical and research excellence in nuclear medicine. The EANM was founded in 1985. The International Society for Magnetic Resonance in Medicine (ISMRM) is an international, nonprofit, scientific association whose purpose is to promote communication, research, development, and applications in the field of magnetic resonance in medicine and biology and other related topics, and to develop and provide channels and facilities for continuing education in the field. The ISMRM was founded in 1994 through the merger of the Society of Magnetic Resonance in Medicine and the Society of Magnetic Resonance Imaging. SNMMI, ISMRM, and EANM members are physicians, technologists, and scientists specializing in the research and practice of nuclear medicine and/or magnetic resonance imaging. The SNMMI, ISMRM, and EANM periodically define new guidelines for nuclear medicine practice to help advance the science of nuclear medicine and/or magnetic resonance imaging and to improve the quality of service to patients throughout the world. Existing practice guidelines will be reviewed for revision or renewal, as appropriate, on their fifth anniversary or sooner, if indicated. Each practice guideline, representing a policy statement by the SNMMI/EANM/ISMRM, has undergone a thorough consensus process in which it has been subjected to extensive review. The SNMMI, ISMRM, and EANM recognize that the safe and effective use of diagnostic nuclear medicine imaging and magnetic resonance imaging requires specific training, skills, and techniques, as described in each document.
Reproduction or modification of the published practice guideline by those entities not providing these services is not authorized. These guidelines are an educational tool designed to assist practitioners in providing appropriate care for patients. They are not inflexible rules or requirements of practice and are not intended, nor should they be used, to establish a legal standard of care. For these reasons and those set forth below, the SNMMI, the ISMRM, and the EANM caution against the use of these guidelines in litigation in which the clinical decisions of a practitioner are called into question. The ultimate judgment regarding the propriety of any specific procedure or course of action must be made by the physician or medical physicist in light of all the circumstances presented. Thus, there is no implication that an approach differing from the guidelines, standing alone, is below the standard of care. To the contrary, a conscientious practitioner may responsibly adopt a course of action different from that set forth in the guidelines when, in the reasonable judgment of the practitioner, such course of action is indicated by the condition of the patient, limitations of available resources, or advances in knowledge or technology subsequent to publication of the guidelines. The practice of medicine includes both the art and the science of the prevention, diagnosis, alleviation, and treatment of disease. The variety and complexity of human conditions make it impossible to always reach the most appropriate diagnosis or to predict with certainty a particular response to treatment. Therefore, it should be recognized that adherence to these guidelines will not ensure an accurate diagnosis or a successful outcome. All that should be expected is that the practitioner will follow a reasonable course of action based on current knowledge, available resources, and the needs of the patient to deliver effective and safe medical care. 
The sole purpose of these guidelines is to assist practitioners in achieving this objective.

    Evaluation of machine learning methods for automatic tumor segmentation (Evaluering av maskinlæringsmetoder for automatisk tumorsegmentering)

    The definition of target volumes and organs at risk (OARs) is a critical part of radiotherapy planning. In routine practice, this is typically done manually by clinical experts who contour the structures in medical images prior to dosimetric planning. This is a time-consuming and labor-intensive task. Moreover, manual contouring is inherently a subjective task and substantial contour variability can occur, potentially impacting on radiotherapy treatment and image-derived biomarkers. Automatic segmentation (auto-segmentation) of target volumes and OARs has the potential to save time and resources while reducing contouring variability. Recently, auto-segmentation of OARs using machine learning methods has been integrated into the clinical workflow by several institutions and such tools have been made commercially available by major vendors. The use of machine learning methods for auto-segmentation of target volumes including the gross tumor volume (GTV) is less mature at present but is the focus of extensive ongoing research. The primary aim of this thesis was to investigate the use of machine learning methods for auto-segmentation of the GTV in medical images. Manual GTV contours constituted the ground truth in the analyses. Volumetric overlap and distance-based metrics were used to quantify auto-segmentation performance. Four different image datasets were evaluated. The first dataset, analyzed in papers I–II, consisted of positron emission tomography (PET) and contrast-enhanced computed tomography (ceCT) images of 197 patients with head and neck cancer (HNC). The ceCT images of this dataset were also included in paper IV. Two datasets were analyzed separately in paper III, namely (i) PET, ceCT, and low-dose CT (ldCT) images of 86 patients with anal cancer (AC), and (ii) PET, ceCT, ldCT, and T2 and diffusion-weighted (T2W and DW, respectively) MR images of a subset (n = 36) of the aforementioned AC patients. 
The last dataset consisted of ceCT images of 36 canine patients with HNC and was analyzed in paper IV. In paper I, three approaches to auto-segmentation of the GTV in patients with HNC were evaluated and compared, namely conventional PET thresholding, classical machine learning algorithms, and deep learning using a 2-dimensional (2D) U-Net convolutional neural network (CNN). For the latter two approaches the effect of imaging modality on auto-segmentation performance was also assessed. Deep learning based on multimodality PET/ceCT image input resulted in superior agreement with the manual ground truth contours, as quantified by geometric overlap and distance-based performance evaluation metrics calculated on a per patient basis. Moreover, only deep learning provided adequate performance for segmentation based solely on ceCT images. For segmentation based on PET-only, all three approaches provided adequate segmentation performance, though deep learning ranked first, followed by classical machine learning, and PET thresholding. In paper II, deep learning-based auto-segmentation of the GTV in patients with HNC using a 2D U-Net architecture was evaluated more thoroughly by introducing new structure-based performance evaluation metrics and including qualitative expert evaluation of the resulting auto-segmentation quality. As in paper I, multimodal PET/ceCT image input provided superior segmentation performance, compared to the single modality CNN models. The structure-based metrics showed quantitatively that the PET signal was vital for the sensitivity of the CNN models, as the superior PET/ceCT-based model identified 86 % of all malignant GTV structures whereas the ceCT-based model only identified 53 % of these structures. Furthermore, the majority of the qualitatively evaluated auto-segmentations (~ 90 %) generated by the best PET/ceCT-based CNN were given a quality score corresponding to substantial clinical value. 
Based on papers I and II, deep learning with multimodality PET/ceCT image input would be the recommended approach for auto-segmentation of the GTV in human patients with HNC. In paper III, deep learning-based auto-segmentation of the GTV in patients with AC was evaluated for the first time, using a 2D U-Net architecture. Furthermore, an extensive comparison of the impact of different single modality and multimodality combinations of PET, ceCT, ldCT, T2W, and/or DW image input on quantitative auto-segmentation performance was conducted. For both the 86-patient and 36-patient datasets, the models based on PET/ceCT provided the highest mean overlap with the manual ground truth contours. For this task, however, comparable auto-segmentation quality was obtained for solely ceCT-based CNN models. The CNN model based solely on T2W images also obtained acceptable auto-segmentation performance and was ranked as the second-best single modality model for the 36-patient dataset. These results indicate that deep learning could prove a versatile future tool for auto-segmentation of the GTV in patients with AC. Paper IV investigated for the first time the applicability of deep learning-based auto-segmentation of the GTV in canine patients with HNC, using a 3-dimensional (3D) U-Net architecture and ceCT image input. A transfer learning approach where CNN models were pre-trained on the human HNC data and subsequently fine-tuned on canine data was compared to training models from scratch on canine data. These two approaches resulted in similar auto-segmentation performances, which on average was comparable to the overlap metrics obtained for ceCT-based auto-segmentation in human HNC patients. Auto-segmentation in canine HNC patients appeared particularly promising for nasal cavity tumors, as the average overlap with manual contours was 25 % higher for this subgroup, compared to the average for all included tumor sites. 
In conclusion, deep learning with CNNs provided high-quality GTV auto-segmentations for all datasets included in this thesis. In all cases, the best-performing deep learning models resulted in an average overlap with manual contours which was comparable to the reported interobserver agreements between human experts performing manual GTV contouring for the given cancer type and imaging modality. Based on these findings, further investigation of deep learning-based auto-segmentation of the GTV in the given diagnoses would be highly warranted.
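The structure-based sensitivity used in paper II (86% vs. 53% of malignant GTV structures identified) can be sketched as follows. This is a hedged illustration: the any-overlap hit criterion, function name, and toy masks are assumptions, and the thesis' exact structure-wise criterion may differ.

```python
import numpy as np

def structure_sensitivity(pred, gt_structures):
    """Illustrative structure-based sensitivity: the fraction of
    ground-truth GTV structures that the predicted mask overlaps at all
    (any-overlap criterion, assumed here for simplicity)."""
    hits = sum(1 for s in gt_structures if np.logical_and(pred, s).any())
    return hits / len(gt_structures)

# Two separate ground-truth structures on a toy 4x4 grid; the predicted
# mask touches only the first one.
s1 = np.zeros((4, 4), bool); s1[0, 0:2] = True
s2 = np.zeros((4, 4), bool); s2[3, 2:4] = True
pred = np.zeros((4, 4), bool); pred[0, 1] = True
sens = structure_sensitivity(pred, [s1, s2])   # 1 of 2 structures found
```

Unlike per-patient volumetric overlap, a structure-wise metric directly penalizes entirely missed lesions, which is why the thesis introduces it alongside Dice and distance-based measures.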