
    Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images

    Deep learning presents novel opportunities for the auto-segmentation of gross tumor volume (GTV) in head and neck cancer (HNC), yet fully automatic methods usually necessitate significant manual refinement. This study investigates the Segment Anything Model (SAM), recognized for requiring minimal human prompting and for its zero-shot generalization across natural images. We specifically examine MedSAM, a version of SAM fine-tuned on large-scale public medical images. Despite this progress, integrating multi-modality images (CT, PET, MRI) for effective GTV delineation remains a challenge. Focusing on SAM's application to HNC GTV segmentation, we assess its performance in both zero-shot and fine-tuned scenarios using single-modality (CT-only) and fused multi-modality images. Our study demonstrates that fine-tuning SAM significantly enhances its segmentation accuracy, building on the already effective zero-shot results achieved with bounding box prompts. These findings open a promising avenue for semi-automatic HNC GTV segmentation.
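
    A minimal sketch of the bounding-box prompting workflow described above, using Meta's segment-anything package for zero-shot inference on a single CT slice; the checkpoint path, CT windowing values, and the example box coordinates are illustrative assumptions, not details from the paper.

        import numpy as np
        from segment_anything import sam_model_registry, SamPredictor

        def ct_slice_to_rgb(ct_slice, window_center=40.0, window_width=400.0):
            # Window a Hounsfield-unit slice and replicate it to the
            # 3-channel uint8 input SAM expects (assumed soft-tissue window).
            lo = window_center - window_width / 2.0
            img = np.clip((ct_slice - lo) / window_width, 0.0, 1.0)
            return np.stack([(img * 255).astype(np.uint8)] * 3, axis=-1)

        sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder weights file
        predictor = SamPredictor(sam)

        ct_slice = np.load("slice.npy")                 # hypothetical 2D HU array
        predictor.set_image(ct_slice_to_rgb(ct_slice))

        box = np.array([120, 90, 210, 170])             # hypothetical GTV box (x0, y0, x1, y1)
        masks, scores, _ = predictor.predict(box=box, multimask_output=False)
        gtv_mask = masks[0]                             # boolean mask for the prompted region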

    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Three-dimensional (3D) medical imaging techniques have a fundamental role in the field of oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess the severity of disease, and support pre-operative planning, intra-operative guidance, and virtual surgical planning (VSP). In the field of oral cancer, where surgical resection requiring partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to accurately design a resection plan around tumor margins. In orthognathic surgery and dental implant surgery, 3D VSP is also extensively used to precisely guide mandibular surgery. Image segmentation of head and neck radiographic images, the process of creating a 3D volume of the target tissue, is a useful tool to visualize the mandible and quantify geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed manually by medical technicians, a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT and contributes novel ideas toward the development of automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and evaluate their performance. Experimental results show that our proposed approaches for mandible segmentation in CT/CBCT datasets exhibit high accuracy.
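
    As a rough illustration of the segmentation-to-3D-volume step mentioned in the abstract, the sketch below converts a binary mandible mask into a surface mesh with marching cubes; the file name and voxel spacing are placeholders, not values from the thesis.

        import numpy as np
        from skimage import measure

        mask = np.load("mandible_mask.npy")        # hypothetical binary volume, (z, y, x)
        spacing = (1.0, 0.5, 0.5)                  # assumed voxel size in mm

        # Extract a triangle mesh of the mandible surface for 3D VSP viewers.
        verts, faces, normals, values = measure.marching_cubes(
            mask.astype(np.float32), level=0.5, spacing=spacing
        )

        # A simple geometric parameter derived from the same mask.
        volume_mm3 = mask.sum() * np.prod(spacing)
        print(f"mesh: {len(verts)} vertices, volume ~ {volume_mm3:.0f} mm^3")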

    Contour-Driven Atlas-Based Segmentation

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images.
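
    A minimal numerical sketch of the refinement idea, assuming a Gaussian likelihood: conditioning a Gaussian process on the atlas-based label map makes the MAP refined label map equal to the GP posterior mean. A stationary RBF kernel stands in for the paper's contour-driven non-stationary kernels, and all names and the noise level are illustrative.

        import numpy as np

        def rbf_kernel(X, Y, length_scale=5.0):
            # Stationary stand-in for the paper's contour-driven kernels.
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / length_scale ** 2)

        # All pixel locations of a small 32x32 image.
        coords = np.stack(np.meshgrid(np.arange(32), np.arange(32), indexing="ij"), -1)
        X = coords.reshape(-1, 2).astype(float)

        y = np.load("atlas_soft_labels.npy").ravel()   # hypothetical atlas-based label map
        K = rbf_kernel(X, X)
        noise = 0.1                                    # assumed observation noise variance

        # With a Gaussian likelihood, the MAP refined label map is the GP
        # posterior mean conditioned on the atlas-based segmentation.
        refined = K @ np.linalg.solve(K + noise * np.eye(len(X)), y)
        refined_map = refined.reshape(32, 32)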

    Deep Learning vs. Atlas-Based Models for Fast Auto-Segmentation of the Masticatory Muscles on Head and Neck CT Images

    BACKGROUND: Impaired function of the masticatory muscles will lead to trismus. Routine delineation of these muscles during planning may improve dose tracking and facilitate dose reduction, resulting in decreased radiation-related trismus. This study aimed to compare a deep learning model with a commercial atlas-based model for fast auto-segmentation of the masticatory muscles on head and neck computed tomography (CT) images. MATERIAL AND METHODS: Paired masseter (M), temporalis (T), and medial and lateral pterygoid (MP, LP) muscles were manually segmented on 56 CT images. CT images were randomly divided into training (n = 27) and validation (n = 29) cohorts. Two methods were used for automatic delineation of masticatory muscles (MMs): deep learning auto-segmentation (DLAS) and atlas-based auto-segmentation (ABAS). The automatic algorithms were evaluated using the Dice similarity coefficient (DSC), recall, precision, Hausdorff distance (HD), HD95, and mean surface distance (MSD). A consolidated score was calculated by normalizing the metrics against interobserver variability and averaging over all patients. Differences in dose (∆Dose) to MMs for DLAS and ABAS segmentations were assessed. A paired t-test was used to compare the geometric and dosimetric differences between the DLAS and ABAS methods. RESULTS: DLAS outperformed ABAS in delineating all MMs (p < 0.05). The DLAS mean DSC for M, T, MP, and LP ranged from 0.83 ± 0.03 to 0.89 ± 0.02, while the ABAS mean DSC ranged from 0.79 ± 0.05 to 0.85 ± 0.04. The mean values for recall, HD, HD95, and MSD also improved with DLAS. Interobserver variation revealed the highest variability in DSC and MSD for both T and MP, and the highest scores were achieved for T by both automatic algorithms. With few exceptions, the mean ∆D98%, ∆D95%, ∆D50%, and ∆D2% for all structures were below 10% for DLAS and ABAS, with no detectable statistical difference (p > 0.05). DLAS-based contours had dose endpoints that matched those of the manually segmented contours more closely than ABAS-based contours did. CONCLUSIONS: DLAS auto-segmentation of the masticatory muscles for head and neck radiotherapy had improved segmentation accuracy compared with ABAS, with no qualitative difference in dosimetric endpoints compared to manually segmented contours.
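
    A minimal sketch of the geometric metrics reported above (DSC, HD95, MSD), computed from binary masks with scipy distance transforms; the mask files and voxel spacing are hypothetical.

        import numpy as np
        from scipy import ndimage

        def dice(a, b):
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def surface_distances(a, b, spacing):
            # Distances from the surface voxels of `a` to the surface of `b`.
            surf_a = a ^ ndimage.binary_erosion(a)
            surf_b = b ^ ndimage.binary_erosion(b)
            dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
            return dist_to_b[surf_a]

        auto = np.load("dlas_mask.npy").astype(bool)      # hypothetical auto-segmentation
        manual = np.load("manual_mask.npy").astype(bool)  # hypothetical reference contour
        spacing = (3.0, 1.0, 1.0)                         # assumed voxel size in mm

        d_am = surface_distances(auto, manual, spacing)
        d_ma = surface_distances(manual, auto, spacing)
        print(f"DSC  = {dice(auto, manual):.3f}")
        print(f"HD95 = {max(np.percentile(d_am, 95), np.percentile(d_ma, 95)):.2f} mm")
        print(f"MSD  = {np.concatenate([d_am, d_ma]).mean():.2f} mm")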

    HNT-AI: An Automatic Segmentation Framework for Head and Neck Primary Tumors and Lymph Nodes in FDG-PET/CT Images

    Head and neck cancer is one of the most prevalent cancers in the world. Automatic delineation of primary tumors and lymph nodes is important for cancer diagnosis and treatment. In this paper, we develop a deep learning-based model for automatic tumor segmentation, HNT-AI, using PET/CT images provided by the MICCAI 2022 Head and Neck Tumor (HECKTOR) segmentation challenge. We investigate the effect of residual blocks, squeeze-and-excitation normalization, and grid-attention gates on the performance of 3D U-Net. We project the predicted masks onto the z-axis and apply k-means clustering to reduce the number of false-positive predictions. Our proposed HNT-AI segmentation framework achieves an aggregated Dice score of 0.774 and 0.759 for primary tumors and lymph nodes, respectively, on the unseen external test set. Qualitative analysis of the predicted segmentation masks shows that they tend to follow the high standardized uptake value (SUV) areas on the PET scans more closely than the ground truth masks do. The largest tumor volume, the largest lymph node volume, and the total number of lymph nodes derived from the segmentation proved to be potential biomarkers for recurrence-free survival, with a C-index of 0.627 on the test set.
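
    A heavily hedged sketch of the false-positive reduction step: the abstract only states that predicted masks are projected on the z-axis and clustered with k-means, so the number of clusters, the clustering feature (slice indices), and the keep/drop rule below are all assumptions.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        pred = np.load("pred_mask.npy").astype(bool)   # hypothetical (z, y, x) prediction

        # Project the mask onto the z-axis: indices of slices with foreground.
        z_idx = np.flatnonzero(pred.any(axis=(1, 2)))

        # Cluster the slice indices; assume the larger cluster spans the tumor.
        km = KMeans(n_clusters=2, n_init=10).fit(z_idx[:, None])
        main = np.argmax(np.bincount(km.labels_))
        zmin, zmax = z_idx[km.labels_ == main].min(), z_idx[km.labels_ == main].max()

        # Drop 3D connected components whose z-centroid lies outside that span.
        labels, n = ndimage.label(pred)
        for comp in range(1, n + 1):
            zc = np.nonzero(labels == comp)[0].mean()
            if not (zmin <= zc <= zmax):
                pred[labels == comp] = False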

    Applying machine learning to automated segmentation of head and neck tumour volumes and organs at risk on radiotherapy planning CT and MRI scans

    Radiotherapy is one of the main ways head and neck cancers are treated; radiation is used to kill cancerous cells and prevent their recurrence. Complex treatment planning is required to ensure that enough radiation is given to the tumour, and little to other sensitive structures (known as organs at risk), such as the eyes and nerves, which might otherwise be damaged. This is especially difficult in the head and neck, where multiple at-risk structures often lie in extremely close proximity to the tumour. It can take radiotherapy experts four hours or more to pick out the important areas on planning scans (known as segmentation). This research will focus on applying machine learning algorithms to the automatic segmentation of head and neck planning computed tomography (CT) and magnetic resonance imaging (MRI) scans of patients at University College London Hospital NHS Foundation Trust. Through analysis of the images used in radiotherapy, DeepMind Health will investigate improvements in the efficiency of cancer treatment pathways.

    Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning

    Background: Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently, the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose: The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods: Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) using cross-species transfer learning, where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy where each fold was used as a validation set and test set once in independent model runs. Results: CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. Conclusion: Deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
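
    A minimal sketch of the cross-species transfer-learning recipe (approach ii): pretrain a 3D U-Net on human CT, then fine-tune on canine CT. The MONAI U-Net, hyperparameters, and the stand-in data loader are illustrative assumptions, not the study's configuration.

        import torch
        from monai.networks.nets import UNet

        model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
                     channels=(16, 32, 64, 128), strides=(2, 2, 2))
        # Start from weights pretrained on human HNC CT (placeholder path).
        model.load_state_dict(torch.load("human_pretrained.pt"))

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed fine-tuning rate
        loss_fn = torch.nn.CrossEntropyLoss()

        # Stand-in for a DataLoader over canine CT patches and GTV labels.
        canine_loader = [(torch.randn(2, 1, 64, 64, 64),
                          torch.randint(0, 2, (2, 64, 64, 64)))]

        model.train()
        for ct, gtv in canine_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(ct), gtv)
            loss.backward()
            optimizer.step()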