    Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning

    Background: Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently, the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose: The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods: Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) cross-species transfer learning, where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy in which each fold was used as a validation set and test set once in independent model runs. Results: CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. Conclusion: Deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
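
    The following is a minimal sketch of the two training regimes described above (training from scratch versus cross-species transfer learning) together with the Dice similarity coefficient used for evaluation, written in PyTorch. The tiny 3D network stands in for the study's 3D U-Net, and the checkpoint file name, layer sizes, learning rate and number of steps are illustrative assumptions rather than the authors' configuration.

    import torch
    import torch.nn as nn

    class TinySegNet3D(nn.Module):
        """Stand-in for a 3D U-Net: maps a CT volume to per-voxel GTV logits."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv3d(16, 1, kernel_size=1)  # per-voxel tumor logit

        def forward(self, x):
            return self.head(self.encoder(x))

    def dice_score(pred, target, eps=1e-6):
        """Dice similarity coefficient between two binary masks."""
        intersection = (pred * target).sum()
        return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

    model = TinySegNet3D()  # (i) from scratch: random init, canine CT only
    # (ii) transfer learning: start from weights pretrained on human CT, then
    # fine-tune on canine CT (hypothetical checkpoint name):
    # model.load_state_dict(torch.load("human_hnc_pretrained.pt"))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    # Dummy batch: one single-channel 32x32x32 CT patch and a binary GTV mask.
    ct = torch.randn(1, 1, 32, 32, 32)
    gtv = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()

    for _ in range(5):  # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(ct), gtv)
        loss.backward()
        optimizer.step()

    prediction = (torch.sigmoid(model(ct)) > 0.5).float()
    print(f"Dice: {dice_score(prediction, gtv).item():.2f}")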

    Pixel classification methods for identifying and quantifying leaf surface injury from digital images

    Plants exposed to stress due to pollution, disease or nutrient deficiency often develop visible symptoms on leaves such as spots, colour changes and necrotic regions. Early symptom detection is important for precision agriculture, environmental monitoring using bio-indicators and quality assessment of leafy vegetables. Leaf injury is usually assessed by visual inspection, which is labour-intensive and to a considerable extent subjective. In this study, methods for classifying individual pixels as healthy or injured from images of clover leaves exposed to the air pollutant ozone were tested and compared. RGB images of the leaves were acquired under controlled conditions in a laboratory using a standard digital SLR camera. Different feature vectors were extracted from the images by including different colour and texture (spatial) information. Four approaches to classification were evaluated: (1) Fit to a Pattern Multivariate Image Analysis (FPM) combined with T2 statistics (FPM-T2), (2) FPM combined with Residual Sum of Squares statistics (FPM-RSS), (3) linear discriminant analysis (LDA) and (4) K-means clustering. Classifiers were trained on manually segmented images, and the predicted pixel classifications were compared to these manual segmentations to evaluate classification performance. The LDA classifier outperformed the three other approaches in pixel identification, with significantly higher accuracy, precision, true positive rate and F-score and significantly lower false positive rate and computation time. A feature vector of single-pixel colour channel intensities was sufficient for capturing the information relevant for pixel identification. Including neighbourhood pixel information in the feature vector did not improve performance, but significantly increased the computation time. The LDA classifier was robust, with 95% mean accuracy, 83% mean true positive rate and 2% mean false positive rate, indicating that it has potential for real-time applications.
    Opstad Kruse, O.M.; Prats Montalbán, J.M.; Indahl, U.G.; Kvaal, K.; Ferrer Riquelme, A.J.; Futsaether, C.M. (2014). Pixel classification methods for identifying and quantifying leaf surface injury from digital images. Computers and Electronics in Agriculture 108:155-165. doi:10.1016/j.compag.2014.07.010
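
    As a concrete illustration of the best-performing approach above, the sketch below classifies each pixel as healthy or injured with an LDA model trained on per-pixel RGB intensities, using scikit-learn. The synthetic colour distributions, array shapes and the 0/1 label convention are assumptions for illustration; in the study, features come from RGB leaf images and labels from manually segmented training images.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Stand-in training data: healthy pixels tend towards green, injured pixels
    # towards brown. Each row is one pixel's (R, G, B) feature vector.
    healthy = rng.normal([60, 140, 50], 15, size=(500, 3))
    injured = rng.normal([130, 100, 60], 15, size=(500, 3))
    X_train = np.vstack([healthy, injured])
    y_train = np.array([0] * 500 + [1] * 500)  # 0 = healthy, 1 = injured

    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)

    # Classify every pixel of a new H x W x 3 image by flattening to N x 3,
    # then quantify injury as the fraction of pixels labelled injured.
    image = rng.normal([70, 130, 55], 30, size=(64, 64, 3))
    labels = lda.predict(image.reshape(-1, 3)).reshape(64, 64)
    print(f"Injured leaf area: {labels.mean():.1%}")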

    Head and neck cancer treatment outcome prediction: a comparison between machine learning with conventional radiomics features and deep learning radiomics

    Background: Radiomics can provide in-depth characterization of cancers for treatment outcome prediction. Conventional radiomics relies on the extraction of image features within a pre-defined image region of interest (ROI), which are typically fed to a classification algorithm for prediction of a clinical endpoint. Deep learning radiomics allows for a simpler workflow where images can be used directly as input to a convolutional neural network (CNN), with or without a pre-defined ROI. Purpose: The purpose of this study was to evaluate (i) conventional radiomics and (ii) deep learning radiomics for predicting overall survival (OS) and disease-free survival (DFS) in patients with head and neck squamous cell carcinoma (HNSCC), using pre-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG PET) and computed tomography (CT) images. Materials and methods: FDG PET/CT images and clinical data of patients with HNSCC treated with radio(chemo)therapy at Oslo University Hospital (OUS; n = 139) and Maastricht University Medical Center (MAASTRO; n = 99) were collected retrospectively. OUS data was used for model training and initial evaluation. MAASTRO data was used for external testing to assess cross-institutional generalizability. Models trained on clinical and/or conventional radiomics features, with or without feature selection, were compared to CNNs trained on PET/CT images with or without the gross tumor volume (GTV) contours included. Model performance was measured using accuracy, area under the receiver operating characteristic curve (AUC), the Matthews correlation coefficient (MCC), and the F1 score calculated for both classes separately. Results: CNNs trained directly on images achieved the highest performance on external data for both endpoints. Adding both clinical and radiomics features to these image-based models increased performance further. Conventional radiomics including clinical data could achieve competitive performance. However, feature selection on clinical and radiomics data led to overfitting and poor cross-institutional generalizability. CNNs without tumor and node contours achieved close to on-par performance with CNNs including contours. Conclusion: High performance and cross-institutional generalizability can be achieved by combining clinical data, radiomics features and medical images with deep learning models. However, deep learning models trained on images without contours can achieve competitive performance and could see use as an initial screening tool for high-risk patients.
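
    The sketch below reproduces the evaluation protocol named above, computing accuracy, ROC AUC, the Matthews correlation coefficient and the F1 score for each class separately with scikit-learn, for a binary endpoint such as overall survival. The toy labels and predicted probabilities are placeholders; in the study these would come from models evaluated on the external MAASTRO test set.

    import numpy as np
    from sklearn.metrics import (accuracy_score, f1_score,
                                 matthews_corrcoef, roc_auc_score)

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])  # e.g. 1 = event, 0 = no event
    y_prob = np.array([0.2, 0.4, 0.8, 0.6, 0.9, 0.3, 0.4, 0.1])  # model outputs
    y_pred = (y_prob >= 0.5).astype(int)          # threshold at 0.5

    print("Accuracy:", accuracy_score(y_true, y_pred))
    print("AUC:     ", roc_auc_score(y_true, y_prob))  # computed from probabilities
    print("MCC:     ", matthews_corrcoef(y_true, y_pred))
    f1_per_class = f1_score(y_true, y_pred, average=None)  # one score per class
    print(f"F1 (class 0): {f1_per_class[0]:.2f}, F1 (class 1): {f1_per_class[1]:.2f}")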