19 research outputs found

    Multi-level fusion in ultrasound for cancer detection based on uniform LBP features

    Achieving a collectively acceptable or desirable level of accuracy in breast cancer image pattern recognition using various schemes remains challenging. Even when multiple schemes are combined to improve ultrasound image pattern recognition by reducing speckle noise, an enhanced technique has not been achieved. The purpose of this study is to introduce a feature-based fusion scheme built on an enhanced uniform Local Binary Pattern (LBP) descriptor and filter-based noise reduction. To overcome the above limitations and achieve the aim of the study, a new descriptor that enhances the LBP features based on a new threshold is proposed. This paper proposes a multi-level fusion scheme for the automatic classification of static breast cancer ultrasound images, realized in two stages. First, several images were generated from a single image in a pre-processing step: median and Wiener filters were applied to reduce speckle noise and enhance the ultrasound image texture. This strategy allowed the extraction of powerful features by reducing the overlap between the benign and malignant image classes. Second, the fusion mechanism allowed the production of diverse features from the different filtered images. The feasibility of using the LBP-based texture features to categorize the ultrasound images was demonstrated. The effectiveness of the proposed scheme was tested on 250 ultrasound images comprising 100 benign and 150 malignant images. The proposed method achieved very high accuracy (98%), sensitivity (98%), and specificity (99%). As a result, the fusion process, which supports a more robust decision based on the different features produced from differently filtered images, improved the results of the new LBP descriptor in terms of accuracy, sensitivity, and specificity.
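    The pipeline described above (noise-reduced image variants feeding uniform-LBP texture features that are then fused) can be illustrated with a minimal sketch. The filter sizes, LBP parameters, and the simple histogram concatenation below are illustrative assumptions, not the exact settings or the enhanced threshold-based descriptor proposed in the paper.

```python
# Minimal sketch: fuse uniform-LBP texture features computed from two
# noise-reduced versions (median- and Wiener-filtered) of an ultrasound image.
# Filter sizes, LBP radius/points and the fusion strategy are illustrative only.
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter
from skimage.feature import local_binary_pattern

def uniform_lbp_histogram(image, points=8, radius=1):
    """Histogram of uniform LBP codes for one grayscale image."""
    # Rescale to 8-bit so the LBP codes are computed on integer intensities.
    img8 = np.uint8(255 * (image - image.min()) / (np.ptp(image) + 1e-12))
    lbp = local_binary_pattern(img8, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def fused_lbp_features(image):
    """Concatenate LBP histograms from median- and Wiener-filtered images."""
    med = median_filter(image, size=3)
    wie = wiener(image.astype(float), mysize=5)
    return np.concatenate([uniform_lbp_histogram(med),
                           uniform_lbp_histogram(wie)])

# Example usage with a random array standing in for an ultrasound frame:
if __name__ == "__main__":
    img = np.random.rand(128, 128)
    features = fused_lbp_features(img)
    print(features.shape)  # (20,) for 8-point uniform LBP from two filtered images
```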

    Diagnosing Pilgrimage Common Diseases by Interactive Multimedia Courseware

    In this study, we attempt to provide a healthcare service to pilgrims. The study describes how multimedia courseware can be used to make pilgrims aware of the common diseases present in Saudi Arabia during the pilgrimage season. The courseware also provides information about the symptoms of these diseases and how each of them can be treated. It contains a virtual representation of a hospital, videos of actual patient cases, and authentic learning activities intended to enhance health competencies during the pilgrimage. The courseware was examined to study how its elements are applied in real-time learning. Moreover, this research discusses the most dangerous diseases that may occur during the pilgrimage season. The multimedia course can effectively and efficiently provide pilgrims with information about these diseases, drawing on knowledge accumulated from past experience, particularly in disease diagnosis, medicine, and treatment. The courseware was created using an authoring tool known as ToolBook Instructor to provide pilgrims with a high-quality service.

    Evaluering av maskinlæringsmetoder for automatisk tumorsegmentering [Evaluation of machine learning methods for automatic tumor segmentation]

    The definition of target volumes and organs at risk (OARs) is a critical part of radiotherapy planning. In routine practice, this is typically done manually by clinical experts who contour the structures in medical images prior to dosimetric planning. This is a time-consuming and labor-intensive task. Moreover, manual contouring is inherently a subjective task, and substantial contour variability can occur, potentially impacting radiotherapy treatment and image-derived biomarkers. Automatic segmentation (auto-segmentation) of target volumes and OARs has the potential to save time and resources while reducing contouring variability. Recently, auto-segmentation of OARs using machine learning methods has been integrated into the clinical workflow by several institutions, and such tools have been made commercially available by major vendors. The use of machine learning methods for auto-segmentation of target volumes, including the gross tumor volume (GTV), is less mature at present but is the focus of extensive ongoing research. The primary aim of this thesis was to investigate the use of machine learning methods for auto-segmentation of the GTV in medical images. Manual GTV contours constituted the ground truth in the analyses. Volumetric overlap and distance-based metrics were used to quantify auto-segmentation performance. Four different image datasets were evaluated. The first dataset, analyzed in papers I–II, consisted of positron emission tomography (PET) and contrast-enhanced computed tomography (ceCT) images of 197 patients with head and neck cancer (HNC). The ceCT images of this dataset were also included in paper IV. Two datasets were analyzed separately in paper III, namely (i) PET, ceCT, and low-dose CT (ldCT) images of 86 patients with anal cancer (AC), and (ii) PET, ceCT, ldCT, and T2-weighted and diffusion-weighted (T2W and DW) MR images of a subset (n = 36) of the aforementioned AC patients. The last dataset consisted of ceCT images of 36 canine patients with HNC and was analyzed in paper IV. In paper I, three approaches to auto-segmentation of the GTV in patients with HNC were evaluated and compared, namely conventional PET thresholding, classical machine learning algorithms, and deep learning using a 2-dimensional (2D) U-Net convolutional neural network (CNN). For the latter two approaches, the effect of imaging modality on auto-segmentation performance was also assessed. Deep learning based on multimodality PET/ceCT image input resulted in superior agreement with the manual ground truth contours, as quantified by geometric overlap and distance-based performance evaluation metrics calculated on a per-patient basis. Moreover, only deep learning provided adequate performance for segmentation based solely on ceCT images. For segmentation based on PET only, all three approaches provided adequate performance, with deep learning ranking first, followed by classical machine learning and PET thresholding. In paper II, deep learning-based auto-segmentation of the GTV in patients with HNC using a 2D U-Net architecture was evaluated more thoroughly by introducing new structure-based performance evaluation metrics and including qualitative expert evaluation of the resulting auto-segmentation quality. As in paper I, multimodal PET/ceCT image input provided superior segmentation performance compared to the single-modality CNN models.
The structure-based metrics showed quantitatively that the PET signal was vital for the sensitivity of the CNN models, as the superior PET/ceCT-based model identified 86 % of all malignant GTV structures whereas the ceCT-based model only identified 53 % of these structures. Furthermore, the majority of the qualitatively evaluated auto-segmentations (~ 90 %) generated by the best PET/ceCT-based CNN were given a quality score corresponding to substantial clinical value. Based on papers I and II, deep learning with multimodality PET/ceCT image input would be the recommended approach for auto-segmentation of the GTV in human patients with HNC. In paper III, deep learning-based auto-segmentation of the GTV in patients with AC was evaluated for the first time, using a 2D U-Net architecture. Furthermore, an extensive comparison of the impact of different single-modality and multimodality combinations of PET, ceCT, ldCT, T2W, and/or DW image input on quantitative auto-segmentation performance was conducted. For both the 86-patient and 36-patient datasets, the models based on PET/ceCT provided the highest mean overlap with the manual ground truth contours. For this task, however, comparable auto-segmentation quality was obtained for solely ceCT-based CNN models. The CNN model based solely on T2W images also obtained acceptable auto-segmentation performance and was ranked as the second-best single-modality model for the 36-patient dataset. These results indicate that deep learning could prove a versatile future tool for auto-segmentation of the GTV in patients with AC. Paper IV investigated for the first time the applicability of deep learning-based auto-segmentation of the GTV in canine patients with HNC, using a 3-dimensional (3D) U-Net architecture and ceCT image input. A transfer learning approach where CNN models were pre-trained on the human HNC data and subsequently fine-tuned on canine data was compared to training models from scratch on canine data. These two approaches resulted in similar auto-segmentation performance, which on average was comparable to the overlap metrics obtained for ceCT-based auto-segmentation in human HNC patients. Auto-segmentation in canine HNC patients appeared particularly promising for nasal cavity tumors, as the average overlap with manual contours was 25 % higher for this subgroup, compared to the average for all included tumor sites. In conclusion, deep learning with CNNs provided high-quality GTV auto-segmentations for all datasets included in this thesis. In all cases, the best-performing deep learning models resulted in an average overlap with manual contours which was comparable to the reported interobserver agreements between human experts performing manual GTV contouring for the given cancer type and imaging modality. Based on these findings, further investigation of deep learning-based auto-segmentation of the GTV in the given diagnoses would be highly warranted.
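    The thesis quantifies auto-segmentation quality with volumetric overlap and distance-based metrics computed per patient against the manual ground-truth contours. The sketch below shows one common way to compute such metrics (Dice and a mean surface distance) from binary masks; the exact metric definitions used in papers I–IV may differ (e.g., percentile Hausdorff distances or different surface sampling).

```python
# Minimal sketch: volumetric overlap (Dice) and a simple distance-based metric
# (mean surface distance) between a predicted and a manual (ground-truth) mask.
# Definitions are generic and may differ from those used in the papers.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def mean_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Average distance from each surface voxel of one mask to the other mask's surface."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    pred_surface = pred ^ ndimage.binary_erosion(pred)
    truth_surface = truth ^ ndimage.binary_erosion(truth)
    # Distance maps (in mm, given the voxel spacing) to the nearest surface voxel.
    dist_to_truth = ndimage.distance_transform_edt(~truth_surface, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surface, sampling=spacing)
    return 0.5 * (dist_to_truth[pred_surface].mean() + dist_to_pred[truth_surface].mean())

# Example with two random 3D masks standing in for GTV segmentations:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((32, 32, 32)) > 0.5
    truth = rng.random((32, 32, 32)) > 0.5
    print(dice_coefficient(pred, truth), mean_surface_distance(pred, truth))
```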

    INCEPTNET: Precise And Early Disease Detection Application For Medical Images Analyses

    In view of the recent paradigm shift toward deep AI-based image processing methods, medical image processing has advanced considerably. In this study, we propose a novel deep neural network (DNN), entitled InceptNet, for early disease detection and segmentation of medical images, with the aim of enhancing precision and performance. We also investigate the interaction of users with the InceptNet application in order to present a comprehensive application covering both the background processes and the foreground interactions with users. Fast InceptNet is shaped by the prominent U-Net architecture, and it harnesses the power of Inception modules to be fast and cost-effective while aiming to approximate an optimal local sparse structure. Adding Inception modules with various parallel kernel sizes can improve the network's ability to capture variations in the scale of the regions of interest. The model was tested on four benchmark datasets: retinal blood vessel segmentation, lung nodule segmentation, skin lesion segmentation, and breast cancer cell detection. The improvement was more significant on images with small-scale structures. The proposed method improved the accuracy from 0.9531, 0.8900, 0.9872, and 0.9881 to 0.9555, 0.9510, 0.9945, and 0.9945 on the respective datasets, demonstrating that the proposed method outperforms previous works. Furthermore, to explore the procedure from start to end, individuals who used a trial edition of InceptNet, in the form of a complete application, were presented with thirteen multiple-choice questions in order to assess the proposed method. The outcomes are evaluated by means of Human-Computer Interaction.
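    The abstract describes augmenting a U-Net-shaped network with Inception modules that apply parallel kernel sizes to capture structures at several scales. The following generic Inception-style block (PyTorch) illustrates that idea; the branch widths, kernel sizes, and the way such blocks would be wired into the encoder/decoder are assumptions rather than the published InceptNet architecture.

```python
# Generic Inception-style block with parallel kernel sizes (PyTorch).
# Branch widths and kernel sizes are illustrative; the published InceptNet
# architecture may differ in these details.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_channels, branch_channels=16):
        super().__init__()
        self.branch1 = nn.Sequential(                      # 1x1 branch
            nn.Conv2d(in_channels, branch_channels, kernel_size=1),
            nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(                      # 3x3 branch
            nn.Conv2d(in_channels, branch_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(                      # 5x5 branch
            nn.Conv2d(in_channels, branch_channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(                  # pooling branch
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, branch_channels, kernel_size=1),
            nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the parallel branches along the channel dimension,
        # letting the block respond to structures at several scales.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: 64-channel feature map in, 4 * 16 = 64 channels out.
if __name__ == "__main__":
    block = InceptionBlock(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```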

    Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning

    Background: Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently, the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose: The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods: Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) using cross-species transfer learning where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy where each fold was used as a validation set and test set once in independent model runs. Results: CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. Conclusion: Deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
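    The cross-species transfer learning approach compared here amounts to initializing the network with weights learned on human HNC CT data and then fine-tuning on canine CT data, rather than training from random initialization. A minimal sketch of that fine-tuning step is shown below; the model instance, checkpoint path, loss, and hyperparameters are placeholders, not the study's actual implementation.

```python
# Minimal sketch of cross-species transfer learning: load weights from a model
# pre-trained on human HNC CT images and fine-tune on canine data.
# The model, data loader, checkpoint path, and hyperparameters are placeholders.
import torch
from torch import nn, optim

def fine_tune(model: nn.Module, canine_loader, epochs=50, lr=1e-4,
              pretrained_path="unet3d_human_hnc.pt"):
    # Start from the human-trained weights instead of random initialization;
    # training from scratch would simply skip this load_state_dict call.
    model.load_state_dict(torch.load(pretrained_path))
    criterion = nn.BCEWithLogitsLoss()          # binary GTV vs. background
    optimizer = optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for ct_volume, gtv_mask in canine_loader:   # float tensors (B, 1, D, H, W)
            optimizer.zero_grad()
            loss = criterion(model(ct_volume), gtv_mask)
            loss.backward()
            optimizer.step()
    return model
```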

    Implementation and Training of Convolutional Neural Networks for the Segmentation of Brain Structures

    Precise delivery of radiotherapy depends on accurate segmentation of the anatomical structures surrounding the cancer tissue. With increasing knowledge of the radio-sensitivity of critical brain structures, more detailed contouring of a range of structures is required. Manual segmentation is time-consuming, and research into methods for auto-segmentation has advanced in the past decade. This thesis presents a general-purpose convolutional neural network with the U-Net architecture for auto-segmenting the brain, brainstem, Papez circuit, and right hippocampus. Several different models were trained using T1 MRI, T2 MRI, and CT images to compare the performance of models trained with the different modalities. Low-level preprocessing was applied to the images before training, and model performance was measured with the Dice score. The best-performing model for segmentation of the full brain achieved a Dice score of 0.98, whereas segmentation of the brainstem achieved a Dice score of 0.73. Furthermore, segmentation of the complex structure, the Papez circuit, attained a Dice score of 0.52, and segmentation of the hippocampus resulted in a Dice score of 0.49. The selected model performed well in segmentation of the full brain and decently for the brainstem compared with similar studies. In contrast, the segmentation results for the hippocampus were slightly lower than previously reported results. No comparison was found for the segmentation results of the Papez circuit. More preprocessing and patient data are needed to provide accurate segmentation of the smaller structures. The dataset presented a few problems, and it was discovered that using similar acquisition methods for the image sequences gives better results. The network architecture provides a solid framework for segmentation.
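    The segmentation models in this thesis are based on the U-Net architecture. The compact 2D U-Net below (PyTorch) sketches the general encoder-decoder structure with skip connections; the depth, channel widths, and single-channel input/output are illustrative and not the exact configuration trained in the thesis.

```python
# Compact 2D U-Net sketch (PyTorch) of the kind used for structure segmentation;
# depth and channel widths are illustrative only.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class UNet2D(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.bottleneck = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)   # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: one-channel 128x128 slice in, one-channel mask logits out.
if __name__ == "__main__":
    net = UNet2D()
    print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```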

    Anatomical Classification of the Gastrointestinal Tract Using Ensemble Transfer Learning

    Endoscopy is a procedure used to visualize disorders of the gastrointestinal (GI) lumen. GI disorders can occur without symptoms, which is why gastroenterologists often recommend routine examinations of the GI tract. Endoscopy allows a doctor to directly visualize the inside of the GI tract and identify the cause of symptoms, reducing the need for exploratory surgery or other invasive procedures. It can also detect the early stages of GI disorders, such as cancer, enabling prompt treatment that can improve outcomes. Endoscopic examinations generate significant numbers of GI images. Because of this vast amount of endoscopic image data, relying solely on human interpretation can be problematic. Artificial intelligence is gaining popularity in clinical medicine. It can assist in medical image analysis and early detection of diseases, help with personalized treatment planning by analyzing a patient's medical history and genomic data, and be used by surgical robots to improve precision and reduce invasiveness. It enables automated diagnosis, provides physicians with assistance, and may improve performance. One of the significant challenges is defining the specific anatomical locations of GI tract abnormalities; once these are identified, clinicians can determine appropriate treatment options, reducing the need for repeated endoscopy. Due to the difficulty of collecting annotated data, very limited research has been conducted on the localization of anatomical locations by classification of endoscopy images. In this study, we present a classification of GI tract anatomical localization based on transfer learning and ensemble learning. Our approach involves the use of an autoencoder and the Xception model. The autoencoder was initially trained on thousands of unlabelled images, and the encoder was then separated and used as a feature extractor. The Xception model was used as a second model to extract features from the input images. The extracted feature vectors were then concatenated and fed into a Convolutional Neural Network for classification. This combination of models provides a powerful and versatile solution for image classification. By using the encoder as a feature extractor that transfers the learned knowledge, it is possible to improve learning by allowing the model to focus on more relevant and useful data, which is extremely valuable when there are not enough appropriately labelled data. The Xception model, on the other hand, provides additional feature extraction capabilities. Sometimes one classifier is not enough in machine learning, depending on the problem being solved and the quality and quantity of the available data. With ensemble learning, multiple learning networks can work together to create a stronger classifier. The final classification results are obtained by combining the information from both models through the CNN model. This approach demonstrates the potential of combining multiple models to improve the accuracy of image classification tasks in the medical domain. The HyperKvasir dataset is the main dataset used in this study. It contains 4,104 labelled and 99,417 unlabelled images taken at six different locations in the GI tract: the cecum, ileum, pylorus, rectum, stomach, and Z-line. After dataset preprocessing, which included noise reduction and similarity removal, 871 labelled images remained for the purpose of this study.
Our method was more accurate than state-of-the-art studies and achieved a higher F1 score while categorizing the input images into six anatomical locations using fewer than a thousand labelled images. According to the results, feature extraction and ensemble learning increased accuracy by 5%, and a comparison with existing methods on the same dataset indicates improved performance and reduced cross-entropy loss. The proposed method can therefore be used for the classification of endoscopy images.
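    The two-branch feature-fusion idea (a separately trained autoencoder's encoder plus an Xception backbone, with the extracted features concatenated before classification into six GI-tract locations) can be sketched as follows. The encoder passed in is assumed to have been trained beforehand on the unlabelled images, the input shape and layer sizes are illustrative, inputs are assumed already preprocessed for Xception, and a simple dense classification head stands in for the paper's CNN classifier.

```python
# Minimal sketch of the two-branch feature-fusion classifier: a pre-trained
# autoencoder encoder and an Xception backbone extract features that are
# concatenated and classified into six GI-tract locations.
import tensorflow as tf

def build_fusion_classifier(encoder: tf.keras.Model,
                            input_shape=(299, 299, 3), n_classes=6):
    inputs = tf.keras.Input(shape=input_shape)

    # Branch 1: encoder half of the autoencoder (frozen, used as feature extractor).
    encoder.trainable = False
    enc_features = tf.keras.layers.Flatten()(encoder(inputs))

    # Branch 2: Xception pre-trained on ImageNet, global-average-pooled features.
    xception = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", pooling="avg")
    xcp_features = xception(inputs)

    # Fuse both feature vectors and classify (dense head as a simple stand-in).
    fused = tf.keras.layers.Concatenate()([enc_features, xcp_features])
    x = tf.keras.layers.Dense(256, activation="relu")(fused)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```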

    Better prognostic markers for nonmuscle invasive papillary urothelial carcinomas

    Bladder cancer is a common type of cancer, especially among men in developed countries. Most cancers in the urinary bladder are papillary urothelial carcinomas. They are characterized by a high recurrence frequency (up to 70 %) after local resection. It is crucial for prognosis to discover these recurrent tumours at an early stage, especially before they become muscle-invasive. Reliable prognostic biomarkers for tumour recurrence and stage progression are lacking. This is why patients diagnosed with non-muscle-invasive bladder cancer follow extensive follow-up regimens, with possible serious side effects and high costs for healthcare systems. WHO grade and tumour stage are two central biomarkers that currently have a great impact on both treatment decisions and follow-up regimens. However, there are concerns regarding the reproducibility of WHO grading, and stage classification is challenging in small and fragmented tumour material. In Paper I, we examined the reproducibility and the prognostic value of all the individual microscopic features making up the WHO grading system. Among thirteen extracted features there was considerable variation in both reproducibility and prognostic value. The only feature that was both reasonably reproducible and of statistically significant prognostic value was cell polarity. We concluded that further validation studies are needed on these features, and that future grading systems should be based on well-defined features with true prognostic value. With the implementation of immunotherapy, there is increasing interest in tumour immune response and the tumour microenvironment. In a search for better prognostic biomarkers for tumour recurrence and stage progression, in Paper II we investigated the prognostic value of tumour-infiltrating immune cells (CD4, CD8, CD25 and CD138) and previously investigated cell proliferation markers (Ki-67, PPH3 and MAI). Low Ki-67 and tumour multifocality were associated with increased recurrence risk. Recurrence risk was not affected by the composition of immune cells. For stage progression, the only prognostic immune cell marker was CD25. High values of MAI were also strongly associated with stage progression. However, in a multivariate analysis, the most prognostic feature was a combination of MAI and CD25. BCG instillations in the bladder are indicated for intermediate- and high-risk non-muscle-invasive bladder cancer patients. This long-established immunotherapy has been proven to reduce both recurrence and progression risk, although it is frequently accompanied by unpleasant side effects. As many as 30-50% of high-risk patients receiving BCG instillations fail treatment by developing high-grade recurrences. They not only suffer unnecessary side effects, but also experience a delay in further treatment. Together with colleagues at three different Dutch hospitals, in Paper III we looked at the prognostic and predictive value of T1 substaging. A T1 tumour invades the lamina propria, and we wanted to separate tumours with microinvasion from those with extensive invasion. We found that BCG failure was more common among patients with extensive invasion. Furthermore, T1 substaging was associated with both high-grade recurrence-free and progression-free survival. Finally, in Paper IV, we wanted to investigate the prognostic value of two classical immunohistochemical markers, p53 and CK20, and compare them with previously investigated proliferation markers.
p53 is a surrogate marker for mutations in the gene TP53, considered to be a main characteristic of muscle-invasive tumours. CK20 is a surrogate marker for luminal tumours in the molecular classification of bladder cancer, and is frequently used to distinguish reactive urothelial changes from urothelial carcinoma in situ. We found positivity for both p53 and CK20 to be significantly associated with stage progression, although neither performed better than WHO grade and stage. The proliferation marker MAI had the highest prognostic value in our study. No combination of variables performed better in a multivariate analysis than MAI alone.
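    Papers II–IV assess the prognostic value of markers such as MAI, CD25, p53, and CK20 in multivariable analyses of recurrence and progression. The sketch below shows one common way such a multivariable survival analysis can be set up with a Cox proportional hazards model; the synthetic data, column names, and use of the lifelines library are illustrative assumptions, not the thesis' actual statistical pipeline.

```python
# Minimal sketch of a multivariable survival analysis comparing dichotomized
# prognostic markers for stage progression. All data below are synthetic
# placeholders; the thesis' actual methodology may differ.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "MAI_high":     rng.integers(0, 2, n),
    "CD25_high":    rng.integers(0, 2, n),
    "p53_positive": rng.integers(0, 2, n),
})
# Simulated follow-up (months) and progression events, purely for illustration.
df["followup_months"] = rng.exponential(scale=60, size=n) / (1 + df["MAI_high"])
df["progression"] = rng.integers(0, 2, n)

# Fit a Cox proportional hazards model with all markers as covariates and
# report hazard ratios with confidence intervals.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="progression")
cph.print_summary()
```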

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non- or minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand, and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to improve the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head and neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features with radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically.
This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head and neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
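    Study IV finds that fusing learned deep features with conventional radiomic features boosts classification power for benign-versus-malignant nodule discrimination. The sketch below illustrates that fusion idea with placeholder feature arrays and a generic cross-validated classifier; the actual feature extraction, classifier, and evaluation protocol of the study are not reproduced here.

```python
# Minimal sketch of fusing deep features with conventional radiomic features for
# benign-vs-malignant nodule classification (the general idea behind Study IV).
# `deep_features` and `radiomic_features` are placeholder arrays assumed to have
# been extracted beforehand (e.g., from a CNN and a radiomics toolkit).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_nodules = 200
deep_features = rng.normal(size=(n_nodules, 128))        # learned CNN features
radiomic_features = rng.normal(size=(n_nodules, 50))      # hand-crafted radiomics
labels = rng.integers(0, 2, size=n_nodules)               # 0 = benign, 1 = malignant

# Fuse the two feature sets by simple concatenation and evaluate with cross-validation.
fused = np.concatenate([deep_features, radiomic_features], axis=1)
clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", scores.mean())
```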