57 research outputs found

    Transferability of Deep Learning Algorithms for Malignancy Detection in Confocal Laser Endomicroscopy Images from Different Anatomical Locations of the Upper Gastrointestinal Tract

    Squamous Cell Carcinoma (SCC) is the most common cancer type of the epithelium and is often detected at a late stage. Besides invasive diagnosis of SCC by means of biopsy and histopathologic assessment, Confocal Laser Endomicroscopy (CLE) has emerged as a noninvasive method that has been used successfully to diagnose SCC in vivo. Interpretation of CLE images, however, requires extensive training, which limits the method's applicability and use in clinical practice. To aid diagnosis of SCC in a broader scope, automatic detection methods have been proposed. This work compares two methods with regard to their applicability in a transfer learning sense, i.e., training on one tissue type (from one clinical team) and applying the learnt classification system to another entity (different anatomy, different clinical team). Besides a previously proposed patch-based method based on convolutional neural networks, a novel image-level classification method (based on a pre-trained Inception V3 network with dedicated preprocessing and interpretation of class activation maps) is proposed and evaluated. The newly presented approach improves recognition performance, yielding accuracies of 91.63% on the first data set (oral cavity) and 92.63% on a joint data set. Generalization from the oral cavity to the second data set (vocal folds) led to area-under-the-ROC-curve values similar to those obtained by training directly on the vocal folds data set, indicating good generalization. Comment: Erratum for version 1, correcting the number of CLE image sequences used in one data set.
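The class activation maps mentioned in this abstract can be illustrated with a minimal numpy sketch. This is a generic illustration, not the authors' implementation: the feature maps and classifier weights below are toy stand-ins for the outputs of a pre-trained network such as Inception V3. Given the final convolutional feature maps and the target class's weights in a global-average-pooling classifier head, the CAM is their weighted sum over channels.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of the final conv feature maps (the CAM technique).

    feature_maps:  array of shape (C, H, W) from the last conv layer
    class_weights: array of shape (C,) -- the target class's weights in a
                   global-average-pooling classifier head
    """
    # Collapse the channel axis: cam[h, w] = sum_c w[c] * F[c, h, w]
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for overlay/display
    return cam

# Toy example: 4 feature maps of size 3x3
rng = np.random.default_rng(0)
fmaps = rng.random((4, 3, 3))
weights = np.array([0.5, -0.2, 0.1, 0.7])
cam = class_activation_map(fmaps, weights)
```

The normalized map can then be upsampled to the input image size and thresholded to localize the regions driving the image-level prediction.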

    Development of deep learning methods for head and neck cancer detection in hyperspectral imaging and digital pathology for surgical guidance

    Surgeons performing routine cancer resections utilize palpation and visual inspection, along with time-consuming microscopic tissue analysis, to ensure removal of cancer. Despite this, inadequate surgical cancer margins are reported for up to 10-20% of head and neck squamous cell carcinoma (SCC) operations. There exists a need for surgical guidance with optical imaging to ensure complete cancer resection in the operating room. The objective of this dissertation is to evaluate hyperspectral imaging (HSI) as a non-contact, label-free optical imaging modality to provide intraoperative diagnostic information. For comparison of different optical methods, autofluorescence, RGB composite images synthesized from HSI, and two fluorescent dyes are also acquired and investigated for head and neck cancer detection. A novel and comprehensive dataset of 585 excised tissue specimens from 204 patients undergoing routine head and neck cancer surgeries was obtained. The first aim was to use SCC tissue specimens to determine the potential of HSI for surgical guidance in the challenging task of head and neck SCC detection. It is hypothesized that HSI could reduce time and provide quantitative cancer predictions. State-of-the-art deep learning algorithms were developed for SCC detection in 102 patients and compared to other optical methods. HSI detected SCC with a median AUC score of 85%, and several anatomical locations demonstrated good SCC detection, such as the larynx, oropharynx, hypopharynx, and nasal cavity. To understand the ability of HSI for SCC detection, the most important spectral features were calculated and correlated with known cancer physiology signals, notably oxygenated and deoxygenated hemoglobin. The second aim was to evaluate HSI for tumor detection in thyroid and salivary glands, and RGB images were synthesized using the spectral response curves of the human eye for comparison.
Using deep learning, HSI detected thyroid tumors with an 86% average AUC score, which outperformed fluorescent dyes and autofluorescence, but HSI-synthesized RGB imagery achieved a 90% AUC score. The last aim was to develop deep learning algorithms for head and neck cancer detection in hundreds of digitized histology slides. Slides containing SCC or thyroid carcinoma can be distinguished from normal slides with 94% and 99% AUC scores, respectively, and SCC and thyroid carcinoma can be localized within whole-slide images with 92% and 95% AUC scores, respectively. In conclusion, the outcomes of this thesis work demonstrate that HSI and deep learning methods could aid surgeons and pathologists in detecting head and neck cancers.
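The RGB synthesis described above, collapsing a hyperspectral cube using spectral response curves of the human eye, can be sketched as follows. This is a hedged illustration rather than code from the dissertation: the Gaussian curves are simple stand-ins for the actual CIE-style response functions, and `synthesize_rgb` is a hypothetical helper name.

```python
import numpy as np

def synthesize_rgb(hsi_cube, wavelengths):
    """Collapse a hyperspectral cube (H, W, B) to an RGB composite by
    weighting each spectral band with a response curve and integrating.

    The Gaussians below are stand-ins for human-eye response functions,
    centered near 600 nm (R), 550 nm (G), and 450 nm (B).
    """
    def gaussian(center, width=40.0):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    responses = np.stack([gaussian(600.0), gaussian(550.0), gaussian(450.0)])
    responses /= responses.sum(axis=1, keepdims=True)  # each curve sums to 1
    # rgb[h, w, c] = sum_b hsi_cube[h, w, b] * responses[c, b]
    rgb = np.tensordot(hsi_cube, responses, axes=([2], [1]))
    return rgb

wavelengths = np.linspace(450, 900, 91)  # e.g. 450-900 nm in 5 nm steps
cube = np.random.rand(4, 4, 91)          # toy reflectance cube in [0, 1]
rgb = synthesize_rgb(cube, wavelengths)
```

Because each response curve is normalized, every RGB value is a weighted average of band reflectances and stays in the cube's value range, which makes the composite directly displayable.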

    Endoscopy

    Endoscopy is a fast-moving field, and new techniques are continuously emerging. In recent decades, endoscopy has evolved and branched out from a diagnostic modality to enhanced video and computer-assisted imaging with impressive interventional capabilities. Modern endoscopy has seen advances not only in the types of endoscopes available, but also in the types of interventions amenable to the endoscopic approach, and many more developments are currently being trialed. Modern endoscopic equipment provides physicians with the benefit of many technical advances. Endoscopy is an effective and safe procedure even in special populations, including pediatric patients and renal transplant patients. It serves as a tool for diagnosis and therapeutic intervention in many organ systems, including the gastrointestinal tract, head and neck, urinary tract, and others.

    Cognitive Supervision for Robot-Assisted Minimally Invasive Laser Surgery

    Biomedical Engineering, Robotics and Automation, User Interfaces and Human Computer Interaction, Minimally Invasive Surgery

    Evaluering av maskinlæringsmetoder for automatisk tumorsegmentering (Evaluation of machine learning methods for automatic tumor segmentation)

    The definition of target volumes and organs at risk (OARs) is a critical part of radiotherapy planning. In routine practice, this is typically done manually by clinical experts who contour the structures in medical images prior to dosimetric planning. This is a time-consuming and labor-intensive task. Moreover, manual contouring is inherently a subjective task and substantial contour variability can occur, potentially impacting on radiotherapy treatment and image-derived biomarkers. Automatic segmentation (auto-segmentation) of target volumes and OARs has the potential to save time and resources while reducing contouring variability. Recently, auto-segmentation of OARs using machine learning methods has been integrated into the clinical workflow by several institutions and such tools have been made commercially available by major vendors. The use of machine learning methods for auto-segmentation of target volumes including the gross tumor volume (GTV) is less mature at present but is the focus of extensive ongoing research. The primary aim of this thesis was to investigate the use of machine learning methods for auto-segmentation of the GTV in medical images. Manual GTV contours constituted the ground truth in the analyses. Volumetric overlap and distance-based metrics were used to quantify auto-segmentation performance. Four different image datasets were evaluated. The first dataset, analyzed in papers I–II, consisted of positron emission tomography (PET) and contrast-enhanced computed tomography (ceCT) images of 197 patients with head and neck cancer (HNC). The ceCT images of this dataset were also included in paper IV. Two datasets were analyzed separately in paper III, namely (i) PET, ceCT, and low-dose CT (ldCT) images of 86 patients with anal cancer (AC), and (ii) PET, ceCT, ldCT, and T2 and diffusion-weighted (T2W and DW, respectively) MR images of a subset (n = 36) of the aforementioned AC patients. 
The last dataset consisted of ceCT images of 36 canine patients with HNC and was analyzed in paper IV. In paper I, three approaches to auto-segmentation of the GTV in patients with HNC were evaluated and compared, namely conventional PET thresholding, classical machine learning algorithms, and deep learning using a 2-dimensional (2D) U-Net convolutional neural network (CNN). For the latter two approaches the effect of imaging modality on auto-segmentation performance was also assessed. Deep learning based on multimodality PET/ceCT image input resulted in superior agreement with the manual ground truth contours, as quantified by geometric overlap and distance-based performance evaluation metrics calculated on a per-patient basis. Moreover, only deep learning provided adequate performance for segmentation based solely on ceCT images. For segmentation based on PET only, all three approaches provided adequate segmentation performance, though deep learning ranked first, followed by classical machine learning and then PET thresholding. In paper II, deep learning-based auto-segmentation of the GTV in patients with HNC using a 2D U-Net architecture was evaluated more thoroughly by introducing new structure-based performance evaluation metrics and including qualitative expert evaluation of the resulting auto-segmentation quality. As in paper I, multimodal PET/ceCT image input provided superior segmentation performance, compared to the single modality CNN models. The structure-based metrics showed quantitatively that the PET signal was vital for the sensitivity of the CNN models, as the superior PET/ceCT-based model identified 86 % of all malignant GTV structures whereas the ceCT-based model only identified 53 % of these structures. Furthermore, the majority of the qualitatively evaluated auto-segmentations (~ 90 %) generated by the best PET/ceCT-based CNN were given a quality score corresponding to substantial clinical value.
Based on papers I and II, deep learning with multimodality PET/ceCT image input would be the recommended approach for auto-segmentation of the GTV in human patients with HNC. In paper III, deep learning-based auto-segmentation of the GTV in patients with AC was evaluated for the first time, using a 2D U-Net architecture. Furthermore, an extensive comparison of the impact of different single modality and multimodality combinations of PET, ceCT, ldCT, T2W, and/or DW image input on quantitative auto-segmentation performance was conducted. For both the 86-patient and 36-patient datasets, the models based on PET/ceCT provided the highest mean overlap with the manual ground truth contours. For this task, however, comparable auto-segmentation quality was obtained for solely ceCT-based CNN models. The CNN model based solely on T2W images also obtained acceptable auto-segmentation performance and was ranked as the second-best single modality model for the 36-patient dataset. These results indicate that deep learning could prove a versatile future tool for auto-segmentation of the GTV in patients with AC. Paper IV investigated for the first time the applicability of deep learning-based auto-segmentation of the GTV in canine patients with HNC, using a 3-dimensional (3D) U-Net architecture and ceCT image input. A transfer learning approach where CNN models were pre-trained on the human HNC data and subsequently fine-tuned on canine data was compared to training models from scratch on canine data. These two approaches resulted in similar auto-segmentation performance, which on average was comparable to the overlap metrics obtained for ceCT-based auto-segmentation in human HNC patients. Auto-segmentation in canine HNC patients appeared particularly promising for nasal cavity tumors, as the average overlap with manual contours was 25 % higher for this subgroup, compared to the average for all included tumor sites.
In conclusion, deep learning with CNNs provided high-quality GTV auto-segmentations for all datasets included in this thesis. In all cases, the best-performing deep learning models resulted in an average overlap with manual contours which was comparable to the reported interobserver agreements between human experts performing manual GTV contouring for the given cancer type and imaging modality. Based on these findings, further investigation of deep learning-based auto-segmentation of the GTV in the given diagnoses would be highly warranted.
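The volumetric overlap metrics used throughout this thesis can be illustrated with the Dice coefficient, the standard overlap measure for comparing a predicted binary mask against the manual ground-truth contour. This is a generic numpy sketch, not code from the thesis:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Volumetric overlap (Dice) between a predicted and a manual
    (ground-truth) binary segmentation mask: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # 16-voxel "tumor" in the manual contour
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True    # shifted prediction; 9 voxels overlap
print(dice_coefficient(pred, truth))  # → 0.5625
```

A Dice of 1.0 means perfect overlap and 0.0 means none; distance-based metrics such as the Hausdorff distance complement it by penalizing contour outliers that overlap measures miss.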

    Multimodal optical systems for clinical oncology

    This thesis presents three multimodal optical (light-based) systems designed to improve the capabilities of existing optical modalities for cancer diagnostics and theranostics. Optical diagnostic and therapeutic modalities have seen tremendous success in improving the detection, monitoring, and treatment of cancer. For example, optical spectroscopies can accurately distinguish between healthy and diseased tissues, fluorescence imaging can light up tumours for surgical guidance, and laser systems can treat many epithelial cancers. However, despite these advances, prognoses for many cancers remain poor, positive margin rates following resection remain high, and visual inspection and palpation remain crucial for tumour detection. The synergistic combination of multiple optical modalities, as presented here, offers a promising solution. The first multimodal optical system (Chapter 3) combines Raman spectroscopic diagnostics with photodynamic therapy using a custom-built multimodal optical probe. Crucially, this system demonstrates the feasibility of nanoparticle-free theranostics, which could simplify the clinical translation of cancer theranostic systems without sacrificing diagnostic or therapeutic benefit. The second system (Chapter 4) applies computer vision to Raman spectroscopic diagnostics to achieve spatial spectroscopic diagnostics. It provides an augmented reality display of the surgical field-of-view, overlaying spatially co-registered spectroscopic diagnoses onto imaging data. This enables the translation of Raman spectroscopy from a 1D technique to a 2D diagnostic modality and overcomes the trade-off between diagnostic accuracy and field-of-view that has limited optical systems to date. The final system (Chapter 5) integrates fluorescence imaging and Raman spectroscopy for fluorescence-guided spatial spectroscopic diagnostics. 
This facilitates macroscopic tumour identification to guide accurate spectroscopic margin delineation, enabling the spectroscopic examination of suspicious lesions across large tissue areas. Together, these multimodal optical systems demonstrate that the integration of multiple optical modalities has the potential to improve patient outcomes through enhanced tumour detection and precision-targeted therapies.

    Vesicant burns: Is laser debridement ('lasablation') a viable method to accelerate wound healing?

    The vesicant agents Lewisite and sulphur mustard were first deployed as chemical weapons in the First World War at Ypres in 1917. These agents cause immediate blistering followed by deep dermal to full-thickness burns and can be fatal in 1-3% of cases (NATO 1996). Dermabrasion can accelerate the healing of these types of burn injuries without the need for additional split skin grafting (Rice 1997; Rice, Brown & Lam et al. 1999). Current laser technology allows accurate laser debridement of burn wounds. The corollary of this is that it should be feasible to partially laser debride a vesicant burn to accelerate healing. This treatment should be more precise than dermabrasion, encouraging rapid healing. This thesis investigated the application of modern CO2 and Erbium:YAG lasers for debriding Lewisite burns in a representative model. A large white pig model (n=6) was used to investigate the effectiveness of CO2 and Erbium:YAG lasers in ablation of established Lewisite burns. Burns underwent treatment at four days post-exposure and were histologically assessed at one, two and three weeks thereafter for the rate of epithelial healing. By the first week, re-epithelialisation rates in the laser-debrided groups were accelerated by a factor of four compared to untreated, historical controls (ANOVA, p = 0.006 for pulsed CO2 and p = 0.012 for Erbium:YAG). Ablation of the burn eschar was thought to accelerate the rate of healing by causing partial debridement. This method has been termed 'lasablation'. A concurrent study of CO2 and Erbium:YAG laser debridement of normal skin resulted in a superficial resurfacing burn which healed within a week, regardless of the type of laser. The burn depth was in the order of 150 µm. The superficial nature of these burns allowed for rapid healing, consistent with previous findings. All work was carried out in accordance with the Animals (Scientific Procedures) Act 1986 with Home Office approval.

    The Rotterdam Study: 2014 objectives and design update

    The Rotterdam Study is a prospective cohort study ongoing since 1990 in the city of Rotterdam in The Netherlands. The study targets cardiovascular, endocrine, hepatic, neurological, ophthalmic, psychiatric, dermatological, oncological, and respiratory diseases. As of 2008, 14,926 subjects aged 45 years or over comprise the Rotterdam Study cohort. The findings of the Rotterdam Study have been presented in over 1,000 research articles and reports (see www.erasmus-epidemiology.nl/rotterdamstudy). This article gives the rationale of the study and its design. It also presents a summary of the major findings and an update of the objectives and methods.