175 research outputs found

    Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study

    BACKGROUND: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires substantial clinician time to manually delineate radiosensitive organs at risk. This planning process can delay treatment and introduces interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineations, which quantifies the deviation between organ at risk surface contours rather than volumes and better reflects the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets from centers and countries different from those used for model training. CONCLUSIONS: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
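    As a rough illustration of the surface-based idea behind this metric (not the study's reference implementation, which also accounts for anisotropic voxel spacing and surface-element areas), a minimal voxel-count sketch of a tolerance-based surface Dice could look like the following; all function and variable names are illustrative.

```python
# Minimal voxel-based sketch of a surface Dice style metric at a given
# tolerance, assuming isotropic voxels. Illustration only, not the paper's
# reference implementation.
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boolean map of voxels on the surface of a binary mask."""
    eroded = ndimage.binary_erosion(mask)
    return mask & ~eroded

def surface_dice(pred, gt, tolerance_mm=1.0, spacing_mm=1.0):
    """Fraction of the two surfaces lying within `tolerance_mm` of each other."""
    pred_surf = surface_voxels(pred.astype(bool))
    gt_surf = surface_voxels(gt.astype(bool))
    # Distance (in mm) from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing_mm)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing_mm)
    pred_close = (dist_to_gt[pred_surf] <= tolerance_mm).sum()
    gt_close = (dist_to_pred[gt_surf] <= tolerance_mm).sum()
    return (pred_close + gt_close) / (pred_surf.sum() + gt_surf.sum())

# Toy example: two slightly shifted spheres on a 64^3 grid.
zz, yy, xx = np.mgrid[:64, :64, :64]
gt = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
pred = (zz - 33) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
print(round(surface_dice(pred, gt, tolerance_mm=2.0), 3))
```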

    DEEP LEARNING IN COMPUTER-ASSISTED MAXILLOFACIAL SURGERY


    Implementation and Training of Convolutional Neural Networks for the Segmentation of Brain Structures

    Precise delivery of radiotherapy depends on accurate segmentation of the anatomical structures surrounding the cancer tissue. With increasing knowledge of the radiosensitivity of critical brain structures, more detailed contouring of a range of structures is required. Manual segmentation is time-consuming, and research into methods for auto-segmentation has advanced in the past decade. This thesis presents a general-purpose convolutional neural network with the U-Net architecture for auto-segmenting the brain, brainstem, Papez circuit, and right hippocampus. Several models were trained using T1 MRI, T2 MRI, and CT images to compare the performance of models trained on the different modalities. Basic preprocessing was applied to the images before training, and model performance was measured with the Dice score. The best-performing model for segmentation of the full brain achieved a Dice score of 0.98, whereas segmentation of the brainstem achieved a Dice score of 0.73. Segmentation of the more complex Papez circuit attained a Dice score of 0.52, and segmentation of the hippocampus resulted in a Dice score of 0.49. Compared with similar studies, the selected model performed well on the full brain and reasonably well on the brainstem, while the hippocampus results were slightly lower than previously reported. No comparison was found for the segmentation results of the Papez circuit. More preprocessing and more patient data are necessary to provide accurate segmentation of the smaller structures. The dataset presented a few problems, and it was found that using a consistent acquisition protocol across image sequences gives better results. The network architecture provides a solid framework for segmentation. Master's thesis in medical technology (MTEK39).
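    For readers unfamiliar with the architecture, the sketch below shows a minimal 3D U-Net style network in PyTorch with two downsampling levels and skip connections; the channel counts, depth, and single-class output are illustrative assumptions and do not reflect the thesis configuration.

```python
# Minimal 3D U-Net style encoder-decoder with skip connections (illustrative).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_channels=1, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_channels, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 (skip) + 32 (upsampled)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # logits; apply sigmoid for a mask

# Shape check on a small random volume (batch, channel, depth, height, width).
net = TinyUNet3D()
print(net(torch.randn(1, 1, 32, 32, 32)).shape)  # torch.Size([1, 1, 32, 32, 32])
```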

    Automated triaging of head MRI examinations using convolutional neural networks

    The growing demand for head magnetic resonance imaging (MRI) examinations, along with a global shortage of radiologists, has led to an increase in the time taken to report head MRI scans around the world. For many neurological conditions, this delay can result in increased morbidity and mortality. An automated triaging tool could reduce reporting times for abnormal examinations by identifying abnormalities at the time of imaging and prioritizing the reporting of these scans. In this work, we present a convolutional neural network for detecting clinically relevant abnormalities in T2-weighted head MRI scans. Using a validated neuroradiology report classifier, we generated a labelled dataset of 43,754 scans from two large UK hospitals for model training, and demonstrate accurate classification (area under the receiver operating characteristic curve (AUC) = 0.943) on a test set of 800 scans labelled by a team of neuroradiologists. Importantly, when trained on scans from only a single hospital, the model generalized to scans from the other hospital (ΔAUC ≤ 0.02). A simulation study demonstrated that our model would reduce the mean reporting time for abnormal examinations from 28 days to 14 days and from 9 days to 5 days at the two hospitals, demonstrating feasibility for use in a clinical triage environment. Comment: Accepted as an oral presentation at Medical Imaging with Deep Learning (MIDL) 202
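    As a minimal illustration of how such a classifier's test-set performance is summarised with the area under the ROC curve, the sketch below uses scikit-learn on randomly generated stand-in labels and scores; these are not the study's data and the resulting AUC will not match the reported 0.943.

```python
# Sketch: summarising binary classification performance with the ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=800)   # 0 = normal, 1 = abnormal (stand-in labels)
# Stand-in model probabilities, loosely correlated with the labels.
scores = np.clip(labels * 0.4 + rng.normal(0.3, 0.25, size=800), 0, 1)

print(f"AUC = {roc_auc_score(labels, scores):.3f}")
```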

    Bayesian generative learning of brain and spinal cord templates from neuroimaging datasets

    In the field of neuroimaging, Bayesian modelling techniques have been widely adopted and recognised as powerful tools for extracting quantitative anatomical and functional information from medical scans. Nevertheless, the potential of Bayesian inference has not yet been fully exploited, as many available tools rely on point estimation techniques, such as maximum likelihood estimation, rather than on full Bayesian inference. The aim of this thesis is to explore the value of approximate learning schemes, such as variational Bayes, for performing inference from brain and spinal cord MRI data. The applications explored in this work mainly concern image segmentation and atlas construction, with particular emphasis on the problem of learning shape and intensity priors from large training data sets of structural MR scans. The resulting computational tools are intended to enable integrated brain and spinal cord morphometric analyses, as opposed to the approach most commonly adopted in neuroimaging, which consists of optimising separate tools for brain and spine morphometrics.
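    As a small illustration of approximate Bayesian inference in a generative intensity model, the sketch below fits a variational Bayesian Gaussian mixture to synthetic tissue-like intensities with scikit-learn; it is not the thesis' model, which learns shape and intensity priors from structural MR scans, so treat the class names and parameters as illustrative assumptions.

```python
# Sketch: variational Bayesian mixture modelling of image intensities.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic "image" intensities drawn from three tissue-like classes.
intensities = np.concatenate([
    rng.normal(0.2, 0.03, 4000),   # e.g. CSF-like
    rng.normal(0.5, 0.05, 6000),   # e.g. grey-matter-like
    rng.normal(0.8, 0.04, 5000),   # e.g. white-matter-like
]).reshape(-1, 1)

# Variational inference over mixture weights, means and variances;
# superfluous components are down-weighted by the Dirichlet prior.
model = BayesianGaussianMixture(n_components=6, weight_concentration_prior=1e-2,
                                max_iter=500, random_state=0)
labels = model.fit_predict(intensities)

print("components actually used:", np.unique(labels).size)
print("estimated means:", np.round(np.sort(model.means_.ravel()), 2))
```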

    SURGICAL NAVIGATION AND AUGMENTED REALITY FOR MARGINS CONTROL IN HEAD AND NECK CANCER

    Head and neck malignancies are a heterogeneous group of tumors with differing pathological, epidemiological, and prognostic characteristics. Surgery, with ablation aimed at complete removal of the tumor, represents the mainstay of treatment for the large majority of head and neck cancers; radiotherapy and systemic therapy also have a substantial role in their multidisciplinary management. The quality of surgical ablation is intimately related to margin status evaluated at the microscopic level. Margin involvement is invariably an unfavorable prognostic factor and mandates escalation of postoperative treatment by adding concomitant chemotherapy to radiotherapy, accordingly increasing the toxicity of the overall treatment. The rate of margin involvement in the head and neck is among the highest in the entire field of surgical oncology.
    In this context, the present PhD project aimed to test the utility of 2 technologies, namely surgical navigation with 3-dimensional rendering and pico-projector-based augmented reality, in decreasing the rate of involved margins during oncologic surgical ablations in the craniofacial area. Experiments were performed at the University of Brescia, the University of Padua, and the University Health Network (Toronto, Ontario, Canada). The research activities completed during this PhD demonstrated that surgical navigation with 3-dimensional rendering confers a higher quality on oncologic ablations in the head and neck, irrespective of whether an open or endoscopic surgical technique is used. These benefits come with no relevant drawbacks from a logistical and practical standpoint, and no major adverse events were observed; implementation of this technology into standard care is therefore the logical next step. However, whether a genuine prognostic advantage exists will require longer and larger studies to be formally addressed. On the other hand, pico-projector-based augmented reality did not show sufficient advantages to encourage translation into the clinical setting. Although a clear practical advantage was observed when projecting osteotomy lines onto the surgical field, no substantial benefit was measured when comparing this technology with surgical navigation with 3-dimensional rendering. While the technology may retain value from an educational standpoint, its performance in the preclinical setting in terms of surgical margin optimization does not favor clinical translation for this specific aim.

    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    In the last twenty years, a new way of practicing medicine has focused on the problems and needs of each patient as an individual, thanks to significant advances in healthcare technology: so-called personalized medicine. In surgical treatments, personalization has been possible thanks to key technologies adapted to the specific anatomy of each patient and the needs of the physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing, and artificial intelligence (AI) have previously supported this individualized medicine in many ways. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows. In the first part of this dissertation, we present a novel framework for acral tumor resection combining intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that maintained the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). The system achieved an overall accuracy of 1.88 mm, evaluated on the patient-specific 3D printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin. We then propose an augmented reality navigation system that uses 3D printed surgical guides with a tracking pattern, enabling automatic patient-to-image registration in orthopedic oncology. This tool fits on the patient only in a pre-designed location, in this case bone tissue. The solution was developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting with an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor, and medical images overlaid on the phantom and the patient. To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto a real-world environment using a 3D printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user, combining free and open-source software with an open-access protocol. The proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers.
    In the next section of the thesis, we show the benefits of combining these technologies during different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient's anatomy and the tumor's location. A 3D printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration. The system was evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was then tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results and the positive feedback obtained from surgeons and patients suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and the patient experience. In the final section, two surgical navigation systems, based on optical tracking and augmented reality, were developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we proposed a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including the automatic creation of sacral virtual 3D models for trajectory definition using artificial intelligence and intraoperative patient-to-image registration. To conclude, in this thesis we have demonstrated that the combination of technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios. Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid (International Mention). President: María Jesús Ledesma Carbayo. Secretary: María Arrate Muñoz Barrutia. Committee member: Csaba Pinte
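    As a hedged sketch of the kind of point-based patient-to-image registration step these workflows rely on (corresponding fiducials from a 3D printed guide localised in both image space and patient/tracker space), the code below estimates a rigid transform with the standard SVD (Kabsch/Horn) method and reports the fiducial registration error; all coordinates are invented for illustration and this is not the thesis' implementation.

```python
# Sketch: rigid point-based registration (image space -> patient space).
import numpy as np

def rigid_register(src, dst):
    """Return rotation R and translation t minimising ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Fiducials in image space (e.g. from the guide design), in mm (invented values).
image_pts = np.array([[0, 0, 0], [40, 0, 0], [0, 30, 0], [0, 0, 25]], float)

# The same fiducials as localised intraoperatively (rotated, shifted, noisy).
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
patient_pts = image_pts @ R_true.T + np.array([100.0, -50.0, 30.0])
patient_pts += np.random.default_rng(0).normal(0, 0.5, patient_pts.shape)

R, t = rigid_register(image_pts, patient_pts)
fre = np.linalg.norm(image_pts @ R.T + t - patient_pts, axis=1).mean()
print(f"fiducial registration error: {fre:.2f} mm")
```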