170 research outputs found

    Modelling the head and neck region for microwave imaging of cervical lymph nodes

    Integrated master's thesis (tese de mestrado integrado), Biomedical Engineering and Biophysics (Radiation in Diagnosis and Therapy), Universidade de Lisboa, Faculdade de Ciências, 2020. The term "head and neck cancer" refers to any cancer that begins in the epithelial cells of the oral and nasal cavities, paranasal sinuses, salivary glands, pharynx, and larynx. In 2018, these malignant tumours had a worldwide incidence of about 887,659 new cases and a mortality rate above 51%. In approximately 80% of the cases newly diagnosed that year, tumour cells had spread to other regions of the body through nearby blood and lymphatic vessels. To determine how far the cancer has progressed and which therapies to follow, it is essential to assess the first lymph nodes that receive drainage from the primary tumour, the sentinel nodes, which are therefore the most likely first targets of tumour cells. Healthy sentinel nodes imply a lower probability of metastases, i.e. new tumour foci arising from the spread of the cancer to other organs. The standard procedure for diagnosing the cervical lymph nodes (the lymph nodes of the head and neck region) and staging the cancer consists of surgically removing these nodes and performing histopathology. Besides being invasive, surgical excision of lymph nodes endangers patients' mental and physical health as well as their quality of life. Pain, disfigurement (due to scarring), and loss of speech or of the ability to swallow are some of the possible consequences of removing lymph nodes from the head and neck region.
Additionally, the risk of infection and lymphedema (the accumulation of lymph in the interstitial tissues) increases significantly when a large number of healthy lymph nodes is removed. The burden on healthcare systems is also high, owing to the need to monitor these patients and to the subsequent therapies and morbidity-related care, such as manual lymphatic drainage and physiotherapy. The development of new imaging technologies for the head and neck requires realistic models that simulate the behaviour and properties of biological tissues. Medical microwave imaging is a promising, non-invasive technique that uses non-ionizing radiation, i.e. signals at microwave frequencies whose behaviour depends on the dielectric contrast between the tissues they traverse, making it possible to identify regions or structures of interest and thus complement diagnosis. Given these characteristics, however, this modality can only be used to assess shallow anatomical regions. Studies indicate that lymph nodes containing tumour cells have dielectric properties distinct from those of healthy lymph nodes. For this reason, and because of their shallow location, we consider the lymph nodes of the head and neck region to be excellent candidates for radar-based microwave imaging as a diagnostic tool. To date, no studies have developed models of the head and neck region focused on realistically representing the cervical lymph nodes.
For this reason, this project consisted of developing two generators of three-dimensional phantoms of the head and neck region: a generator of simple numerical phantoms (generator I) and a generator of more complex, anatomically realistic numerical phantoms, derived from magnetic resonance images and including realistic dielectric properties of biological tissues (generator II). Both generators produce phantoms with different levels of complexity, supporting different stages of the development of medical microwave imaging devices. All generated phantoms, and especially the anatomically realistic ones, can later be 3D printed. Building generator I involved modelling the head and neck region in accordance with human anatomy and the distribution of the main tissues, and creating an interface for customizing the models (for example, tissues can be included or removed depending on the purpose of each generated model). A detailed study of this region led to the inclusion of bone, muscle, and adipose tissues, skin, and lymph nodes in the models. Although these phantoms are quite simple, they are essential at the start of the development of medical microwave imaging devices dedicated to diagnosing cervical lymph nodes. Owing to its high complexity, the construction of generator II was divided into three major stages. The first stage consisted of creating a pipeline to process the magnetic resonance images.
This pipeline included: data normalization; background subtraction using manually constructed binary masks; image filtering with linear filters (e.g. ideal, Gaussian, and Butterworth low-pass filters) and non-linear filters (e.g. the median filter); and unsupervised machine learning algorithms, such as K-means, Agglomerative Hierarchical Clustering, DBSCAN, and BIRCH, to segment the biological tissues present in the cervical region. Since each of these unsupervised algorithms requires different hyperparameters, a detailed study is needed to understand how each algorithm works individually and how it performs on the type of data treated in this project (i.e. magnetic resonance data), so as to choose empirically the range of values for each hyperparameter and the combinations to be tested. This is followed by an evaluation of the hyperparameter combination that yields the best segmentation of the anatomical structures. Two complementary methodologies were combined for this evaluation: metrics that assess clustering quality (e.g. the Silhouette Coefficient, the Davies-Bouldin index, and the Calinski-Harabasz index) and visual inspection. The second stage was dedicated to manually introducing some structures, such as the skin and the lymph nodes, which were not segmented by the machine learning algorithms owing to their thinness and small size, respectively. Finally, the last stage consisted of assigning dielectric properties, at a pre-defined frequency, to the biological tissues using the four-pole Cole-Cole model.
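The dielectric-property assignment in the last stage can be illustrated with the standard four-pole Cole-Cole formulation. The sketch below uses illustrative, not tissue-specific, parameter values (the thesis draws its values from the literature), so only the structure of the computation should be taken at face value.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def cole_cole(freq_hz, eps_inf, deltas, taus, alphas, sigma_i):
    """Complex relative permittivity from a four-pole Cole-Cole model."""
    omega = 2 * np.pi * freq_hz
    eps = eps_inf + 0j
    for d, t, a in zip(deltas, taus, alphas):
        eps += d / (1 + (1j * omega * t) ** (1 - a))  # dispersion poles
    eps += sigma_i / (1j * omega * EPS0)              # static ionic conductivity
    return eps

# Illustrative parameters (NOT a real tissue), evaluated at 3 GHz:
f = 3e9
eps = cole_cole(f, eps_inf=4.0,
                deltas=[50.0, 7e3, 1.2e6, 2.5e7],
                taus=[7e-12, 350e-9, 320e-6, 2.3e-3],
                alphas=[0.1, 0.1, 0.2, 0.0],
                sigma_i=0.2)
rel_permittivity = eps.real                        # stored in the phantom
conductivity = -eps.imag * 2 * np.pi * f * EPS0    # effective conductivity (S/m)
```

The real and (sign-flipped, scaled) imaginary parts give exactly the two quantities the generator assigns per tissue: relative permittivity and conductivity at the chosen frequency.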
As with generator I, an interface was created that lets the user decide which features to include in the phantom: the tissues to include (adipose tissue, muscle tissue, skin, and/or lymph nodes); for the lymph nodes, their number, dimensions, location by level, and clinical state (healthy or metastasized); and, finally, the frequency at which the dielectric properties (relative permittivity and conductivity) of each biological tissue are computed. This project resulted in a generator of realistic models of the head and neck region focused on the cervical lymph nodes, which allows the insertion of biological tissues, such as muscle and adipose tissue, skin, and lymph nodes, and assigns them dielectric properties at a given frequency in the microwave range. The computational models produced by generator II, which can later be 3D printed, may have a great impact on the development of medical microwave imaging devices aimed at diagnosing cervical lymph nodes and, consequently, contribute to a non-invasive staging procedure for head and neck cancer.

Head and neck cancer is a broad term referring to any epithelial malignancies arising in the paranasal sinuses, nasal and oral cavities, salivary glands, pharynx, and larynx. In 2018, approximately 80% of the newly diagnosed head and neck cancer cases resulted in tumour cells spreading to neighbouring lymph and blood vessels. In order to determine cancer staging and decide which follow-up exams and therapy to follow, physicians excise and assess the Lymph Nodes (LNs) closest to the primary site of the head and neck tumour – the sentinel nodes – which are the ones with highest probability of being targeted by cancer cells. The standard procedure to diagnose the Cervical Lymph Nodes (CLNs), i.e.
lymph nodes within the head and neck region, and determine the cancer staging frequently involves their surgical removal and subsequent histopathology. Besides being invasive, the removal of lymph nodes has a negative impact on patients' quality of life, can threaten their health, and is costly to healthcare systems owing to patients' need for follow-up treatment and care. Anatomically realistic phantoms are required to develop novel technologies tailored to image head and neck regions. Medical MicroWave Imaging (MWI) is a promising non-invasive approach which uses non-ionizing radiation to screen shallow body regions; cervical lymph nodes are therefore excellent candidates for this imaging modality. In this project, a three-dimensional (3D) numerical phantom generator (generator I) and a Magnetic Resonance Imaging (MRI)-derived anthropomorphic phantom generator (generator II) of the head and neck region were developed to create phantoms with different levels of complexity and realism, which can be later 3D printed to test medical MWI devices. The process of designing the numerical phantom generator included the modelling of the head and neck regions according to their anatomy and the distribution of their main tissues, and the creation of an interface which allowed users to personalise the model (e.g. include or remove certain tissues, depending on the purpose of each generated model). To build the anthropomorphic phantom generator, the modelling process included the creation of a pipeline of data processing steps applied to MRIs of the head and neck, followed by the development of algorithms to introduce additional tissues into the models, such as skin and lymph nodes, and, finally, the assignment of dielectric properties to the biological tissues. Similarly, this generator allowed users to decide which features they wished to include in the phantoms.
This project resulted in the creation of a generator of 3D anatomically realistic head and neck phantoms which allows the inclusion of biological tissues such as skin, muscle tissue, adipose tissue, and LNs, and assigns state-of-the-art dielectric properties to the tissues. These phantoms may have a great impact on the development of MWI devices aimed at screening and diagnosing CLNs and, consequently, contribute to non-invasive staging of head and neck cancer.
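As a minimal illustration of the unsupervised segmentation step and the clustering-quality metrics described above, the sketch below clusters a toy set of 1-D "voxel intensities" with a plain k-means implementation and scores the result with a naive silhouette coefficient. All names and data here are hypothetical; production code would typically use library implementations (e.g. scikit-learn) on real image data.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(x, k, iters=50):
    """Plain k-means on a 1-D array of intensities."""
    centers = rng.choice(x, k, replace=False)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def silhouette(x, labels):
    """Mean silhouette coefficient, naive O(n^2) version for small samples."""
    d = np.abs(x[:, None] - x[None, :])
    n, s = len(x), []
    for i in range(n):
        own = labels == labels[i]
        a = d[i, own & (np.arange(n) != i)].mean()          # cohesion
        b = min(d[i, labels == j].mean()                    # separation
                for j in np.unique(labels) if j != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Toy "voxel intensities": three well-separated tissue classes
x = np.concatenate([rng.normal(m, 0.05, 80) for m in (0.2, 0.5, 0.9)])
labels3, _ = kmeans(x, 3)
labels2, _ = kmeans(x, 2)
score3, score2 = silhouette(x, labels3), silhouette(x, labels2)
```

The higher-scoring clustering would be the candidate segmentation; as in the thesis, such metrics are best combined with visual inspection rather than used alone.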

    Improving deep neural network training with batch size and learning rate optimization for head and neck tumor segmentation on 2D and 3D medical images

    Medical imaging is a key tool used in healthcare to diagnose and prognose patients by aiding the detection of a variety of diseases and conditions. In practice, medical image screening must be performed by clinical practitioners who rely primarily on their expertise and experience for disease diagnosis. The ability of convolutional neural networks (CNNs) to extract hierarchical features and determine classifications directly from raw image data makes CNNs a potentially useful adjunct to the medical image analysis process. A common challenge in successfully implementing CNNs is optimizing hyperparameters for training. In this study, we propose a method which utilizes scheduled hyperparameters and Bayesian optimization to classify cancerous and noncancerous tissues (i.e., segmentation) from head and neck computed tomography (CT) and positron emission tomography (PET) scans. The results of this method are compared using CT imaging with and without PET imaging for 2D and 3D image segmentation models
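The hyperparameter search described above can be sketched as a minimal Gaussian-process Bayesian optimization loop over a log-scaled learning rate and batch size. The quadratic "validation loss" surface, the parameter ranges, and all constants below are illustrative stand-ins, not the study's actual objective or method details.

```python
import numpy as np

rng = np.random.default_rng(0)
LO, HI = np.array([-5.0, 0.0]), np.array([-1.0, 3.0])  # log10(lr), log10(batch size)

def objective(log_lr, log_bs):
    # Hypothetical validation-loss surface, minimized at lr=1e-3, batch size=32
    return (log_lr + 3.0) ** 2 + 0.5 * (log_bs - np.log10(32)) ** 2

def rbf(a, b, ls=1.0):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-0.5 * d2 / ls**2)

X = rng.uniform(LO, HI, size=(5, 2))                     # initial random design
y = np.array([objective(*p) for p in X])

for _ in range(20):
    K_inv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
    cand = rng.uniform(LO, HI, size=(256, 2))            # random candidate pool
    Ks = rbf(cand, X)
    mu = Ks @ K_inv @ y                                  # GP posterior mean
    var = np.clip(1.0 - np.sum((Ks @ K_inv) * Ks, 1), 1e-12, None)
    x_next = cand[np.argmin(mu - 2.0 * np.sqrt(var))]    # lower-confidence bound
    X = np.vstack([X, x_next])
    y = np.append(y, objective(*x_next))

best_lr, best_bs = 10.0 ** X[np.argmin(y)]
```

In practice each `objective` call would be a (costly) training run, which is exactly why a surrogate-guided search is preferred over grid or random search.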

    Evaluation of machine learning methods for automatic tumor segmentation (Evaluering av maskinlæringsmetoder for automatisk tumorsegmentering)

    The definition of target volumes and organs at risk (OARs) is a critical part of radiotherapy planning. In routine practice, this is typically done manually by clinical experts who contour the structures in medical images prior to dosimetric planning. This is a time-consuming and labor-intensive task. Moreover, manual contouring is inherently a subjective task and substantial contour variability can occur, potentially impacting on radiotherapy treatment and image-derived biomarkers. Automatic segmentation (auto-segmentation) of target volumes and OARs has the potential to save time and resources while reducing contouring variability. Recently, auto-segmentation of OARs using machine learning methods has been integrated into the clinical workflow by several institutions and such tools have been made commercially available by major vendors. The use of machine learning methods for auto-segmentation of target volumes including the gross tumor volume (GTV) is less mature at present but is the focus of extensive ongoing research. The primary aim of this thesis was to investigate the use of machine learning methods for auto-segmentation of the GTV in medical images. Manual GTV contours constituted the ground truth in the analyses. Volumetric overlap and distance-based metrics were used to quantify auto-segmentation performance. Four different image datasets were evaluated. The first dataset, analyzed in papers I–II, consisted of positron emission tomography (PET) and contrast-enhanced computed tomography (ceCT) images of 197 patients with head and neck cancer (HNC). The ceCT images of this dataset were also included in paper IV. Two datasets were analyzed separately in paper III, namely (i) PET, ceCT, and low-dose CT (ldCT) images of 86 patients with anal cancer (AC), and (ii) PET, ceCT, ldCT, and T2 and diffusion-weighted (T2W and DW, respectively) MR images of a subset (n = 36) of the aforementioned AC patients. 
The last dataset consisted of ceCT images of 36 canine patients with HNC and was analyzed in paper IV. In paper I, three approaches to auto-segmentation of the GTV in patients with HNC were evaluated and compared, namely conventional PET thresholding, classical machine learning algorithms, and deep learning using a 2-dimensional (2D) U-Net convolutional neural network (CNN). For the latter two approaches the effect of imaging modality on auto-segmentation performance was also assessed. Deep learning based on multimodality PET/ceCT image input resulted in superior agreement with the manual ground truth contours, as quantified by geometric overlap and distance-based performance evaluation metrics calculated on a per patient basis. Moreover, only deep learning provided adequate performance for segmentation based solely on ceCT images. For segmentation based on PET-only, all three approaches provided adequate segmentation performance, though deep learning ranked first, followed by classical machine learning, and PET thresholding. In paper II, deep learning-based auto-segmentation of the GTV in patients with HNC using a 2D U-Net architecture was evaluated more thoroughly by introducing new structure-based performance evaluation metrics and including qualitative expert evaluation of the resulting auto-segmentation quality. As in paper I, multimodal PET/ceCT image input provided superior segmentation performance, compared to the single modality CNN models. The structure-based metrics showed quantitatively that the PET signal was vital for the sensitivity of the CNN models, as the superior PET/ceCT-based model identified 86 % of all malignant GTV structures whereas the ceCT-based model only identified 53 % of these structures. Furthermore, the majority of the qualitatively evaluated auto-segmentations (~ 90 %) generated by the best PET/ceCT-based CNN were given a quality score corresponding to substantial clinical value. 
Based on papers I and II, deep learning with multimodality PET/ceCT image input would be the recommended approach for auto-segmentation of the GTV in human patients with HNC. In paper III, deep learning-based auto-segmentation of the GTV in patients with AC was evaluated for the first time, using a 2D U-Net architecture. Furthermore, an extensive comparison of the impact of different single modality and multimodality combinations of PET, ceCT, ldCT, T2W, and/or DW image input on quantitative auto-segmentation performance was conducted. For both the 86-patient and 36-patient datasets, the models based on PET/ceCT provided the highest mean overlap with the manual ground truth contours. For this task, however, comparable auto-segmentation quality was obtained for solely ceCT-based CNN models. The CNN model based solely on T2W images also obtained acceptable auto-segmentation performance and was ranked as the second-best single modality model for the 36-patient dataset. These results indicate that deep learning could prove a versatile future tool for auto-segmentation of the GTV in patients with AC. Paper IV investigated for the first time the applicability of deep learning-based auto-segmentation of the GTV in canine patients with HNC, using a 3-dimensional (3D) U-Net architecture and ceCT image input. A transfer learning approach where CNN models were pre-trained on the human HNC data and subsequently fine-tuned on canine data was compared to training models from scratch on canine data. These two approaches resulted in similar auto-segmentation performances, which on average was comparable to the overlap metrics obtained for ceCT-based auto-segmentation in human HNC patients. Auto-segmentation in canine HNC patients appeared particularly promising for nasal cavity tumors, as the average overlap with manual contours was 25 % higher for this subgroup, compared to the average for all included tumor sites. 
In conclusion, deep learning with CNNs provided high-quality GTV autosegmentations for all datasets included in this thesis. In all cases, the best-performing deep learning models resulted in an average overlap with manual contours which was comparable to the reported interobserver agreements between human experts performing manual GTV contouring for the given cancer type and imaging modality. Based on these findings, further investigation of deep learning-based auto-segmentation of the GTV in the given diagnoses would be highly warranted.
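The volumetric-overlap and distance-based metrics used throughout the thesis can be computed as follows. This is a minimal sketch on toy 2-D masks; real evaluations typically use 3-D masks, physical voxel spacing, and optimized library implementations.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance (in voxels), naive version for small masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy example: an "auto-segmentation" shifted two voxels from the "manual" contour
gt = np.zeros((32, 32), bool); gt[8:20, 8:20] = True
pred = np.zeros((32, 32), bool); pred[10:22, 8:20] = True
```

Overlap (Dice) and boundary distance (Hausdorff) are complementary: a shifted contour can keep a high Dice score while its boundary error is what a clinician would actually notice.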

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information coupled to artificial intelligence (AI) approaches could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging which can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. 
We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. Firstly, we quantified interobserver variability for an unprecedented large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment sensitive and treatment resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance. Additionally, we developed an AI algorithm to predict OPC patient progression free survival using pre-therapy imaging from an international data science competition (ranking 1st place), and then translated these approaches to mpMRI data. We demonstrated AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. 
In summary, the completion of these aims facilitates the development of an image-guided fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871)
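As an example of the intensity-standardization step benchmarked in the first aim, the sketch below implements the common z-score variant within a foreground mask. The dissertation systematically compares several standardization approaches, so this is illustrative rather than the method actually selected; the volume and mask here are synthetic.

```python
import numpy as np

def zscore_normalize(img, mask=None):
    """Standardize intensities to zero mean and unit variance, optionally
    computing the statistics only over a foreground mask."""
    vals = img[mask] if mask is not None else img.ravel()
    return (img - vals.mean()) / vals.std()

# Hypothetical example: standardize a fake MR volume inside a "head" mask
volume = np.random.default_rng(0).normal(400.0, 80.0, (8, 64, 64))
head_mask = volume > 350.0
normalized = zscore_normalize(volume, head_mask)
```

Masked statistics matter because background air dominates an MR volume; including it would shift the mean and shrink the apparent tissue contrast after normalization.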

    Segmentation of medical image data and image-guided intraoperative navigation (Segmentierung medizinischer Bilddaten und bildgestützte intraoperative Navigation)

    The development of algorithms for the automatic or semi-automatic processing of medical image data has become increasingly important in recent years. This is due, on the one hand, to ever-improving medical imaging modalities, which can represent the human body virtually at ever finer detail, and, on the other hand, to improved computer hardware, which allows data volumes that sometimes reach the gigabyte range to be processed algorithmically in a reasonable time. The goal of this habilitation thesis is the development and evaluation of algorithms for medical image processing. The thesis consists of a series of publications organized into three overarching topics: (i) segmentation of medical image data with template-based algorithms, (ii) experimental evaluation of open-source segmentation methods under medical conditions of use, and (iii) navigation to support intraoperative therapies. In the area of template-based segmentation, various graph-based algorithms were developed in 2D and 3D that build a directed graph by means of a template. These include an algorithm for segmenting vertebrae in 2D and 3D, where a rectangular template in 2D and a cube-shaped template in 3D are used to build the graph and compute the segmentation result. In addition, a graph-based segmentation of prostate glands with a spherical template is presented for automatically determining the boundaries between the prostate and the surrounding organs. Building on the template-based algorithms, an interactive segmentation algorithm was designed and implemented that displays the segmentation result to the user in real time. The algorithm uses the various templates for segmentation but requires only a single seed point from the user.
In a further approach, the user can refine the segmentation interactively with additional seed points, making it possible to bring a semi-automatic segmentation to a satisfactory result even in difficult cases. In the area of evaluating open-source segmentation methods under medical conditions of use, various freely available segmentation algorithms were tested on patient data from clinical routine. This included evaluating the semi-automatic segmentation of brain tumors, for example pituitary adenomas and glioblastomas, with the freely available open-source platform 3D Slicer. This showed how a purely manual slice-by-slice measurement of tumor volume can be supported and accelerated in practice. Furthermore, the segmentation of language pathways in medical images of brain tumor patients was evaluated on different platforms. In the area of navigation to support intraoperative therapies, software modules were developed to accompany intraoperative interventions in the different phases of a treatment (therapy planning, execution, monitoring). This includes the first integration of the OpenIGTLink network protocol into the medical prototyping platform MeVisLab, which was evaluated with an NDI navigation system. Moreover, the design and implementation of a medical software prototype to support intraoperative gynecological brachytherapy was presented here for the first time. The software prototype also contained a module for advanced visualization in MR-guided interstitial gynecological brachytherapy, which, among other things, enabled the registration of a gynecological brachytherapy instrument into an intraoperative dataset of a patient.
The individual modules led to the presentation of a comprehensive image-guided system for gynecological brachytherapy in a multimodal operating room. This system covers the pre-, intra-, and postoperative phases of treatment in interstitial gynecological brachytherapy
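The template-based graph idea, sampling points around a seed according to a template and finding an optimal boundary under a smoothness constraint, can be caricatured in 2-D as follows. This simplified dynamic program over rays only illustrates the principle; the published algorithms use directed graphs and min-cut formulations, and every name and parameter here is hypothetical.

```python
import numpy as np

def ray_boundary_dp(image, seed, n_rays=36, max_r=15, smooth=2):
    """Cast rays from a seed point, score each radius by the intensity
    gradient along the ray, and pick one boundary radius per ray with a
    dynamic program that penalizes jumps between neighbouring rays."""
    h, w = image.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    cost = np.zeros((n_rays, max_r))
    for i, t in enumerate(angles):
        ray = [image[min(h - 1, max(0, int(round(seed[0] + r * np.sin(t))))),
                     min(w - 1, max(0, int(round(seed[1] + r * np.cos(t)))))]
               for r in range(max_r + 1)]
        cost[i] = -np.abs(np.diff(ray))          # strong edges -> low cost
    dp = cost.copy()                             # open chain, for brevity
    for i in range(1, n_rays):
        for r in range(max_r):
            lo, hi = max(0, r - smooth), min(max_r, r + smooth + 1)
            dp[i, r] += dp[i - 1, lo:hi].min()
    radii = np.empty(n_rays, dtype=int)
    radii[-1] = int(dp[-1].argmin())
    for i in range(n_rays - 2, -1, -1):
        lo = max(0, radii[i + 1] - smooth)
        hi = min(max_r, radii[i + 1] + smooth + 1)
        radii[i] = lo + int(dp[i, lo:hi].argmin())
    return radii
```

On a synthetic bright disk, the recovered radii hug the disk edge: the per-ray gradient supplies the data term while the window of allowed jumps supplies the smoothness term, the same two ingredients the template graphs encode as arc weights.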

    Methods for the integration of combined PET/MR into radiotherapy planning

    Despite recent advances in radiotherapy (RT) there are still tumor types for which a high fraction of recurrences is observed following treatment. Limiting factors in current treatment concepts seem to be inaccuracies in image-based tumor delineation and missing consideration of the biological heterogeneity of individual tumors. In this respect, the abundant anatomical and functional information provided by magnetic resonance imaging (MRI) and positron emission tomography (PET) may lead to major advances in RT treatment. Recently available combined PET/MR scanners allow for the acquisition of simultaneous, intrinsically registered PET/MR data, facilitating their combined analysis for the integration into RT. In this thesis, dedicated methods and algorithms for the analysis and integration of the multimodal PET/MR datasets into RT are developed. In the first part, a method for multimodal deformable registration is developed, to enable the spatial transformation of PET/MR data to the computed tomography used for treatment planning. The second part is concerned with the development of an automatic tumor segmentation algorithm, considering PET and MR information simultaneously. In the last part, a correlation analysis of various functional datasets is motivated and performed in order to support the definition of a biologically adapted dose prescription.
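Multimodal deformable registration typically optimizes an intensity-based similarity measure such as mutual information, which rewards statistical dependence between modalities without assuming identical intensities. The sketch below computes it from a joint histogram; it illustrates the measure itself, not the thesis's specific registration method.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) between the intensities of two images,
    estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration loop would evaluate this measure after each candidate deformation and keep the transform that maximizes it; perfectly aligned images score high, while scrambling one image drives the score toward zero.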

    CT Scanning

    Since its introduction in 1972, X-ray computed tomography (CT) has evolved into an essential diagnostic imaging tool for a continually increasing variety of clinical applications. The goal of this book was not simply to summarize currently available CT imaging techniques but also to provide clinical perspectives, advances in hybrid technologies, new applications outside medicine, and an outlook on future developments. Major experts in this growing field contributed to this book, which is geared to radiologists, orthopedic surgeons, engineers, and clinical and basic researchers. We believe that CT scanning is an effective and essential tool in treatment planning, in the basic understanding of physiology, and in tackling the ever-increasing challenge of diagnosis in our society