
    Deep learning for medical image processing

    Medical image segmentation is a fundamental aspect of medical image computing. It facilitates measurements of anatomical structures, such as organ volume and tissue thickness, which are critical inputs for many classification algorithms and can be instrumental for clinical diagnosis. Consequently, improving the efficiency and accuracy of segmentation algorithms could lead to considerable improvements in patient care and diagnostic precision. In recent years, deep learning has become the state-of-the-art approach in various domains of medical image computing, including medical image segmentation. The key advantages of deep learning methods are their speed and efficiency, which have the potential to transform clinical practice significantly: computations for which traditional algorithms might require hours can often be executed within seconds. This thesis focuses on two distinct segmentation strategies: voxel-based and surface-based. Voxel-based segmentation assigns a class label to each individual voxel of an image. Surface-based segmentation, on the other hand, reconstructs a 3D surface from the input images and then segments that surface into different regions. This thesis presents multiple methods for voxel-based image segmentation, focusing on the segmentation of brain structures, white matter hyperintensities, and abdominal organs. Our approaches confront challenges such as domain adaptation, learning with limited data, and optimizing network architectures to handle 3D images. Additionally, the thesis discusses ways to handle the failure cases of standard deep learning approaches, such as rare cases like patients who have undergone organ resection surgery. Finally, the thesis turns to cortical surface reconstruction and parcellation, where deep learning is used to extract cortical surfaces from MRI scans as triangular meshes and to parcellate these surfaces at the vertex level. The challenges posed by this approach include handling irregular and topologically complex structures. In summary, this thesis presents novel deep learning strategies for voxel-based and surface-based medical image segmentation; by addressing specific challenges in each approach, it aims to contribute to the ongoing advancement of medical image computing.
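    To make the voxel-based strategy concrete, the following is a minimal PyTorch sketch of a fully convolutional network that assigns a class label to every voxel of a 3D volume. The network, layer sizes, and names are illustrative assumptions, not the architecture used in the thesis.

```python
# Illustrative sketch only: a tiny 3D fully convolutional network that
# predicts one class label per voxel (not the thesis's actual model).
import torch
import torch.nn as nn

class TinyVoxelSegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, num_classes, kernel_size=1),  # per-voxel class scores
        )

    def forward(self, x):
        return self.body(x)  # shape: (batch, num_classes, D, H, W)

# A 64^3 single-channel MRI patch in, one label per voxel out.
net = TinyVoxelSegNet()
volume = torch.randn(1, 1, 64, 64, 64)
labels = net(volume).argmax(dim=1)  # shape: (1, 64, 64, 64)
```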

    Multimodal intra- and inter-subject nonrigid registration of small animal images


    Effects of rigid and non-rigid image registration on test-retest variability of quantitative [18F]FDG PET/CT studies

    ABSTRACT: BACKGROUND: [18F]fluoro-2-deoxy-D-glucose ([18F]FDG) positron emission tomography (PET) is a valuable tool for monitoring response to therapy in oncology. In longitudinal studies, however, patients are not scanned in exactly the same position. Rigid and non-rigid image registration can be applied in order to reuse baseline volumes of interest (VOI) on consecutive studies of the same patient. The purpose of this study was to investigate the impact of various image registration strategies on standardized uptake value (SUV) and metabolic volume test-retest variability (TRT). METHODS: Test-retest whole-body [18F]FDG PET/CT scans were collected retrospectively for 11 subjects with advanced gastrointestinal malignancies (colorectal carcinoma). Rigid and non-rigid image registration techniques with various degrees of locality were applied to PET, CT, and non-attenuation corrected PET (NAC) data. VOI were drawn independently on both test and retest scans. VOI drawn on test scans were projected onto retest scans, and the overlap between projected VOI and manually drawn retest VOI was quantified using the Dice similarity coefficient (DSC). In addition, absolute (unsigned) differences in TRT of SUVmax, SUVmean, metabolic volume, and total lesion glycolysis (TLG) were calculated between the test VOI on the one hand and the retest and projected VOI on the other. Reference values were obtained by delineating VOI on both scans separately. RESULTS: Non-rigid PET registration showed the best performance (median DSC: 0.82; other methods: 0.71-0.81). Compared with the reference, none of the registration types showed significant absolute differences in TRT of SUVmax, SUVmean, and TLG (p > 0.05). Only for absolute TRT of metabolic volume were significantly lower values (p < 0.05) observed for all registration strategies compared with delineating VOI separately, except for non-rigid PET registration (p = 0.1). Non-rigid PET registration provided good volume TRT (7.7%) that was smaller than the reference (16%). CONCLUSION: Non-rigid PET image registration in particular showed good performance, similar to delineating VOI on both scans separately, and with smaller TRT in metabolic volume estimates.
    van Velden F.H.P., van Beers P., Nuyts J., Velasquez L.M., Hayes W., Lammertsma A.A., Boellaard R., Loeckx D., "Effects of rigid and non-rigid image registration on test-retest variability of quantitative [18F]FDG PET/CT studies", EJNMMI Research, vol. 2, no. 10, 2012.
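    For reference, the Dice similarity coefficient used above to quantify VOI overlap can be computed from binary voxel masks as in this short sketch; the mask representation and function name are illustrative assumptions.

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for binary voxel masks (illustrative sketch).
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Identical masks give 1.0; disjoint masks give 0.0.
projected_voi = np.zeros((8, 8, 8), dtype=bool); projected_voi[2:5, 2:5, 2:5] = True
retest_voi = np.zeros((8, 8, 8), dtype=bool); retest_voi[3:6, 2:5, 2:5] = True
print(dice_coefficient(projected_voi, retest_voi))
```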

    3-D data handling and registration of multiple modality medical images

    The many different clinical imaging modalities used in diagnosis and therapy deliver two different types of information: morphological and functional. Clinical interpretation can be assisted and enhanced by combining such information (e.g. superimposition or fusion), and the handling of such data needs to be performed in 3-D. Various registration methods developed by other authors are reviewed and compared. Many of these are based on registering external reference markers, which is cumbersome and presents significant problems to both patients and operators. Internal markers have also been used, but these may be very difficult to identify. Alternatively, methods based on the external surface of an object have been developed which eliminate some of the problems associated with the other approaches. The methods extended, developed, and described here are therefore based primarily on the fitting of surfaces, as determined from the images obtained from the different modalities to be registered. Ancillary to the surface-fitting problem are those of surface detection and display: segmentation and surface reconstruction algorithms have been developed to identify the surface to be registered, and surface and volume rendering algorithms have been implemented to facilitate the display of clinical results. An iterative surface-fitting algorithm has been developed based on the minimization of a least-squares distance (LSD) function, using the Powell method and alternative minimization algorithms; these algorithms and the qualities of fit so obtained were compared. Modifications were developed to enhance the speed of convergence, to improve the accuracy, and to enhance the display of results during the fitting process. A common problem with all such methods was found to be the choice of the starting point (the initial transformation parameters) and the avoidance of local minima, which often require manual operator intervention. The algorithm was therefore modified to perform a global minimization, using a cumulative distance error in a sequentially terminated process to speed up the evaluation of each search location. An extension of the algorithm into multi-resolution (scale) space was also implemented: an initial global search is performed at coarse resolution on the 3-D surfaces of both modalities, with an appropriate threshold defined to reject likely mismatch transformations by testing only a limited subset of surface points; this process defines the set of points in transformation space to be examined at the next resolution level, again with appropriately chosen thresholds, and continues down to the finest resolution level. All these processes were evaluated using sets of well-defined image models. The assessment of this algorithm for 3-D surface registration of (3-D) MRI with MRI, MRI with PET, MRI with SPECT, and MRI with CT data is presented, and clinical examples are illustrated and assessed. In the current work, data from multi-modality imaging of two different types of phantom (e.g. the Hoffman brain phantom and the Jaszczak phantom), thirty routinely imaged patients and volunteer subjects, and ten patients with external markers attached to their heads were used to assess and verify the 3-D registration.
    The accuracy of the sequential multi-resolution method, assessed from the distance values of 4-10 selected reference points on each data set, was 1.44±0.42 mm for MR-MR, 1.82±0.65 mm for MR-CT, 2.38±0.88 mm for MR-PET, and 3.17±1.12 mm for MR-SPECT registration. The cost of this process was of the order of 200 seconds (on a MicroVAX II), although this is highly dependent on adjustable parameters of the process (e.g. the threshold and the size of the geometric transformation space) by which the target accuracy is set.
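    A hedged sketch of the core surface-fitting step, rigid alignment of two surface point sets by Powell minimization of a least-squares distance, might look as follows in Python with SciPy. The point-set representation, parameterisation, and names are assumptions; the multi-resolution global search described above would wrap such a fit in a coarse-to-fine loop over subsampled surfaces.

```python
# Illustrative sketch: fit a rigid transform (3 rotations + 3 translations)
# that minimises a least-squares distance (LSD) between a moving surface
# point set and a fixed target surface, using the Powell method.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def lsd_cost(params, moving_pts, target_tree):
    """Mean squared distance from the transformed points to the target surface."""
    rx, ry, rz, tx, ty, tz = params
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    moved = moving_pts @ R.T + np.array([tx, ty, tz])
    dists, _ = target_tree.query(moved)  # nearest-neighbour distances
    return np.mean(dists ** 2)

def fit_surfaces(moving_pts, target_pts, x0=None):
    """Return (rx, ry, rz, tx, ty, tz) at the (possibly local) LSD minimum."""
    tree = cKDTree(target_pts)
    x0 = np.zeros(6) if x0 is None else x0  # starting point matters: local minima
    res = minimize(lsd_cost, x0, args=(moving_pts, tree), method="Powell")
    return res.x
```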

    Medical image registration and soft tissue deformation for image guided surgery system

    In parallel with the developments in imaging modalities, image-guided surgery (IGS) can now provide the surgeon with high-quality three-dimensional images depicting human anatomy. Although IGS is now in wide use in neurosurgery, some limitations remain that must be overcome before it can be employed in more general minimally invasive procedures. In this thesis, we make several contributions to the fields of medical image registration and brain tissue deformation modeling. From a methodological point of view, medical image registration algorithms can be classified into feature-based and intensity-based methods. One of the challenges faced by feature-based registration is to determine which specific type of feature is desired for a given task and imaging type. For this reason, a point-set registration using both point and curve features is proposed, which combines the accuracy of point-based registration with the robustness of line- or curve-based registration. We have also tackled the problem of rigid registration of multimodal images using intensity-based similarity measures. Mutual information (MI) has emerged in recent years as a popular similarity metric and is widely recognized in the field of medical image registration. Unfortunately, it ignores the spatial information contained in the images, such as edges and corners, which can be useful for registration. We introduce a new similarity metric, the Adaptive Mutual Information (AMI) measure, which incorporates gradient spatial information: salient pixels in regions of high gradient contribute more to the estimation of the mutual information of the image pairs being registered. Experimental results showed that the proposed method improves registration accuracy and is more robust to noisy images that deviate greatly from the reference image. Continuing in this direction, we further extend the technique to use information from multiple features simultaneously; with multiple spatial features, the proposed algorithm is less sensitive to noise and some inherent variations, giving more accurate registration. Brain shift is a complex phenomenon, and brain deformation has many different causes. We have investigated the pattern of brain deformation with respect to location and magnitude, and considered the implications of this pattern for correcting brain deformation in IGS systems. A computational finite element analysis was carried out to analyze the deformation and stress tensor experienced by brain tissue during surgical operations. Finally, we have developed a prototype visualization and navigation platform for the interpretation of IGS. The system is based upon Qt (a cross-platform GUI toolkit) and integrates VTK (an object-oriented visualization library) as the rendering kernel. This visualization software platform lays a foundation for future research extending the system to incorporate brain tissue deformation.
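    As an illustration of the idea behind AMI, the sketch below computes mutual information from a joint intensity histogram and optionally weights each pixel's contribution by gradient magnitude, so salient edge and corner pixels count more. The specific weighting scheme is an assumption for illustration, not the authors' exact formulation.

```python
# Sketch: mutual information from a joint histogram, with optional
# gradient-magnitude weighting in the spirit of the AMI idea (assumed scheme).
import numpy as np

def weighted_mutual_information(img_a, img_b, bins=32, gradient_weighting=True):
    a, b = img_a.ravel(), img_b.ravel()
    if gradient_weighting:
        # Salient (high-gradient) pixels contribute more to the joint histogram.
        ga = np.linalg.norm(np.stack(np.gradient(img_a)), axis=0).ravel()
        gb = np.linalg.norm(np.stack(np.gradient(img_b)), axis=0).ravel()
        weights = 1.0 + ga + gb
    else:
        weights = None
    joint, _, _ = np.histogram2d(a, b, bins=bins, weights=weights)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    A registration loop would then search over transformation parameters for the transform that maximises this measure between the reference and the transformed floating image.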

    A Framework for Tumor Localization in Robot-Assisted Minimally Invasive Surgery

    Manual palpation of tissue is frequently used in open surgery, e.g., for localization of tumors and buried vessels and for tissue characterization. The overall objective of this work is to explore how tissue palpation can be performed in Robot-Assisted Minimally Invasive Surgery (RAMIS) using the laparoscopic instruments conventionally used in RAMIS. This thesis presents a framework in which a surgical tool is moved teleoperatively in a manner analogous to the repetitive pressing motion of a finger during manual palpation. We interpret the changes in parameters due to this motion, such as the applied force and the resulting indentation depth, to accurately determine the variation in tissue stiffness. This approach requires the sensorization of the laparoscopic tool for force sensing. In our work, we have used a da Vinci needle driver that was sensorized in our lab at CSTAR for force sensing using Fiber Bragg Gratings (FBG). A computer vision algorithm has been developed for 3D surgical tool-tip tracking using the da Vinci's stereo endoscope, which enables us to measure changes in surface indentation resulting from pressing the needle driver on the tissue. The proposed palpation framework is based on the hypothesis that the indentation depth is inversely proportional to the tissue stiffness when a constant pressing force is applied. This was validated in a telemanipulated setup using the da Vinci surgical system with a phantom in which artificial tumors were embedded to represent areas of different stiffness. The high-stiffness region (representing tumor) and the low-stiffness region (representing healthy tissue) showed average indentation depth changes of 5.19 mm and 10.09 mm, respectively, while a maximum force of 8 N was maintained during robot-assisted palpation. These indentation depth variations were then distinguished using the k-means clustering algorithm to classify groups of low and high stiffness, and the results were presented in a colour-coded map. The unique feature of this framework is its use of a conventional laparoscopic tool and minimal re-design of the existing da Vinci surgical setup. Additional work includes a vision-based algorithm for tracking the motion of a tissue surface, such as that of the lung, resulting from respiratory and cardiac motion. The extracted motion information was analyzed to characterize the lung tissue stiffness based on the lateral strain variations as the surface inflates and deflates.
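    The stiffness-classification step can be sketched as k-means clustering of per-location indentation depths measured at constant force, with the smaller-depth cluster interpreted as the stiffer (tumour-like) region. The depth values and names below are illustrative placeholders, not measured data.

```python
# Sketch: cluster indentation depths (at constant pressing force) into
# "stiff" vs. "soft" groups with k-means. Values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# One indentation-depth measurement (mm) per palpated grid location.
depths_mm = np.array([5.1, 5.4, 9.8, 10.2, 5.0, 10.3, 9.9, 5.3]).reshape(-1, 1)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(depths_mm)
stiff_cluster = int(np.argmin(kmeans.cluster_centers_))  # smaller depth = stiffer
is_tumour_like = kmeans.labels_ == stiff_cluster
print(is_tumour_like)  # boolean stiffness map, e.g. for a colour-coded display
```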

    Towards development of automatic path planning system in image-guided neurosurgery

    With the advent of advanced computer technology, many computer-aided systems have evolved to assist in medical work including treatment, diagnosis, and even surgery. In modern neurosurgery, Magnetic Resonance Image-guided stereotactic surgery exemplifies this trend. It is a minimally invasive operation that is much safer than traditional open-skull surgery, and it offers higher precision and more effective operating procedures than conventional craniotomy. However, such operations still face the significant challenge of planning the optimal neurosurgical path so as to reach the ideal position without damaging important internal structures. This research aims to address this major challenge. The work begins with an investigation of the problem of distortion in MR images. It then goes on to build a template of the Circle of Willis brain vessels, derived from a collection of Magnetic Resonance Angiography images, which is needed to maintain operating standards when, as in many cases, Magnetic Resonance Angiography images are not available for patients. Demographic data on brain tumours are also studied to obtain further understanding of diseased human brains through the development of an effective classifier. The developed system allows the internal brain structure to be 'seen' clearly before the surgery, giving surgeons a clear picture, and thereby makes a significant contribution to the eventual development of a fully automatic path planning system.

    Intelligent image-driven motion modelling for adaptive radiotherapy

    Internal anatomical motion (e.g. respiration-induced motion) confounds the precise delivery of radiation to target volumes during external beam radiotherapy. Precision, however, is critical to ensure that prescribed radiation doses are delivered to the target (tumour) while surrounding healthy tissues are spared. If the motion itself can be accurately estimated, the treatment plan and/or delivery can be adapted to compensate. Current methods for motion estimation rely either on invasive implanted fiducial markers, on imperfect surrogate models based, for example, on external optical measurements or breathing traces, or on expensive and rare systems like in-treatment MRI. The limitations of these methods (invasiveness, imperfect modelling, or high cost) underscore the need for more efficient and accessible approaches to accurately estimate motion during radiation treatment. This research, in contrast, aims to achieve accurate motion prediction using only relatively low-quality but almost universally available planar X-ray imaging. This is challenging, since such images have poor soft-tissue contrast and provide only 2D projections through the anatomy. Our hypothesis, however, is that with strong priors in the form of learnt models for anatomical motion and image appearance, these images can provide sufficient information for accurate 3D motion reconstruction. We initially proposed an end-to-end graph neural network (GNN) architecture aimed at learning mesh regression using a patient-specific template organ geometry and deep features extracted from kV images at arbitrary projection angles; however, this approach proved time-consuming to train. As an alternative, a second framework was proposed, based on a self-attention convolutional neural network (CNN) architecture. This model learns mappings between deep, semantic, angle-dependent X-ray image features and the corresponding encoded deformation latent representations of deformed point clouds of the patient's organ geometry. Both frameworks underwent quantitative testing on synthetic respiratory motion scenarios and qualitative assessment on in-treatment images obtained over a full scan series for liver cancer patients. For the first framework, the overall mean prediction errors on the synthetic motion test datasets were 0.16±0.13 mm, 0.18±0.19 mm, 0.22±0.34 mm, and 0.12±0.11 mm, with mean peak prediction errors of 1.39 mm, 1.99 mm, 3.29 mm, and 1.16 mm. For the second framework, the overall mean prediction errors on the synthetic motion test datasets were 0.065±0.04 mm, 0.088±0.06 mm, 0.084±0.04 mm, and 0.059±0.04 mm, with mean peak prediction errors of 0.29 mm, 0.39 mm, 0.30 mm, and 0.25 mm.
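    A hedged PyTorch sketch of the overall shape of the second framework is given below: an encoder maps a planar kV image (plus its projection angle) to a latent code, which is decoded into per-vertex displacements of the patient's organ geometry. All layer sizes, names, and the angle encoding are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: image features + projection angle -> latent
# deformation code -> per-point displacements of a template point cloud.
import torch
import torch.nn as nn

class XrayToDeformation(nn.Module):
    def __init__(self, latent_dim=64, num_points=2048):
        super().__init__()
        self.encoder = nn.Sequential(  # deep X-ray image features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_latent = nn.Linear(32 + 1, latent_dim)   # +1 for projection angle
        self.decoder = nn.Linear(latent_dim, num_points * 3)  # per-point offsets
        self.num_points = num_points

    def forward(self, xray, angle):
        feats = self.encoder(xray)                        # (B, 32)
        z = self.to_latent(torch.cat([feats, angle], dim=1))
        return self.decoder(z).view(-1, self.num_points, 3)

model = XrayToDeformation()
xray = torch.randn(2, 1, 128, 128)   # batch of planar kV images
angle = torch.rand(2, 1)             # normalised projection angles
displacements = model(xray, angle)   # (2, 2048, 3) vertex offsets
```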