55 research outputs found

    Image-based registration methods for quantification and compensation of prostate motion during trans-rectal ultrasound (TRUS)-guided biopsy

    Prostate biopsy is the clinical standard for cancer diagnosis and is typically performed under two-dimensional (2D) transrectal ultrasound (TRUS) for needle guidance. Unfortunately, most early-stage prostate cancers are not visible on ultrasound, and the procedure suffers from high false-negative rates due to the lack of visible targets. Fusion of pre-biopsy MRI to 3D TRUS for targeted biopsy could improve cancer detection rates and the volume of tumour sampled. In MRI-TRUS fusion biopsy systems, patient or prostate motion during the procedure causes misalignments in the MR targets mapped to the live 2D TRUS images, limiting the targeting accuracy of the biopsy system. In order to sample the smallest clinically significant tumours of 0.5 cm³ with 95% confidence, the root mean square (RMS) error of the biopsy system must be kept to no more than a few millimetres. The target misalignments due to intermittent prostate motion during the procedure can be compensated by registering the live 2D TRUS images acquired during the biopsy procedure to the pre-acquired baseline 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. We developed an intensity-based 2D-3D rigid registration algorithm and validated it by calculating the target registration error (TRE) using manually identified fiducials within the prostate. We discuss two different approaches that can be used to improve the robustness of this registration to meet the clinical requirements. Firstly, we evaluated the impact of intra-procedural 3D TRUS imaging on motion compensation accuracy, since the limited anatomical context available in live 2D TRUS images could limit the robustness of the 2D-3D registration. The results indicated that the TRE improved when intra-procedural 3D TRUS images were used in registration, with larger improvements in the base and apex regions than in the mid-gland region. Secondly, we developed and evaluated a registration algorithm whose optimization is based on learned prostate motion characteristics. Compared to our initial approach, the updated optimization improved the robustness of the 2D-3D registration by reducing the number of registrations with a TRE > 5 mm from 9.2% to 1.2%, with an overall RMS TRE of 2.3 mm. The methods developed in this work were intended to improve the needle targeting accuracy of 3D TRUS-guided biopsy systems. The successful integration of these techniques into current 3D TRUS-guided systems could improve the overall cancer detection rate during biopsy and help to achieve earlier diagnosis and fewer repeat biopsy procedures in prostate cancer diagnosis.
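
    To make the validation metric above concrete, the following sketch (not the authors' implementation) computes per-fiducial target registration errors and the overall RMS TRE for a hypothetical 2D-3D rigid registration result; the fiducial coordinates and the recovered rigid transform are illustrative placeholders.

        import numpy as np

        def apply_rigid(points, R, t):
            """Apply a rigid transform (3x3 rotation R, 3-vector t) to an (N, 3) array of points."""
            return points @ R.T + t

        def target_registration_error(fixed_fiducials, moving_fiducials, R, t):
            """Per-fiducial TRE: distance between each fixed fiducial and its
            registered counterpart from the moving image."""
            registered = apply_rigid(moving_fiducials, R, t)
            return np.linalg.norm(registered - fixed_fiducials, axis=1)

        def rms_tre(tre_values):
            """Root-mean-square TRE over all fiducials/registrations."""
            tre_values = np.asarray(tre_values, dtype=float)
            return float(np.sqrt(np.mean(tre_values ** 2)))

        # Hypothetical example: three intra-prostatic fiducials (mm) and an
        # estimated 2D-3D registration result expressed as a 3D rigid transform.
        fixed = np.array([[10.0, 22.5, 31.0], [14.2, 18.9, 29.5], [9.8, 25.1, 33.2]])
        moving = np.array([[11.1, 21.9, 31.4], [15.0, 18.2, 29.9], [10.5, 24.6, 33.0]])
        R, t = np.eye(3), np.array([-0.8, 0.6, -0.3])  # assumed output of the optimizer

        tre = target_registration_error(fixed, moving, R, t)
        print("per-fiducial TRE (mm):", np.round(tre, 2))
        print("RMS TRE (mm):", round(rms_tre(tre), 2))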

    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets, closely matching the distribution of the training data. This makes it difficult to determine whether demonstrated performance accurately represents the generalization and robustness required for clinical use. This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction, and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively-acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy relative to the state of the art, with substantially lower computation time, and to require only a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.

    Real-time multimodal image registration with partial intraoperative point-set data

    We present Free Point Transformer (FPT), a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
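
    The sketch below illustrates, in PyTorch, the kind of two-module, vicinity-free design described above: a shared per-point MLP with max-pooling produces an order-invariant global feature for each unordered point-set, and a second MLP displaces every source point conditioned on both global features. The class names, layer sizes, and the use of a symmetric Chamfer distance as the unsupervised loss are assumptions for illustration, not the published FPT architecture or its exact loss.

        import torch
        import torch.nn as nn

        class GlobalFeatureExtractor(nn.Module):
            """Shared per-point MLP + max-pool: order-invariant global descriptor
            for an unordered point-set of shape (B, N, 3) -> (B, F)."""
            def __init__(self, feat_dim=256):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                         nn.Linear(64, 128), nn.ReLU(),
                                         nn.Linear(128, feat_dim))
            def forward(self, pts):
                return self.mlp(pts).max(dim=1).values

        class PointTransformer(nn.Module):
            """Predicts a displacement for each source point, conditioned on the
            global features of both point-sets (no vicinity-based constraints)."""
            def __init__(self, feat_dim=256):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(3 + 2 * feat_dim, 256), nn.ReLU(),
                                         nn.Linear(256, 128), nn.ReLU(),
                                         nn.Linear(128, 3))
            def forward(self, src_pts, src_feat, tgt_feat):
                n = src_pts.shape[1]
                cond = torch.cat([src_feat, tgt_feat], dim=-1).unsqueeze(1).expand(-1, n, -1)
                return src_pts + self.mlp(torch.cat([src_pts, cond], dim=-1))

        def chamfer_distance(a, b):
            """Symmetric Chamfer distance between point-sets a (B, N, 3) and b (B, M, 3)."""
            d = torch.cdist(a, b)  # (B, N, M) pairwise distances
            return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

        # One (hypothetical) unsupervised training step on random stand-in data.
        extractor, transformer = GlobalFeatureExtractor(), PointTransformer()
        opt = torch.optim.Adam(list(extractor.parameters()) + list(transformer.parameters()), lr=1e-4)
        mr_pts, trus_pts = torch.rand(2, 1024, 3), torch.rand(2, 300, 3)  # variable-size sets are fine
        warped = transformer(mr_pts, extractor(mr_pts), extractor(trus_pts))
        loss = chamfer_distance(warped, trus_pts)
        loss.backward()
        opt.step()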

    A Novel System and Image Processing for Improving 3D Ultrasound-guided Interventional Cancer Procedures

    Image-guided medical interventions are diagnostic and therapeutic procedures that focus on minimizing surgical incisions for improving disease management and reducing patient burden relative to conventional techniques. Interventional approaches, such as biopsy, brachytherapy, and ablation procedures, have been used in the management of cancer for many anatomical regions, including the prostate and liver. Needles and needle-like tools are often used for achieving planned clinical outcomes, but the increased dependency on accurate targeting, guidance, and verification can limit the widespread adoption and clinical scope of these procedures. Image-guided interventions that incorporate 3D information intraoperatively have been shown to improve the accuracy and feasibility of these procedures, but clinical needs still exist for improving workflow and reducing physician variability with widely applicable, cost-conscious approaches. The objective of this thesis was to incorporate 3D ultrasound (US) imaging and image processing methods during image-guided cancer interventions in the prostate and liver to provide accessible, fast, and accurate approaches for clinical improvements. An automatic 2D-3D transrectal ultrasound (TRUS) registration algorithm was optimized and implemented in a 3D TRUS-guided system to provide continuous prostate motion corrections with sub-millimeter and sub-degree error in 36 ± 4 ms. An automatic and generalizable 3D TRUS prostate segmentation method was developed on a diverse clinical dataset of patient images from biopsy and brachytherapy procedures, resulting in errors at gold-standard accuracy with a computation time of 0.62 s. After validation of mechanical and image reconstruction accuracy, a novel 3D US system for focal liver tumor therapy was developed to guide therapy applicators with 4.27 ± 2.47 mm error. The verification of applicators post-insertion motivated the development of a 3D US applicator segmentation approach, which was demonstrated to provide clinically feasible assessments in 0.246 ± 0.007 s. Lastly, a general needle and applicator tool segmentation algorithm was developed to provide accurate intraoperative and real-time insertion feedback for multiple anatomical locations during a variety of clinical interventional procedures. Clinical translation of these developed approaches has the potential to improve overall patient quality of life and outcomes by improving detection rates and reducing local cancer recurrence in patients with prostate and liver cancer.
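
    As a rough illustration of how continuous motion correction could be organized around such a fast 2D-3D registration routine, the sketch below wraps a placeholder registration call in a per-frame loop with a timing check; the function names, the frame budget, and the warm-starting strategy are assumptions, not the system's actual interface.

        import time

        FRAME_BUDGET_S = 0.040  # assumed real-time budget per live 2D TRUS frame, not the system's spec

        def register_2d_to_3d(frame_2d, volume_3d, init_pose):
            """Placeholder for an intensity-based 2D-3D rigid registration step that
            would refine the six rigid parameters; here it simply returns the
            warm-start pose so the sketch runs end to end."""
            return init_pose

        def motion_compensation_loop(frame_stream, baseline_volume, initial_pose):
            """Continuously re-estimate the prostate pose so that planned targets
            can be remapped onto each incoming live 2D TRUS frame."""
            pose = initial_pose
            for frame in frame_stream:
                start = time.perf_counter()
                pose = register_2d_to_3d(frame, baseline_volume, pose)  # warm-start from the last pose
                elapsed = time.perf_counter() - start
                if elapsed > FRAME_BUDGET_S:
                    print(f"warning: registration took {elapsed * 1e3:.1f} ms, over budget")
                yield frame, pose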

    Registration of magnetic resonance and ultrasound images for guiding prostate cancer interventions

    Prostate cancer is a major international health problem with a large and rising incidence in many parts of the world. Transrectal ultrasound (TRUS) imaging is used routinely to guide surgical procedures, such as needle biopsy and a number of minimally-invasive therapies, but its limited ability to visualise prostate cancer is widely recognised. Magnetic resonance (MR) imaging techniques, on the other hand, have recently been developed that can provide clinically useful diagnostic information. Registration (or alignment) of MR and TRUS images during TRUS-guided surgical interventions potentially provides a cost-effective approach to augment TRUS images with clinically useful, MR-derived information (for example, tumour location, shape and size). This thesis describes a deformable image registration framework that enables automatic and/or semi-automatic alignment of MR and 3D TRUS images of the prostate gland. The method combines two technical developments in the field: First, a method for constructing patient-specific statistical shape models of prostate motion/deformation, based on learning from finite element simulations of gland motion using geometric data from a preoperative MR image, is proposed. Second, a novel “model-to-image” registration framework is developed to register this statistical shape model automatically to an intraoperative TRUS image. This registration approach is implemented using a novel model-to-image vector alignment (MIVA) algorithm, which maximises the likelihood of a particular instance of a statistical shape model given a voxel-intensity-based feature vector that represents an estimate of the surface normal vectors at the boundary of the organ in question. Using real patient data, the MR-TRUS registration accuracy of the new algorithm is validated using intra-prostatic anatomical landmarks. A rigorous and extensive validation analysis is also provided for assessing the image registration experiments. The final target registration error, computed after performing 100 MR–TRUS registrations for each patient, has a median of 2.40 mm, meaning that over 93% of registrations may successfully hit a target representing a clinically significant lesion. The implemented registration algorithms took less than 30 seconds and 2 minutes for manually defined point- and normal-vector features, respectively. The thesis concludes with a summary of potential applications and future research directions.
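
    The following is a loose, simplified paraphrase of fitting a statistical shape model instance to image-derived surface-normal estimates, in the spirit of the model-to-image alignment described above; it is not the MIVA algorithm itself, and the linear shape model, the Gaussian prior on the coefficients, the `normal_field` interpolator, and the Powell optimizer are all assumptions made for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def shape_instance(mean_shape, modes, b):
            """Instance of a statistical shape model: mean + linear combination of
            deformation modes. mean_shape: (N, 3); modes: (K, N, 3); b: (K,)."""
            return mean_shape + np.tensordot(b, modes, axes=1)

        def alignment_score(b, mean_shape, modes, model_normals, normal_field):
            """Negative agreement between the model's boundary normals and the
            image-derived normal-vector estimates sampled at the deformed boundary.
            `normal_field(points)` is a hypothetical interpolator returning (N, 3)."""
            boundary = shape_instance(mean_shape, modes, b)
            image_normals = normal_field(boundary)
            agreement = np.sum(model_normals * image_normals)  # summed dot products
            prior = 0.5 * np.sum(b ** 2)                       # Gaussian prior on shape coefficients
            return -(agreement - prior)

        def fit_shape_model(mean_shape, modes, model_normals, normal_field):
            """Optimise the shape coefficients so the model instance best explains
            the normal-vector feature field extracted from the TRUS image."""
            k = modes.shape[0]
            res = minimize(alignment_score, x0=np.zeros(k),
                           args=(mean_shape, modes, model_normals, normal_field),
                           method="Powell")
            return res.x, shape_instance(mean_shape, modes, res.x)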

    Enabling technologies for MRI guided interventional procedures

    This dissertation addresses topics related to developing interventional assistant devices for Magnetic Resonance Imaging (MRI). MRI can provide high-quality 3D visualization of target anatomy and surrounding tissue, but the benefits cannot be readily harnessed for interventional procedures due to difficulties associated with the use of high-field (1.5T or greater) MRI. Discussed are potential solutions to the inability to use conventional mechatronics and to the confined physical space in the scanner bore. This work describes the development of two apparently dissimilar systems that represent different approaches to the same surgical problem: coupling information and action to perform percutaneous (through the skin) needle placement with MR imaging. The first system takes MR images and projects them, along with a surgical plan, directly on the interventional site, thus providing in-situ imaging. With anatomical images and a corresponding plan visible in the appropriate pose, the clinician can use this information to perform the surgical action. My primary research effort has focused on a robotic assistant system that overcomes the difficulties inherent to MR-guided procedures, and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot is a servo-pneumatically operated automatic needle guide, and effectively guides needles under real-time MR imaging. This thesis describes the development of the robotic system including requirements, workspace analysis, mechanism design and optimization, and evaluation of MR compatibility. Further, a generally applicable MR-compatible robot controller is developed, the pneumatic control system is implemented and evaluated, and the system is deployed in pre-clinical trials. The dissertation concludes with future work and lessons learned from this endeavor.

    Graph-based deformable registration: slice-to-volume matching and contextual methods (Recalage déformable à base de graphes : mise en correspondance coupe-vers-volume et méthodes contextuelles)

    Image registration methods, which aim at aligning two or more images into one coordinate system, are among the oldest and most widely used algorithms in computer vision. Registration methods serve to establish correspondence relationships among images (captured at different times, from different sensors or from different viewpoints) which are not obvious for the human eye. A particular type of registration algorithm, known as graph-based deformable registration methods, has become popular during the last decade given its robustness, scalability, efficiency and theoretical simplicity. The range of problems to which it can be adapted is particularly broad. In this thesis, we propose several extensions to the graph-based deformable registration theory, by exploring new application scenarios and developing novel methodological contributions. Our first contribution is an extension of the graph-based deformable registration framework, dealing with the challenging slice-to-volume registration problem. Slice-to-volume registration aims at registering a 2D image within a 3D volume, i.e., we seek a mapping function which optimally maps a tomographic slice to the 3D coordinate space of a given volume. We introduce a scalable, modular and flexible formulation accommodating low-rank and high-order terms, which simultaneously selects the plane and estimates the in-plane deformation through a single-shot optimization approach. The proposed framework is instantiated into different variants based on different graph topologies, label space definitions and energy constructions. Simulated and real data in the context of ultrasound and magnetic resonance registration (where both framework instantiations as well as different optimization strategies are considered) demonstrate the potential of our method. The other two contributions included in this thesis are related to how semantic information can be encompassed within the registration process (independently of the dimensionality of the images). Currently, most of the methods rely on a single metric function explaining the similarity between the source and target images. We argue that incorporating semantic information to guide the registration process will further improve the accuracy of the results, particularly in the presence of semantic labels making the registration a domain-specific problem. We consider a first scenario where we are given a classifier inferring probability maps for different anatomical structures in the input images. Our method seeks to simultaneously register and segment a set of input images, incorporating this information within the energy formulation. The main idea is to use these estimated maps of semantic labels (provided by an arbitrary classifier) as a surrogate for unlabeled data, and to combine them with population deformable registration to improve both alignment and segmentation. Our last contribution also aims at incorporating semantic information into the registration process, but in a different scenario. In this case, instead of supposing that we have pre-trained arbitrary classifiers at our disposal, we are given a set of accurate ground truth annotations for a variety of anatomical structures.
We present a methodological contribution that aims at learning context-specific matching criteria as an aggregation of standard similarity measures from the aforementioned annotated data, using an adapted version of the latent structured support vector machine (LSSVM) framework.
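
    The general object optimized in such graph-based formulations is a discrete labeling energy of the form E(l) = sum_p theta_p(l_p) + sum_(p,q) theta_pq(l_p, l_q) over the nodes and edges of a control-point graph. The toy sketch below evaluates such an energy with a simple Potts-style pairwise term; the thesis's actual label spaces and low-rank/high-order terms are considerably richer, so this is only an orientation aid, not the proposed formulation.

        import numpy as np

        def mrf_energy(labels, unary, edges, pairwise_weight=1.0):
            """Toy discrete energy for graph-based deformable registration:
            E(l) = sum_p theta_p(l_p) + sum_(p,q) theta_pq(l_p, l_q).

            labels : (P,) label index per control-point node; in slice-to-volume
                     registration each label would encode a candidate plane and/or
                     in-plane displacement (simplified to an opaque index here).
            unary  : (P, L) data costs, e.g. slice-to-resampled-volume dissimilarity.
            edges  : list of (p, q) index pairs of the grid graph.
            """
            data_term = sum(unary[p, labels[p]] for p in range(len(labels)))
            # Potts-style smoothness: penalise neighbouring nodes that disagree.
            smooth_term = pairwise_weight * sum(labels[p] != labels[q] for p, q in edges)
            return data_term + smooth_term

        # Tiny hypothetical instance: 4 nodes in a 2x2 grid, 3 candidate labels each.
        unary = np.array([[0.2, 0.9, 0.8],
                          [0.3, 0.7, 0.9],
                          [0.6, 0.1, 0.8],
                          [0.5, 0.2, 0.9]])
        edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
        print(mrf_energy([0, 0, 1, 1], unary, edges, pairwise_weight=0.5))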

    Teleoperation of MRI-Compatible Robots with Hybrid Actuation and Haptic Feedback

    Image-guided surgery (IGS), which has been developing rapidly, benefits significantly from the superior accuracy of robots and from magnetic resonance imaging (MRI), an excellent soft-tissue imaging modality. Teleoperation is especially desirable in MRI because of the highly constrained space inside the closed-bore scanner and the lack of haptic feedback in fully autonomous robotic systems; it also keeps the human in the loop, which significantly enhances safety. This dissertation describes the development of teleoperation approaches and their implementation on an example system for MRI, with details of the key components. The dissertation first describes the general teleoperation architecture with modular software and hardware components, and introduces the MRI-compatible robot controller, the driving technology, and the robot navigation and control software. As a crucial step in determining the robot location inside the MRI scanner, two methods of registration and tracking are discussed. The first method utilizes the existing Z-shaped fiducial frame design together with a newly developed multi-image registration method that achieves higher accuracy with a smaller fiducial frame. The second method is a new fiducial design with a cylindrically shaped frame that is especially suitable for registration and tracking of needles; alongside it, a single-image-based algorithm is developed that not only reaches higher accuracy but also runs faster. In addition, a performance-enhanced fiducial frame is studied by integrating self-resonant coils. A surgical master-slave teleoperation system for percutaneous interventional procedures under continuous MRI guidance is presented. The slave robot is a piezoelectrically actuated needle insertion robot with an integrated fiber-optic force sensor. The master robot is a pneumatically driven haptic device that not only controls the position of the slave robot but also renders the force associated with needle placement interventions to the surgeon. The mechanical design, kinematics, force sensing, and feedback technologies of both the master and slave robots are discussed. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system, MRI compatibility is evaluated extensively, and teleoperated needle steering is demonstrated under live MR imaging. Finally, a control system for a clinical-grade MRI-compatible parallel 4-DOF surgical manipulator for minimally invasive in-bore percutaneous prostate interventions through the patient’s perineum is discussed. The proposed manipulator takes advantage of four sliders actuated by piezoelectric motors and incremental rotary encoders, which are compatible with the MRI environment. Two generations of optical limit switches are designed to provide better safety features for real clinical use, and the performance of both generations is tested. The MRI-guided accuracy and MRI compatibility of the whole robotic system are also evaluated, and two clinical prostate biopsy cases have been conducted with this assistive robot.
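
    As a generic illustration of the master-slave coupling described above (not the dissertation's controller), the sketch below shows one cycle of a position-forward, force-feedback teleoperation loop; the gains, scalings, and single-axis example values are invented for illustration only.

        import numpy as np

        def bilateral_teleop_step(master_pos, slave_pos, needle_force,
                                  motion_scale=0.5, force_scale=1.0, kp=2.0):
            """One cycle of a generic position-forward / force-feedback teleoperation
            loop: the master's motion commands the slave needle driver, and the force
            measured at the needle is rendered back on the master haptic device.
            All gains and scalings here are illustrative, not the system's values."""
            slave_setpoint = motion_scale * master_pos          # scaled position command
            slave_command = kp * (slave_setpoint - slave_pos)   # simple proportional drive
            master_feedback_force = force_scale * needle_force  # reflect measured insertion force
            return slave_command, master_feedback_force

        # Hypothetical single cycle with 1-DOF insertion values (mm, N).
        cmd, fb = bilateral_teleop_step(master_pos=np.array([12.0]),
                                        slave_pos=np.array([5.5]),
                                        needle_force=np.array([1.8]))
        print("slave drive command:", cmd, "master feedback force:", fb)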

    Software and Hardware-based Tools for Improving Ultrasound Guided Prostate Brachytherapy

    Minimally invasive procedures for prostate cancer diagnosis and treatment, including biopsy and brachytherapy, rely on medical imaging such as two-dimensional (2D) and three-dimensional (3D) transrectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for critical tasks such as target definition and diagnosis, treatment guidance, and treatment planning. Use of these imaging modalities introduces challenges including time-consuming manual prostate segmentation, poor needle tip visualization, and variable MR-US cognitive fusion. The objective of this thesis was to develop, validate, and implement software- and hardware-based tools specifically designed for minimally invasive prostate cancer procedures to overcome these challenges. First, a deep learning-based automatic 3D TRUS prostate segmentation algorithm was developed and evaluated using a diverse dataset of clinical images acquired during prostate biopsy and brachytherapy procedures. The algorithm significantly outperformed state-of-the-art fully 3D CNNs trained using the same dataset, while a segmentation time of 0.62 s represented a significant reduction compared to manual segmentation. Next, the impact of dataset size, image quality, and image type on segmentation performance using this algorithm was examined. Using smaller training datasets, segmentation accuracy was shown to plateau with as few as 1000 training images, supporting the use of deep learning approaches even when data is scarce. The development of an image quality grading scale specific to 3D TRUS images will allow for easier comparison between algorithms trained using different datasets. Third, a power Doppler (PD) US-based needle tip localization method was developed and validated in both phantom and clinical cases, demonstrating reduced tip error and variation for obstructed needles compared to conventional US. Finally, a surface-based MRI-3D TRUS deformable image registration algorithm was developed and implemented clinically, demonstrating improved registration accuracy compared to manual rigid registration and reduced variation compared to the current clinical standard of physician cognitive fusion. These generalizable and easy-to-implement tools have the potential to improve workflow efficiency and accuracy for minimally invasive prostate procedures.
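
    A simplified sketch of what a surface-based MR-TRUS deformable registration can look like is given below: a thin-plate-spline warp is fitted to corresponding prostate surface points and then used to map MR-defined targets into TRUS space. The use of scipy's RBFInterpolator, the assumption of known point correspondences, and the synthetic stand-in data are illustrative choices, not necessarily the thesis's method.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def surface_based_deformable_registration(mr_surface, trus_surface):
            """Fit a thin-plate-spline deformation that maps MR prostate surface
            points onto corresponding TRUS surface points, returning a callable
            that warps arbitrary MR-space points (e.g., tumour targets) into TRUS space.

            mr_surface, trus_surface: (N, 3) arrays of corresponding surface points
            (establishing the correspondence itself is assumed to be done upstream)."""
            return RBFInterpolator(mr_surface, trus_surface, kernel="thin_plate_spline")

        # Hypothetical use: map an MR-defined biopsy target into TRUS coordinates.
        mr_pts = np.random.rand(200, 3) * 40.0
        trus_pts = mr_pts + np.random.randn(200, 3) * 1.5  # stand-in deformed surface
        warp = surface_based_deformable_registration(mr_pts, trus_pts)
        target_trus = warp(np.array([[20.0, 20.0, 20.0]]))
        print("warped target (mm):", np.round(target_trus, 2))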