389 research outputs found

    3D Rigid Registration of Intraoperative Ultrasound and Preoperative MR Brain Images Based on Hyperechogenic Structures

    The registration of intraoperative ultrasound (US) images with preoperative magnetic resonance (MR) images is a challenging problem due to the difference in information content between the two modalities. To overcome this difficulty, we introduce a new probabilistic function based on the matching of cerebral hyperechogenic structures. In brain imaging, these structures are the liquid interfaces, such as the cerebral falx and the sulci, and the lesions when the corresponding tissue is hyperechogenic. The registration is achieved by maximizing the joint probability that a voxel belongs to hyperechogenic structures in both modalities. Experiments were carried out on real datasets acquired during neurosurgical procedures. The proposed validation framework is based on (i) visual assessment, (ii) manual expert estimations, and (iii) a robustness study. Results show that the proposed method (i) is visually efficient, (ii) yields registration accuracy not statistically different from manual expert registration, and (iii) converges robustly. Finally, the computation time required by our method is compatible with intraoperative use.
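
    To make the objective concrete, a minimal sketch of such a joint-probability rigid registration is given below; the probability maps, the optimizer, and the transform parameterization are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a joint-probability rigid registration objective.
# p_us and p_mr are assumed precomputed probability maps (values in [0, 1]) of
# hyperechogenic structures in the US and MR volumes; placeholder data is used here.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def rigid_matrix(params):
    """Build a rotation matrix (ZYX Euler angles) and translation from 6 rigid parameters."""
    rx, ry, rz, tx, ty, tz = params
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])

def negative_joint_probability(params, p_us, p_mr):
    """Resample the MR probability map under the candidate rigid transform and score overlap."""
    R, t = rigid_matrix(params)
    p_mr_warped = affine_transform(p_mr, R, offset=t, order=1)
    # Joint probability (assuming independence) that each voxel is hyperechogenic
    # in both modalities, summed over the volume; negated for minimization.
    return -np.sum(p_us * p_mr_warped)

p_us = np.random.rand(32, 32, 32)   # placeholder US probability map
p_mr = np.random.rand(32, 32, 32)   # placeholder MR probability map
result = minimize(negative_joint_probability, x0=np.zeros(6),
                  args=(p_us, p_mr), method="Powell")
print("estimated rigid parameters:", result.x)
```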

    Image guidance in neurosurgical procedures, the "Visages" point of view.

    This paper gives an overview of the evolution of clinical neuroinformatics in the domain of neurosurgery. It shows how image-guided neurosurgery (IGNS) is evolving through the integration of new imaging modalities before, during and after the surgical procedure, and how this acts as a premise of the Operating Room of the future. These issues, as addressed by the VisAGeS INRIA/INSERM U746 research team (http://www.irisa.fr/visages), are presented and discussed to demonstrate the benefits of integrated work between physicians (radiologists, neurologists and neurosurgeons) and computer scientists toward a more effective use of images in IGNS.

    Advanced Motion Models for Rigid and Deformable Registration in Image-Guided Interventions

    Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of such an approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or brain soft-tissue deformation in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions, improving the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation encompasses: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones and an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
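
    As a rough illustration of the 3D-2D guidance component, the sketch below recovers a pose by matching a simulated projection of a CT volume to a 2D view; the crude sum-projection and normalized cross-correlation are hypothetical stand-ins for the dissertation's actual DRR and similarity models.

```python
# Minimal sketch of intensity-based 3D-2D registration: optimize the orientation of a
# CT volume so its simulated projection matches an intraoperative 2D image.
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import minimize

def project(volume, angles_deg):
    """Rotate the volume about two axes, then sum along the projection axis (crude DRR)."""
    rotated = rotate(volume, angles_deg[0], axes=(0, 1), reshape=False, order=1)
    rotated = rotate(rotated, angles_deg[1], axes=(0, 2), reshape=False, order=1)
    return rotated.sum(axis=0)

def neg_ncc(params, volume, fluoro):
    """Negative normalized cross-correlation between simulated and observed projections."""
    sim = project(volume, params)
    a = sim - sim.mean()
    b = fluoro - fluoro.mean()
    return -np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

ct = np.random.rand(32, 32, 32)         # placeholder preoperative CT volume
fluoro = project(ct, [5.0, -3.0])       # synthetic "intraoperative" projection
res = minimize(neg_ncc, x0=[0.0, 0.0], args=(ct, fluoro), method="Nelder-Mead")
print("recovered rotation angles (deg):", res.x)
```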

    Towards efficient neurosurgery: Image analysis for interventional MRI

    Interventional magnetic resonance imaging (iMRI) is being increasingly used for performing image-guided neurosurgical procedures. Intermittent imaging through iMRI can help a neurosurgeon visualise the target and eloquent brain areas during neurosurgery and lead to better patient outcomes. MRI plays an important role in planning and performing neurosurgical procedures because it can provide high-resolution anatomical images that can be used to discriminate between healthy and diseased tissue, as well as to identify the location and extent of functional areas. This is of significant clinical utility as it helps surgeons maximise target resection and avoid damage to functionally important brain areas. There is clinical interest in propagating pre-operative surgical information to the intra-operative image space, as this allows surgeons to utilise pre-operatively generated surgical plans during surgery. Current state-of-the-art neuronavigation systems achieve this by performing rigid registration of pre-operative and intra-operative images. As the brain undergoes non-linear deformations after craniotomy (brain shift), the rigidly registered pre-operative images no longer align accurately with the intra-operative images acquired during surgery. This limits the accuracy of these neuronavigation systems and hampers the surgeon’s ability to perform more aggressive interventions. In addition, intra-operative images are typically of lower quality, with susceptibility artefacts inducing severe geometric and intensity distortions around areas of resection in echo planar MRI images, significantly reducing their utility in the intra-operative setting. This thesis focuses on the development of novel methods for an image processing workflow that aims to maximise the utility of iMRI in neurosurgery. I present a fast, non-rigid registration algorithm that can leverage information from both structural and diffusion weighted MRI images to localise target lesions and a critical white matter tract, the optic radiation, during surgical management of temporal lobe epilepsy. A novel method for correcting susceptibility artefacts in echo planar MRI images is also developed, which combines fieldmap- and image registration-based correction techniques. The work developed in this thesis has been validated and successfully integrated into the surgical workflow at the National Hospital for Neurology and Neurosurgery in London and is being used clinically to inform surgical decisions.
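
    The fieldmap half of that susceptibility-correction idea can be sketched as a per-line resampling along the phase-encoding axis; the code below is a simplified, hypothetical illustration and omits the registration-based component that the thesis combines it with.

```python
# Minimal sketch of fieldmap-based correction of EPI geometric distortion:
# samples are resampled along the phase-encoding axis in proportion to the
# local field offset. Hypothetical illustration with placeholder data.
import numpy as np
from scipy.interpolate import interp1d

def unwarp_epi(epi, fieldmap_hz, echo_spacing_s, pe_axis=1):
    """Resample each phase-encode line to undo the fieldmap-induced shift."""
    epi_m = np.moveaxis(epi, pe_axis, -1)
    # Shift in voxels is proportional to field offset (Hz) and total readout time.
    shift = np.moveaxis(fieldmap_hz, pe_axis, -1) * echo_spacing_s * epi_m.shape[-1]
    corrected = np.empty_like(epi_m)
    coords = np.arange(epi_m.shape[-1])
    for idx in np.ndindex(epi_m.shape[:-1]):
        # Each observed sample is treated as displaced by 'shift' voxels along PE.
        displaced = coords + shift[idx]
        line = interp1d(displaced, epi_m[idx], bounds_error=False,
                        fill_value=0.0, assume_sorted=False)
        corrected[idx] = line(coords)
    return np.moveaxis(corrected, -1, pe_axis)

epi = np.random.rand(8, 16, 8)                # placeholder distorted EPI volume
fieldmap = 20.0 * np.random.randn(8, 16, 8)   # placeholder fieldmap in Hz
print(unwarp_epi(epi, fieldmap, echo_spacing_s=0.0005).shape)
```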

    Advancing Intra-operative Precision: Dynamic Data-Driven Non-Rigid Registration for Enhanced Brain Tumor Resection in Image-Guided Neurosurgery

    During neurosurgery, medical images of the brain are used to locate tumors and critical structures, but brain tissue shifts make pre-operative images unreliable for accurate removal of tumors. Intra-operative imaging can track these deformations but is not a substitute for pre-operative data. To address this, we use Dynamic Data-Driven Non-Rigid Registration (NRR), a complex and time-consuming image processing operation that adjusts the pre-operative image data to account for intra-operative brain shift. Our review explores a specific NRR method for registering brain MRI during image-guided neurosurgery and examines various strategies for improving the accuracy and speed of that method. We demonstrate that our implementation enables NRR results to be delivered within clinical time constraints while leveraging distributed computing and machine learning to enhance registration accuracy by identifying optimal parameters for the NRR method. Additionally, we highlight challenges associated with its use in the operating room.
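
    A minimal sketch of the parameter-search idea follows, with a hypothetical stand-in for the NRR run-and-score step executed across worker processes; it is not the reviewed NRR implementation.

```python
# Sketch of searching NRR parameters in parallel and keeping the configuration
# with the best alignment score. run_nrr_and_score is a toy placeholder.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_nrr_and_score(params):
    """Placeholder: run non-rigid registration with the given parameters and
    return (alignment_error, params). The 'error' here is a toy function."""
    young_modulus, mesh_resolution = params
    error = (young_modulus - 3000) ** 2 / 1e6 + abs(mesh_resolution - 5)
    return error, params

if __name__ == "__main__":
    grid = product([1500, 3000, 4500], [3, 5, 8])   # candidate parameter sets
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_nrr_and_score, grid))
    best_error, best_params = min(results)
    print("best parameters:", best_params, "error:", best_error)
```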

    Integrated navigation and visualisation for skull base surgery

    Skull base surgery involves the management of tumours located on the underside of the brain and the base of the skull. Skull base tumours are intricately associated with several critical neurovascular structures, making surgery challenging and high risk. Vestibular schwannoma (VS) is a benign nerve sheath tumour arising from one of the vestibular nerves and is the commonest pathology encountered in skull base surgery. The goal of modern VS surgery is maximal tumour removal whilst preserving neurological function and maintaining quality of life, but despite advanced neurosurgical techniques, facial nerve paralysis remains a potentially devastating complication of this surgery. This thesis describes the development and integration of various advanced navigation and visualisation techniques to increase the precision and accuracy of skull base surgery. A novel diffusion magnetic resonance imaging (dMRI) acquisition and processing protocol was developed to improve preoperative delineation of the facial nerve in patients with VS. An automated artificial intelligence (AI)-based framework was developed to segment VS from MRI scans. A user-friendly navigation system capable of integrating dMRI and tractography of the facial nerve, 3D tumour segmentation and intraoperative 3D ultrasound was developed and validated using an anatomically realistic acoustic phantom model of a head including the skull, brain and VS. The optical properties of five types of human brain tumour (meningioma, pituitary adenoma, schwannoma, low- and high-grade glioma) and nine types of healthy brain tissue were examined across a wavelength spectrum of 400 nm to 800 nm in order to inform the development of an intraoperative hyperspectral imaging (iHSI) system. Finally, the functional and technical requirements of an iHSI system were established and a prototype was developed and tested in a first-in-patient study.

    Usefulness of Intraoperative 2D-Ultrasound in the Resection of Brain Tumors

    The surgical approach to brain tumors often uses preoperative images to visualize the characteristics of the pathology and to guide the surgical procedure. However, the usefulness of preoperative images during surgery is limited by the changes the brain undergoes intraoperatively due to craniotomy, inflammation, tumor resection, and cerebrospinal fluid (CSF) drainage, among other factors. For this reason, there is a need for intraoperative imaging methods that allow the surgeon to account for these changes and reflect the real-time anatomical configuration of the brain and tumor. Intraoperative ultrasound (iUS) allows neurosurgeons to guide the procedure without exposing the patient to ionizing radiation or interrupting the surgery. Technological advances have improved image quality, reduced probe size, and simplified the use of the equipment, while new imaging modalities, such as three-dimensional and contrast-enhanced imaging, have expanded the available options. In the context of these advances, the objective of this chapter was to review the current status, usefulness, and challenges of iUS for brain tumor resection through an in-depth review of the literature and the discussion of an illustrative case.

    Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation

    Purpose: During brain tumor surgery, planning and guidance are based on preoperative images which do not account for brain-shift. However, this deformation is a major source of error in image-guided neurosurgery and affects the accuracy of the procedure. In this paper, we present a constraint-based biomechanical simulation method to compensate for craniotomy-induced brain-shift that integrates the deformations of the blood vessels and the cortical surface, using a single intraoperative ultrasound acquisition.
    Methods: Prior to surgery, a patient-specific biomechanical model is built from preoperative images, accounting for the vascular tree in the tumor region and the brain soft tissues. Intraoperatively, a navigated ultrasound acquisition is performed directly in contact with the organ. Doppler and B-mode images are recorded simultaneously, enabling the extraction of the blood vessels and the probe footprint, respectively. A constraint-based simulation is then executed to register the pre- and intraoperative vascular trees as well as the cortical surface with the probe footprint. Finally, the preoperative images are updated to provide the surgeon with images corresponding to the current brain shape for navigation.
    Results: The robustness of our method is first assessed using sparse and noisy synthetic data. In addition, quantitative results for five clinical cases are provided, first using landmarks set on blood vessels, then based on anatomical structures delineated in medical images. The average distances between paired vessel landmarks ranged from 3.51 to 7.32 mm before compensation. With our method, on average 67% of the brain-shift is corrected (range [1.26; 2.33]), against 57% using one of the closest existing works (range [1.71; 2.84]).
    Conclusion: In this paper, a new constraint-based biomechanical simulation method is proposed to compensate for craniotomy-induced brain-shift. While efficiently correcting this deformation, the method is fully integrable into a clinical workflow.
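
    For context, a "percentage of brain-shift corrected" of this kind can be computed as the relative reduction of the mean paired-landmark distance; the sketch below uses placeholder values, not the paper's data.

```python
# How a "percentage of brain-shift corrected" figure can be derived from paired
# landmark distances before and after compensation. Values are illustrative only.
import numpy as np

def percent_corrected(dist_before_mm, dist_after_mm):
    """Relative reduction of the mean landmark distance, in percent."""
    before = np.mean(dist_before_mm)
    after = np.mean(dist_after_mm)
    return 100.0 * (before - after) / before

dist_before = [5.2, 4.1, 6.8, 3.9]   # mm, hypothetical pre-compensation distances
dist_after = [1.8, 1.4, 2.3, 1.5]    # mm, hypothetical post-compensation distances
print(f"{percent_corrected(dist_before, dist_after):.0f}% of the shift corrected")
```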

    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking’s need for a consistent line-of-sight, by the requirement to keep tracked rigid bodies clean and rigidly fixed, and by the associated calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric, and the use of this registration algorithm in the context of a sensor and image-fusion algorithm. The work presented here is a motivating step in a vision towards a heterogeneous tracking framework for image-guided interventions, where knowledge from intraoperative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
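
    As a rough sketch of the kind of patchwise LC2-style similarity underlying such US-MR registration, the code below fits each ultrasound patch as a linear combination of MR intensity and MR gradient magnitude and accumulates the variance explained; it is a simplified 2D illustration, not the GPU implementation developed in the thesis.

```python
# Simplified LC2-style similarity between an ultrasound slice and a resampled
# MR slice: per patch, explain US intensities as a linear combination of MR
# intensity and gradient magnitude, and accumulate the variance explained.
import numpy as np

def lc2_similarity(us, mr, patch=9):
    grad = np.hypot(*np.gradient(mr))        # MR gradient magnitude
    half, total, weight_sum = patch // 2, 0.0, 0.0
    for i in range(half, us.shape[0] - half, patch):
        for j in range(half, us.shape[1] - half, patch):
            u = us[i-half:i+half+1, j-half:j+half+1].ravel()
            m = mr[i-half:i+half+1, j-half:j+half+1].ravel()
            g = grad[i-half:i+half+1, j-half:j+half+1].ravel()
            var_u = u.var()
            if var_u < 1e-8:
                continue                      # skip uniform patches
            A = np.column_stack([m, g, np.ones_like(m)])
            coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
            residual = u - A @ coeffs
            # Variance-weighted fraction of US variance explained by the fit.
            total += var_u * (1.0 - residual.var() / var_u)
            weight_sum += var_u
    return total / weight_sum if weight_sum else 0.0

us = np.random.rand(64, 64)   # placeholder US slice
mr = np.random.rand(64, 64)   # placeholder resampled MR slice
print("LC2-style similarity:", lc2_similarity(us, mr))
```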