205 research outputs found

    Framework for a low-cost intra-operative image-guided neuronavigator including brain shift compensation

    In this paper we present a methodology to address the problem of brain tissue deformation referred to as 'brain shift'. This deformation occurs throughout a neurosurgical intervention and strongly degrades the accuracy of the neuronavigation systems used in clinical routine to date, which rely solely on pre-operative patient imaging to locate the surgical target, such as a tumour or a functional area. After a general description of the framework of our intra-operative image-guided system, we describe a procedure to generate patient-specific finite element meshes of the brain and propose a biomechanical model that can take into account tissue deformations as well as surgical procedures that modify the brain structure, such as tumour or tissue resection.
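
    The abstract does not specify the constitutive model beyond a patient-specific finite element formulation. As a rough, hypothetical illustration of the kind of computation such a biomechanical model involves, the sketch below assembles a small linear-elastic tetrahedral stiffness system, applies a gravity body load, and solves for nodal displacements with some nodes held fixed. The mesh, material constants, and boundary choices are placeholders, not the authors' model.

```python
import numpy as np

# Illustrative soft-tissue-like material constants (not from the paper).
E, nu, rho = 3000.0, 0.45, 1000.0          # Young's modulus [Pa], Poisson ratio, density [kg/m^3]
lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame parameters
mu = E / (2 * (1 + nu))
g = np.array([0.0, 0.0, -9.81])            # assumed gravity direction in the mesh frame

# Isotropic elasticity matrix (Voigt notation, 6x6).
D = np.zeros((6, 6))
D[:3, :3] = lam
D[np.arange(3), np.arange(3)] = lam + 2 * mu
D[np.arange(3, 6), np.arange(3, 6)] = mu

def element_stiffness_and_load(xe):
    """Stiffness (12x12) and lumped gravity load (12,) of one linear tetrahedron with nodes xe (4x3)."""
    M = np.hstack([np.ones((4, 1)), xe])          # rows [1, x, y, z]
    V = abs(np.linalg.det(M)) / 6.0               # element volume
    grads = np.linalg.inv(M)[1:, :]               # 3x4 spatial gradients of the shape functions
    B = np.zeros((6, 12))
    for i in range(4):
        bx, by, bz = grads[:, i]
        B[:, 3*i:3*i+3] = [[bx, 0, 0], [0, by, 0], [0, 0, bz],
                           [by, bx, 0], [0, bz, by], [bz, 0, bx]]
    Ke = V * B.T @ D @ B
    fe = np.tile(rho * V * g / 4.0, 4)            # body force lumped equally onto the 4 nodes
    return Ke, fe

def solve_brain_shift(nodes, tets, fixed):
    """Assemble K u = f over all tetrahedra and solve with at least one fixed (Dirichlet) node."""
    n = len(nodes)
    K = np.zeros((3 * n, 3 * n))
    f = np.zeros(3 * n)
    for tet in tets:
        Ke, fe = element_stiffness_and_load(nodes[tet])
        dofs = np.concatenate([[3 * v, 3 * v + 1, 3 * v + 2] for v in tet])
        K[np.ix_(dofs, dofs)] += Ke
        f[dofs] += fe
    fixed_dofs = np.concatenate([[3 * v, 3 * v + 1, 3 * v + 2] for v in fixed])
    free = np.setdiff1d(np.arange(3 * n), fixed_dofs)
    u = np.zeros(3 * n)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return u.reshape(n, 3)                         # per-node displacement estimate
```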

    Intraoperative Imaging Modalities and Compensation for Brain Shift in Tumor Resection Surgery

    Intraoperative brain shift during neurosurgical procedures is a well-known phenomenon caused by gravity, tissue manipulation, tumor size, loss of cerebrospinal fluid (CSF), and use of medication. For image-guided systems, this phenomenon greatly affects the accuracy of the guidance. During the last several decades, researchers have investigated how to overcome this problem. The purpose of this paper is to present a review of publications concerning different aspects of intraoperative brain shift, especially in tumor resection surgery, such as intraoperative imaging systems, quantification, measurement, modeling, and registration techniques. Clinical experience with intraoperative imaging modalities, and details about registration and modeling methods in connection with brain shift in tumor resection surgery, are the focus of this review. In total, 126 papers on this topic are analyzed in a comprehensive summary and categorized according to fourteen criteria. The result of the categorization is presented in an interactive web tool. Conclusions drawn from the categorization and future trends are discussed at the end of this work.

    Non-rigid registration of serial intra-operative images for automatic brain shift estimation

    Measurement of intra-operative brain motion is important to provide boundary conditions to physics-based deformation models that can be used to register pre- and intra-operative information. In this paper we present and test a technique that can be used to measure brain surface motion automatically. This method relies on a tracked laser range scanner (LRS) that can simultaneously acquire a picture and the 3D physical coordinates of objects within its field of view. This reduces the 3D tracking problem to a 2D non-rigid registration problem, which we solve with a Mutual Information-based algorithm. Results obtained on phantom images and on images acquired intra-operatively, which demonstrate the feasibility of the method, are presented.
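
    The abstract names Mutual Information as the similarity measure driving the 2D non-rigid registration. As a minimal, hypothetical sketch (not the authors' implementation), mutual information between two intensity images can be estimated from their joint histogram as below.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized intensity images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                  # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)       # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = p_ab > 0                               # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# In a non-rigid registration loop one would warp the moving image with the
# current deformation parameters and maximize this score, e.g.
#   score = mutual_information(fixed_image, warp(moving_image, params))
# where warp() is a hypothetical resampling routine.
```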

    A surface registration approach for video-based analysis of intraoperative brain surface deformations.

    Anatomical intra-operative deformation is a major limitation on accuracy in image-guided neurosurgery. Approaches to quantify these deformations based on 3D reconstruction of surfaces have been introduced. For accurate quantification of surface deformation, a robust surface registration method is required. In this paper, we propose a new surface registration method for video-based analysis of intraoperative brain deformations. This registration method includes three terms: the first term is related to image intensities, the second to Euclidean distance, and the third to anatomical landmarks continuously tracked in 2D video. This new surface registration method can be used with any cortical surface textured point cloud computed by stereoscopic or laser range approaches. We show that the overall method, including textured point cloud reconstruction, has a precision within 2 millimeters, which is within the usual rigid registration error of the neuronavigation system before deformations.
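
    The registration cost described here combines three terms (image intensity, Euclidean distance, and landmarks tracked in the video). A hypothetical sketch of such a weighted three-term cost over corresponding textured surface points is given below; the weights, correspondences, and exact term definitions are assumptions, not the published formulation.

```python
import numpy as np

def surface_registration_cost(pts_src, pts_tgt, int_src, int_tgt,
                              lm_src, lm_tgt, w_int=1.0, w_dist=1.0, w_lm=1.0):
    """Weighted three-term cost between two textured cortical point clouds.

    pts_*: (N, 3) corresponding 3D points; int_*: (N,) intensities sampled
    from the video texture; lm_*: (K, 3) matched anatomical landmarks tracked
    in the video. Correspondences are assumed to be given.
    """
    term_intensity = np.mean((int_src - int_tgt) ** 2)                  # image-intensity term
    term_distance = np.mean(np.linalg.norm(pts_src - pts_tgt, axis=1))  # Euclidean distance term
    term_landmark = np.mean(np.linalg.norm(lm_src - lm_tgt, axis=1))    # tracked-landmark term
    return w_int * term_intensity + w_dist * term_distance + w_lm * term_landmark
```

    In an actual registration, this cost would be minimized over the parameters of the surface transformation, re-evaluating the correspondences at each iteration.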

    A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation

    A reliable Ultrasound (US)-to-US registration method to compensate for brain shift would substantially improve Image-Guided Neurological Surgery. Developing such a registration method is very challenging, due to factors such as missing correspondences between images, the complexity of brain pathology, and the demand for fast computation. We propose a novel feature-driven active framework. Here, landmarks and their displacements are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, the user can actively add new landmarks, based on the image context and visualization of the uncertainty measure provided by the GP, to further improve the result. We retrospectively demonstrate our registration framework as a robust and accurate brain shift compensation solution on clinical data acquired during neurosurgery.
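
    The core interpolation step, a Gaussian Process regressing a dense deformation field from sparse landmark displacements, can be sketched as below. For simplicity this uses a fixed squared-exponential kernel rather than the variogram and grid-search kernel estimation described in the abstract; all parameter values are placeholders.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=20.0, variance=4.0):
    """Squared-exponential covariance between two sets of 3D points (units: mm)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_deformation_field(landmarks, displacements, query_points, noise=0.25):
    """Interpolate a dense deformation field from sparse landmark displacements.

    landmarks: (N, 3) landmark positions; displacements: (N, 3) their measured
    motion; query_points: (M, 3) grid positions to deform. Returns the (M, 3)
    posterior mean displacements and an (M,) predictive variance, which can be
    visualized as an uncertainty map to guide adding new landmarks.
    """
    K = rbf_kernel(landmarks, landmarks) + noise * np.eye(len(landmarks))
    K_star = rbf_kernel(query_points, landmarks)
    alpha = np.linalg.solve(K, displacements)                 # (N, 3)
    mean = K_star @ alpha                                     # GP posterior mean, per axis
    v = np.linalg.solve(K, K_star.T)                          # (N, M)
    prior_var = rbf_kernel(query_points[:1], query_points[:1])[0, 0]  # k(x, x) is constant for an RBF
    var = prior_var - np.sum(K_star * v.T, axis=1)
    return mean, var
```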

    A method for the assessment of time-varying brain shift during navigated epilepsy surgery

    Image guidance is widely used in neurosurgery. Tracking systems (neuronavigators) allow registering the preoperative image space to the surgical space. The localization accuracy is influenced by technical and clinical factors, such as brain shift. This paper aims at providing a quantitative measure of the time-varying brain shift during open epilepsy surgery, and at measuring the pattern of brain deformation with respect to three potentially meaningful parameters: craniotomy area, craniotomy orientation, and gravity vector direction in the image reference frame.
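
    As a small, assumed-for-illustration example of the geometric quantities mentioned (craniotomy area, craniotomy orientation, gravity direction in the image reference frame), the sketch below derives them from a registration rotation and digitized craniotomy boundary points; it is not the assessment method of the paper.

```python
import numpy as np

def gravity_in_image_frame(R_image_from_patient, gravity_patient=(0.0, 0.0, -1.0)):
    """Rotate the (unit) gravity direction from patient/tracker space into the
    pre-operative image reference frame using the registration rotation (3x3)."""
    g = np.asarray(gravity_patient, dtype=float)
    g_img = np.asarray(R_image_from_patient) @ g
    return g_img / np.linalg.norm(g_img)

def craniotomy_area_and_normal(boundary_pts):
    """Approximate area and orientation (unit normal) of a roughly planar
    craniotomy from digitized boundary points (N, 3), ordered along the rim."""
    pts = np.asarray(boundary_pts, dtype=float)
    c = pts.mean(axis=0)
    # Plane fit: the normal is the least-significant right-singular vector.
    _, _, vt = np.linalg.svd(pts - c)
    normal = vt[-1]
    # Fan triangulation around the centroid for the enclosed area.
    area = 0.0
    for i in range(len(pts)):
        a, b = pts[i] - c, pts[(i + 1) % len(pts)] - c
        area += 0.5 * np.linalg.norm(np.cross(a, b))
    return area, normal
```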

    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line-of-sight, by the requirement to keep tracked rigid bodies clean and rigidly fixed, and by the calibration workflow they impose. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes: the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric; and the use of this registration algorithm in the context of a sensor- and image-fusion algorithm. The work presented here is a motivating step in a vision towards a heterogeneous tracking framework for image-guided interventions, where the knowledge from intraoperative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can be combined into a localization system greater than the sum of its parts.
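
    The registration algorithm is built on the existing LC2 (Linear Correlation of Linear Combination) similarity for MR-ultrasound alignment. As a simplified, CPU-only sketch of the idea (not the GPU implementation from the thesis), the patchwise version below fits each ultrasound patch as a linear combination of the co-located MR intensity and MR gradient magnitude and scores how much of the ultrasound variance that fit explains.

```python
import numpy as np

def lc2_patch(us_patch, mr_patch, mr_grad_patch, eps=1e-6):
    """LC2 similarity on one patch: least-squares fit
    us ~ a*mr + b*|grad mr| + c, scored as explained variance in [0, 1]."""
    u = us_patch.ravel()
    A = np.column_stack([mr_patch.ravel(), mr_grad_patch.ravel(), np.ones(u.size)])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    residual = u - A @ coef
    var_u = u.var()
    if var_u < eps:                      # uninformative (flat) ultrasound patch
        return 0.0
    return float(max(0.0, 1.0 - residual.var() / var_u))

def lc2_similarity(us_img, mr_img, mr_grad, patch=9):
    """Average patchwise LC2 over a grid of non-overlapping patches of a
    resampled MR slice and the corresponding ultrasound image."""
    scores = []
    for i in range(0, us_img.shape[0] - patch, patch):
        for j in range(0, us_img.shape[1] - patch, patch):
            s = (slice(i, i + patch), slice(j, j + patch))
            scores.append(lc2_patch(us_img[s], mr_img[s], mr_grad[s]))
    return float(np.mean(scores)) if scores else 0.0
```

    In a 2D-3D setting, this score would be evaluated after resampling the 3D pre-operative volume along the tracked ultrasound plane and maximized over the pose parameters.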

    Tracking and Mapping in Medical Computer Vision: A Review

    As computer vision algorithms are becoming more capable, their applications in clinical systems will become more pervasive. These applications include diagnostics such as colonoscopy and bronchoscopy, guiding biopsies, minimally invasive interventions and surgery, automating instrument motion, and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing and applying algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. We then review datasets provided in the field and the clinical needs therein. Then, we delve in depth into the algorithmic side and summarize recent developments, which should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We focus on algorithms for deformable environments while also reviewing the essential building blocks in rigid tracking and mapping, since there is a large amount of crossover in methods. Finally, we discuss the current state of the tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications in the field. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation. Comment: 31 pages, 17 figures.