
    Comparing Measured and Theoretical Target Registration Error of an Optical Tracking System

    The goal of this thesis is to experimentally measure the accuracy of an optical tracking system used in commercial surgical navigation systems. We measure accuracy by constructing a mechanism that allows a tracked target to move with spherical motion (i.e., there exists a single point on the mechanism, the center of the sphere, that does not change position when the tracked target is moved). We imagine that the center of the sphere is the tip of a surgical tool rigidly attached to the tracked target. The location of the tool tip cannot be measured directly by the tracking system (because it is impossible to attach a tracking marker to the tool tip) and must be calculated from the measured location and orientation of the tracking target. Any measurement error in the tracking system will cause the calculated position of the tool tip to change as the target is moved; the spread of the calculated tool tip positions is a measurement of tracking error called the target registration error (TRE). The observed TRE will be compared to an analytic model of TRE to assess the model's predictions.
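    A minimal sketch of how such a spread could be computed (not taken from the thesis), assuming each tracked pose is reported as a 3x3 rotation matrix and a translation vector and that the fixed tool-tip offset in the target's local frame is known; all names are illustrative.

```python
import numpy as np

def tip_positions(rotations, translations, tip_offset):
    """Map a fixed tool-tip offset (expressed in the tracked target's
    local frame) into tracker coordinates for each measured pose."""
    return np.array([R @ tip_offset + t
                     for R, t in zip(rotations, translations)])

def observed_tre(tips):
    """RMS spread of the computed tip positions about their centroid.
    With error-free tracking every pose maps the tip to the same point,
    so any spread reflects tracking error."""
    centroid = tips.mean(axis=0)
    return np.sqrt(np.mean(np.sum((tips - centroid) ** 2, axis=1)))

# Hypothetical usage with poses reported by the tracker:
# Rs, ts = load_tracked_poses(...)   # lists of 3x3 rotations, 3-vector translations
# print(observed_tre(tip_positions(Rs, ts, tip_offset)))
```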

    A mixed reality framework for surgical navigation: approach and preliminary results

    The overarching purpose of this research is to understand whether Mixed Reality can enhance a surgeon’s manipulation skills during minimally invasive procedures. Minimally invasive surgery (MIS) uses small cuts in the skin, or sometimes natural orifices, to deploy instruments inside a patient’s body, while a live video feed of the surgical site is provided by an endoscopic camera and displayed on a screen. MIS is associated with many benefits: small scars, less pain, and shorter hospitalization time compared to traditional open surgery. However, these benefits come at a cost: because surgeons have to work by looking at a monitor, rather than down at their own hands, MIS disrupts their eye-hand coordination and makes even simple surgical maneuvers challenging to perform. In this study, we use Mixed Reality technology to superimpose anatomical models over the surgical site and explore whether it can mitigate this problem.

    Image-based registration methods for quantification and compensation of prostate motion during trans-rectal ultrasound (TRUS)-guided biopsy

    Prostate biopsy is the clinical standard for cancer diagnosis and is typically performed under two-dimensional (2D) transrectal ultrasound (TRUS) for needle guidance. Unfortunately, most early stage prostate cancers are not visible on ultrasound, and the procedure suffers from high false negative rates due to the lack of visible targets. Fusion of pre-biopsy MRI to 3D TRUS for targeted biopsy could improve cancer detection rates and the volume of tumor sampled. In MRI-TRUS fusion biopsy systems, patient or prostate motion during the procedure causes misalignment of the MR targets mapped to the live 2D TRUS images, limiting the targeting accuracy of the biopsy system. To sample the smallest clinically significant tumours of 0.5 cm3 with 95% confidence, the root mean square (RMS) error of the biopsy system must be kept correspondingly small. The target misalignments due to intermittent prostate motion during the procedure can be compensated by registering the live 2D TRUS images acquired during the biopsy procedure to the pre-acquired baseline 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. We developed an intensity-based 2D-3D rigid registration algorithm and validated it by calculating the target registration error (TRE) using manually identified fiducials within the prostate. We discuss two different approaches that can be used to improve the robustness of this registration to meet the clinical requirements. First, we evaluated the impact of intra-procedural 3D TRUS imaging on motion compensation accuracy, since the limited anatomical context available in live 2D TRUS images could limit the robustness of the 2D-3D registration. The results indicated that TRE improved when intra-procedural 3D TRUS images were used in registration, with larger improvements in the base and apex regions than in the mid-gland region. Second, we developed and evaluated a registration algorithm whose optimization is based on learned prostate motion characteristics. Compared to our initial approach, the updated optimization improved the robustness of the 2D-3D registration, reducing the number of registrations with a TRE > 5 mm from 9.2% to 1.2% with an overall RMS TRE of 2.3 mm. The methods developed in this work are intended to improve the needle targeting accuracy of 3D TRUS-guided biopsy systems. Their successful integration into current 3D TRUS-guided systems could improve the overall cancer detection rate during biopsy and help achieve earlier diagnosis and fewer repeat biopsy procedures in prostate cancer diagnosis.
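    The abstract does not spell out the registration details, but the core of an intensity-based 2D-3D rigid registration can be sketched as follows: resample, from the baseline 3D volume, the oblique plane corresponding to a candidate pose and score it against the live 2D image with a similarity metric. The sketch below is illustrative only, not the thesis implementation; it assumes a normalized cross-correlation metric and that `plane_grid` holds the live image's pixel positions in volume voxel coordinates at the identity pose.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.spatial.transform import Rotation

def ncc(a, b):
    """Normalized cross-correlation of two intensity images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def slice_cost(params, live_2d, volume_3d, plane_grid):
    """Cost of one candidate rigid pose (3 Euler angles in degrees plus a
    3-vector translation in voxels). plane_grid is an (H*W, 3) array of the
    live image's pixel positions in volume voxel coordinates at the identity
    pose -- an assumption of this sketch."""
    R = Rotation.from_euler("xyz", params[:3], degrees=True).as_matrix()
    pts = plane_grid @ R.T + params[3:]                 # move the plane
    vals = map_coordinates(volume_3d, pts.T, order=1)   # resample the volume
    return -ncc(vals.reshape(live_2d.shape), live_2d)   # an optimizer minimizes this

# e.g. scipy.optimize.minimize(slice_cost, np.zeros(6), method="Powell",
#                              args=(live_2d, volume_3d, plane_grid))
```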

    Development of a Three-Dimensional Image-Guided Needle Positioning System for Small Animal Interventions

    Conventional needle positioning techniques for small animal microinjections are fraught with issues of repeatability and targeting accuracy. To improve the outcomes of these interventions, a small animal needle positioning system guided by micro-computed tomography (micro-CT) imaging was developed. A phantom was developed to calibrate the geometric accuracy of micro-CT scanners to a traceable standard of measurement. Use of the phantom ensures the geometric fidelity of micro-CT images for image-guided interventions or other demanding quantitative applications. The design of a robot is described which features a remote-center-of-motion architecture and is compact enough to operate within a micro-CT bore. Methods to calibrate the robot and register it to a micro-CT scanner are introduced. The performance of the robot is characterized, and a mean targeting accuracy of 149 ± 41 ”m is estimated. Finally, the robot is demonstrated in an in vivo biomedical application.
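    The abstract does not give the registration math, but a standard point-based approach to registering the robot to the scanner is a least-squares rigid fit between fiducial positions measured in both the robot and micro-CT frames. The following is a minimal sketch of that generic approach (an assumption for illustration, not necessarily the method used), based on the Arun/Kabsch SVD solution.

```python
import numpy as np

def rigid_register(robot_pts, ct_pts):
    """Least-squares rigid transform (R, t) mapping robot-frame fiducial
    positions onto their micro-CT positions (Arun/Kabsch SVD solution)."""
    a = robot_pts - robot_pts.mean(axis=0)
    b = ct_pts - ct_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(a.T @ b)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = ct_pts.mean(axis=0) - R @ robot_pts.mean(axis=0)
    return R, t

def rms_error(robot_pts, ct_pts, R, t):
    """RMS residual of the fit (fiducial registration error)."""
    mapped = robot_pts @ R.T + t
    return np.sqrt(np.mean(np.sum((ct_pts - mapped) ** 2, axis=1)))
```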

    Augmented Image-Guidance for Transcatheter Aortic Valve Implantation

    The introduction of transcatheter aortic valve implantation (TAVI), an innovative stent-based technique for delivery of a bioprosthetic valve, has resulted in a paradigm shift in treatment options for elderly patients with aortic stenosis. While there have been major advancements in valve design and access routes, TAVI still relies largely on single-plane fluoroscopy for intraoperative navigation and guidance, which provides only gross imaging of anatomical structures. Inadequate imaging leading to suboptimal valve positioning contributes to many of the early complications experienced by TAVI patients, including valve embolism, coronary ostia obstruction, paravalvular leak, heart block, and secondary nephrotoxicity from contrast use. A potential method of providing improved image guidance for TAVI is to combine the information derived from intra-operative fluoroscopy and transesophageal echocardiography (TEE) with pre-operative CT data. This would allow the 3D anatomy of the aortic root to be visualized along with real-time information about valve and prosthesis motion. The combined information can be visualized as a 'merged' image, where the different imaging modalities are overlaid upon each other, or as an 'augmented' image, where the locations of key target features identified on one image are displayed on a different imaging modality. This research develops image registration techniques to bring fluoroscopy, TEE, and CT models into a common coordinate frame, with an image processing workflow that is compatible with the TAVI procedure. The techniques are designed to be fast enough to allow real-time image fusion and visualization during the procedure, with an intra-procedural set-up requiring only a few minutes. TEE-to-fluoroscopy registration was achieved using a single-perspective TEE probe pose estimation technique. The alignment of CT and TEE images was achieved using custom-designed algorithms to extract aortic root contours from XPlane TEE images and match the shape of these contours to a CT-derived surface model. Registration accuracy was assessed on porcine and human images by identifying targets (such as guidewires or coronary ostia) on the different imaging modalities and measuring the correspondence of these targets after registration. The merged images demonstrated good visual alignment of aortic root structures, and quantitative assessment showed errors of less than 1.5 mm for TEE-fluoroscopy registration and less than 6 mm for CT-TEE registration. These results suggest that the image processing techniques presented have potential for development into a clinical tool to guide TAVI. Such a tool could reduce TAVI complications, reducing morbidity and mortality and allowing for a safer procedure.
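    As an illustration of the common-coordinate-frame idea, CT-identified targets can be mapped through the estimated CT-to-TEE and TEE-to-fluoroscopy transforms and compared with the same targets identified directly on fluoroscopy. This is a sketch under assumed 4x4 homogeneous rigid transforms, not the thesis code; the function and variable names are hypothetical.

```python
import numpy as np

def to_h(points):
    """Nx3 points -> Nx4 homogeneous coordinates."""
    return np.hstack([points, np.ones((len(points), 1))])

def map_ct_targets(ct_targets, T_ct_to_tee, T_tee_to_fluoro):
    """Chain the two estimated 4x4 rigid transforms to carry CT-identified
    targets (e.g. coronary ostia) into the fluoroscopy coordinate frame."""
    T = T_tee_to_fluoro @ T_ct_to_tee
    return (to_h(ct_targets) @ T.T)[:, :3]

def target_errors(mapped, reference):
    """Per-target distance between the mapped CT points and the same targets
    identified directly in the fluoroscopy frame."""
    return np.linalg.norm(mapped - reference, axis=1)
```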

    3D fusion of histology to multi-parametric MRI for prostate cancer imaging evaluation and lesion-targeted treatment planning

    Multi-parametric magnetic resonance imaging (mpMRI) of localized prostate cancer has the potential to support detection, staging and localization of tumors, as well as selection, delivery and monitoring of treatments. Delineating prostate cancer tumors on imaging could potentially further support the clinical workflow by enabling precise monitoring of tumor burden in active-surveillance patients, optimized targeting of image-guided biopsies, and targeted delivery of treatments to decrease morbidity and improve outcomes. Evaluating the performance of mpMRI for prostate cancer imaging and delineation ideally includes comparison to an accurately registered reference standard, such as prostatectomy histology, for the locations of tumor boundaries on mpMRI. There are key gaps in knowledge regarding how to accurately register histological reference standards to imaging, and consequently further gaps in knowledge regarding the suitability of mpMRI for tasks, such as tumor delineation, that require such reference standards for evaluation. To obtain an understanding of the magnitude of the mpMRI-histology registration problem, we quantified the position, orientation and deformation of whole-mount histology sections relative to the formalin-fixed tissue slices from which they were cut. We found that (1) modeling isotropic scaling accounted for the majority of the deformation with a further small but statistically significant improvement from modeling affine transformation, and (2) due to the depth (mean±standard deviation (SD) 1.1±0.4 mm) and orientation (mean±SD 1.5±0.9°) of the sectioning, the assumption that histology sections are cut from the front faces of tissue slices, common in previous approaches, introduced a mean error of 0.7 mm. To determine the potential consequences of seemingly small registration errors such as described above, we investigated the impact of registration accuracy on the statistical power of imaging validation studies using a co-registered spatial reference standard (e.g. histology images) by deriving novel statistical power formulae that incorporate registration error. We illustrated, through a case study modeled on a prostate cancer imaging trial at our centre, that submillimeter differences in registration error can have a substantial impact on the required sample sizes (and therefore also the study cost) for studies aiming to detect mpMRI signal differences due to 0.5 – 2.0 cm3 prostate tumors. With the aim of achieving highly accurate mpMRI-histology registrations without disrupting the clinical pathology workflow, we developed a three-stage method for accurately registering 2D whole-mount histology images to pre-prostatectomy mpMRI that allowed flexible placement of cuts during slicing for pathology and avoided the assumption that histology sections are cut from the front faces of tissue slices. The method comprised a 3D reconstruction of histology images, followed by 3D–3D ex vivo–in vivo and in vivo–in vivo image transformations. The 3D reconstruction method minimized fiducial registration error between cross-sections of non-disruptive histology- and ex-vivo-MRI-visible strand-shaped fiducials to reconstruct histology images into the coordinate system of an ex vivo MR image. We quantified the mean±standard deviation target registration error of the reconstruction to be 0.7±0.4 mm, based on the post-reconstruction misalignment of intrinsic landmark pairs. 
We also compared our fiducial-based reconstruction to an alternative reconstruction based on mutual-information-based registration, an established method for multi-modality registration. We found that the mean target registration error for the fiducial-based method (0.7 mm) was lower than that for the mutual-information-based method (1.2 mm), and that the mutual-information-based method was less robust to initialization error due to multiple sources of error, including the optimizer and the mutual information similarity metric. The second stage of the histology–mpMRI registration used interactively defined 3D–3D deformable thin-plate-spline transformations to align ex vivo to in vivo MR images to compensate for deformation due to endorectal MR coil positioning, surgical resection and formalin fixation. The third stage used interactively defined 3D–3D rigid or thin-plate-spline transformations to co-register in vivo mpMRI images to compensate for patient motion and image distortion. The combined mean registration error of the histology–mpMRI registration was quantified to be 2 mm using manually identified intrinsic landmark pairs. Our data set, comprising mpMRI, target volumes contoured by four observers and co-registered contoured and graded histology images, was used to quantify the positive predictive values and variability of observer scoring of lesions following the Prostate Imaging Reporting and Data System (PI-RADS) guidelines, the variability of target volume contouring, and appropriate expansion margins from target volumes to achieve coverage of histologically defined cancer. The analysis of lesion scoring showed that a PI-RADS overall cancer likelihood of 5, denoting “highly likely cancer”, had a positive predictive value of 85% for Gleason 7 cancer (and 93% for lesions with volumes >0.5 cm3 measured on mpMRI) and that PI-RADS scores were positively correlated with histological grade (ρ=0.6). However, the analysis also showed interobserver differences in PI-RADS score of 0.6 to 1.2 (on a 5-point scale) and an agreement kappa value of only 0.30. The analysis of target volume contouring showed that target volume contours with suitable margins can achieve near-complete histological coverage for detected lesions, despite the presence of high interobserver spatial variability in target volumes. Prostate cancer imaging and delineation have the potential to support multiple stages in the management of localized prostate cancer. Targeted biopsy procedures with optimized targeting based on tumor delineation may help distinguish patients who need treatment from those who need active surveillance. Ongoing monitoring of tumor burden based on delineation in patients undergoing active surveillance may help identify those who need to progress to therapy early while the cancer is still curable. Preferentially targeting therapies at delineated target volumes may lower the morbidity associated with aggressive cancer treatment and improve outcomes in low-intermediate-risk patients. Measurements of the accuracy and variability of lesion scoring and target volume contouring on mpMRI will clarify its value in supporting these roles.
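The power formulae derived in this work are not reproduced here, but the general mechanism they address can be illustrated with the standard two-sample, normal-approximation sample-size calculation: if misregistration of the reference standard attenuates the observable tumor-versus-benign signal difference, the required sample size grows with the square of that attenuation. The sketch below is a generic illustration under that assumption, not the thesis' derived formula; the attenuation factors are hypothetical.

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for detecting a mean
    mpMRI signal difference `delta` given noise standard deviation `sigma`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

# Hypothetical attenuation factors: fraction of the true tumor-vs-benign
# signal difference that survives misregistration of the reference standard.
for retained in (1.0, 0.9, 0.8):
    print(retained, round(n_per_group(delta=0.5 * retained, sigma=1.0)))
```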

    Robotic-assisted internal fixation of hip fractures: a fluoroscopy-based intraoperative registration technique

    The internal fixation of proximal femoral (hip) fractures is the most frequently performed orthopaedic surgical procedure. When using a sliding compression hip screw, a commonly used fixation device, accurate positioning of the device within the femoral neck and head is achieved by first drilling a pilot hole with a guide wire (surgical drill bit); a cannulated component of the hip screw is then inserted over this guide wire. In practice, however, this fluoroscopically controlled drilling process is severely complicated by a depth-perception problem, and a surgeon may require several attempts to achieve a satisfactory guide wire placement. A prototype robotic-assisted orthopaedic surgery system has therefore been developed, with a view to achieving accurate right-first-time guide wire insertions. This paper describes the non-invasive digital X-ray photogrammetry-based registration technique which supports the proposed robotic-assisted drilling scenario. Results from preliminary laboratory (in vitro) trials employing this registration technique indicate that the cumulative error associated with the entire X-ray-guided robotic system is within acceptable limits for the guide wire insertion process.
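    The registration technique itself is only summarized above. As a generic illustration (not the paper's pipeline), X-ray photogrammetry of this kind can be checked by projecting known 3D fiducial positions through a calibrated 3x4 projection matrix, assumed here as a pinhole model of the fluoroscope, and measuring the reprojection error against their detected image positions.

```python
import numpy as np

def project(P, pts3d):
    """Project 3D fiducial positions through a 3x4 X-ray projection matrix
    P (pinhole model assumption) to 2D image coordinates."""
    h = np.hstack([pts3d, np.ones((len(pts3d), 1))]) @ P.T
    return h[:, :2] / h[:, 2:3]

def reprojection_error(P, pts3d, pts2d):
    """RMS distance (in pixels) between the projected fiducials and their
    detected image positions; a proxy for registration quality."""
    d = project(P, pts3d) - pts2d
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))
```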

    Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction

    We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and the recovery of accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: an X-ray anode that shifts spatially around the patient/object, and a patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom and a real brachytherapy session were carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths and shapes with millimetric precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good and inexpensive alternative to other radiological modalities such as CT.
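    To make the 3D-recovery step concrete: once two X-ray poses are calibrated, a landmark visible in both images can be triangulated with the standard linear (DLT) method. This is a textbook sketch, not the authors' implementation; P1 and P2 stand for the calibrated 3x4 projection matrices of the two poses.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its 2D coordinates
    x1, x2 in two views with calibrated 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# An anatomic length is then the distance between two triangulated landmarks:
# np.linalg.norm(triangulate(P1, P2, a1, a2) - triangulate(P1, P2, b1, b2))
```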

    Transformation in orbital reconstruction

    The orbits provide support and protection for several soft-tissue structures associated with vision. A fracture of the orbital walls may affect globe position and vision, and surgical intervention may be indicated to restore the bony anatomy and alleviate sequelae. Reconstruction is challenging due to the orbit’s complex shape and limited overview. Computer-assisted surgery (CAS) minimizes surgical risk and optimizes the reconstruction result. An overview of current CAS approaches and the results of validation studies that show their beneficial effect is provided in the Introduction Part. The aim of this thesis is to improve the current CAS workflow. The first innovation is the introduction of the Orbital Implant Positioning Frame (OIPF), which provides three-dimensional (3D) assessment of the rotation and translation parameters of implant position for postoperative evaluation. The Navigation Chapters describe the OIPF’s intraoperative use. Real-time, intuitive implant positioning feedback can be provided through the combination of the OIPF, an insertion instrument (TOP) and surgical navigation. This feedback improves implant positioning and reduces operating time. Use of the instrument without navigation also proves to have a beneficial effect on positioning accuracy. In the Registration Chapters, two novel registration workflows for craniomaxillofacial surgical navigation are introduced: virtual splint registration and registration-free navigation. The accuracy of virtual splint registration proved comparable to that of bone-anchored fiducial registration, while invasiveness and radiation exposure were reduced. The results for registration-free navigation were contradictory: it was the most accurate method with electromagnetic tracking, but the least accurate with optical tracking. The Revision Chapters concern secondary posttraumatic reconstruction with patient-specific implants (PSIs). In a cohort study, design options for PSIs are provided and the clinical results evaluated. Globe position significantly improved and double vision was significantly reduced after secondary PSI reconstruction. A novel surgical workflow using PSIs, specifically for secondary orbitozygomatic reconstruction, is described in the final Chapter. This ‘Orbit First’ method allows accurate reconstruction of the orbit independent of the obtained zygoma position.
    • 

    corecore