
    Registration accuracy for MR images of the prostate using a subvolume based registration protocol

    Background: In recent years, there has been a considerable research effort concerning the integration of magnetic resonance imaging (MRI) into the external radiotherapy workflow, motivated by its superior soft tissue contrast compared to computed tomography. Image registration is a necessary step in many applications, e.g. patient positioning and therapy response assessment with repeated imaging. In this study, we investigate the dependence of registration accuracy on the size of the registration volume for a subvolume based rigid registration protocol for MR images of the prostate.
    Methods: Ten patients were imaged four times each over the course of radiotherapy treatment using a T2 weighted sequence. The images were registered to each other using a mean square distance metric and a step gradient optimizer for registration volumes of different sizes. The precision of the registrations was evaluated using the center of mass distance between the manually defined prostates in the registered images. The optimal size of the registration volume was determined by minimizing the standard deviation of these distances.
    Results: We found that the prostate position was most uncertain in the anterior-posterior (AP) direction when using traditional full volume registration. The improvement in the standard deviation of the mean center of mass distance between the prostate volumes using a registration volume optimized to the prostate was 3.9 mm (p < 0.001) in the AP direction. The optimum registration volume size was a 0 mm margin added to the prostate gland as outlined in the first image series.
    Conclusions: Repeated MR imaging of the prostate, whether for therapy set-up or for therapy assessment, requires high precision tissue registration. With a subvolume based registration, the prostate registration uncertainty can be reduced to the order of 1 mm (1 SD), compared to several millimeters for registration based on the whole pelvis.
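    The protocol described above (mean square distance metric, step gradient optimizer, metric restricted to a subvolume around the prostate) can be sketched with SimpleITK. This is a minimal illustration under assumed file names and parameter values, not the authors' implementation; the mask here stands in for the prostate outline from the first image series.

```python
import SimpleITK as sitk

# Illustrative file names (assumptions): the first (baseline) T2 series is the
# fixed image, a follow-up series is the moving image, and a binary mask holds
# the prostate as outlined in the first series.
fixed = sitk.ReadImage("baseline_T2.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("followup_T2.nii.gz", sitk.sitkFloat32)
prostate_mask = sitk.ReadImage("prostate_mask_series1.nii.gz", sitk.sitkUInt8)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()                    # mean square distance metric
reg.SetMetricFixedMask(prostate_mask)           # restrict the metric to the prostate subvolume
reg.SetOptimizerAsRegularStepGradientDescent(   # step-gradient style optimizer
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Initialize a rigid (6 degree-of-freedom) transform from the image geometry.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

rigid_transform = reg.Execute(fixed, moving)    # rigid transform mapping follow-up to baseline
```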

    A surface-based approach to determine key spatial parameters of the acetabulum in a standardized pelvic coordinate system

    Accurately determining the spatial relationship between the pelvis and acetabulum is challenging due to their inherently complex three-dimensional (3D) anatomy. A standardized 3D pelvic coordinate system (PCS) and precise assessment of acetabular orientation would enable this relationship to be determined. We present a surface-based method to establish a reliable PCS and develop software for semi-automatic measurement of acetabular spatial parameters. Vertices on the acetabular rim were manually extracted as an eigenpoint set after the 3D models were imported into the software. A reliable PCS consisting of the anterior pelvic plane, midsagittal pelvic plane, and transverse pelvic plane was then computed by iteration on the mesh data. A spatial circle was fitted as a succinct description of the acetabular rim. Finally, a series of mutual spatial parameters between the pelvis and acetabulum were determined semi-automatically, including the center of rotation, radius, and acetabular orientation. Pelvic models were reconstructed from high-resolution computed tomography images. Inter- and intra-rater correlations for measurements of the mutual spatial parameters were almost perfect, showing that our method affords highly reproducible measurements. The approach will thus be useful for analyzing anatomic data and has potential applications in preoperative planning for individuals receiving total hip arthroplasty.
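    One common way to fit a spatial circle to rim vertices, as the abstract describes, is a plane fit followed by a 2D least-squares circle fit. The NumPy sketch below illustrates that idea only; it is not the paper's algorithm (the PCS construction and iteration on mesh data are not reproduced), and the function name and interface are assumptions.

```python
import numpy as np

def fit_rim_circle(rim_points):
    """Fit a spatial circle to acetabular rim vertices (N x 3 array).

    Returns the circle center (an estimate of the center of rotation),
    the radius, and the plane normal (related to acetabular orientation).
    """
    centroid = rim_points.mean(axis=0)
    centered = rim_points - centroid

    # Best-fit plane: the normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    u, v, normal = vt[0], vt[1], vt[2]

    # Project the rim points into the 2D in-plane basis (u, v).
    p2d = np.column_stack((centered @ u, centered @ v))

    # Algebraic least-squares circle fit: x^2 + y^2 = 2ax + 2by + c, with r^2 = c + a^2 + b^2.
    A = np.column_stack((2.0 * p2d, np.ones(len(p2d))))
    rhs = (p2d ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)

    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u + b * v
    return center, radius, normal
```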

    Individualised Modelling for Preoperative Planning of Total Knee Replacement Surgery

    Total knee replacement (TKR) surgery is routinely prescribed for patients with severe knee osteoarthritis to alleviate pain and restore kinematics. Although the procedure has proven successful in reducing joint pain, the number of failures and low patient satisfaction suggest that, while reoperations are few, the surgery frequently fails to restore function in full. The main cause is surgical techniques that inadequately address the balancing of the knee soft tissues. Preoperative planning allows subject-specific cutting guides to be manufactured, which improves the placement of the prosthesis; however, the knee soft tissue is ignored. The objective of this dissertation was to create an optimized preoperative planning procedure that computes the soft tissue balance together with the placement of the prosthesis to ensure mechanical stability. The dissertation comprises the development of CT-based static and quasi-static knee models able to estimate the postoperative length of the lateral collateral ligaments using a dataset of seven TKR patients; in addition, a subject-specific dynamic musculoskeletal model of the lower limb was created using in vivo knee contact forces to perform the same analysis during walking. The models were evaluated on their ability to predict postoperative elongation using a threshold of 10 % of the preoperative length, through which the model detected whether an elongation was acceptable. The results showed that the subject-specific static model is the best candidate for inclusion in the optimized, subject-specific, preoperative planning framework; the full-order musculoskeletal model allowed the postoperative length of the ligaments to be estimated during walking and, at least in principle, during any other activity. Unlike the methodology currently used in the clinic, this optimized preoperative planning framework might help the surgeon understand how the position of the TKR affects the knee soft tissue.
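    The acceptance criterion described above (elongation within 10 % of the preoperative ligament length) reduces to a simple relative-change check. The snippet below is a sketch of that rule; whether the dissertation applies it one-sided or symmetrically is not stated here, so the absolute-value form is an assumption, and the function name is illustrative.

```python
def elongation_acceptable(pre_length_mm: float, post_length_mm: float,
                          threshold: float = 0.10) -> bool:
    """Return True if the predicted postoperative ligament length stays within
    the threshold (here 10 %) of the preoperative length.

    Note: treating the criterion as symmetric (|change| <= threshold) is an
    assumption of this sketch.
    """
    relative_change = (post_length_mm - pre_length_mm) / pre_length_mm
    return abs(relative_change) <= threshold


# Example: a ligament of 55 mm predicted to reach 58 mm postoperatively.
print(elongation_acceptable(55.0, 58.0))  # True: |change| is about 5.5 %, within 10 %
```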

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally invasive robotic-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon's experience and subject to inconsistencies. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) image acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative behavior in order to present enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be generalized to other C-arm-based image guidance and to additional extensions in robotic surgery.
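    At the core of any augmented-reality overlay like the one described above is the projection of registered 3D plan geometry into the endoscope image. The sketch below shows only that generic pinhole-projection step; the dissertation's actual rendering and calibration pipeline is not described here, and the transform and matrix names are hypothetical placeholders.

```python
import numpy as np

def project_plan_to_endoscope(points_ct, T_cam_from_ct, K):
    """Project 3D plan points (in CT/CBCT coordinates) onto the endoscope image.

    points_ct:      (N, 3) array of 3D points from the registered surgical plan.
    T_cam_from_ct:  4x4 rigid transform from CT coordinates to the endoscope
                    camera frame (hypothetical; in a real system this would come
                    from registration plus hand-eye/camera calibration).
    K:              3x3 endoscope camera intrinsic matrix.
    Returns (N, 2) pixel coordinates for overlay rendering.
    """
    pts_h = np.column_stack((points_ct, np.ones(len(points_ct))))  # homogeneous coordinates
    pts_cam = (T_cam_from_ct @ pts_h.T)[:3]                        # into the camera frame
    uv = K @ pts_cam                                               # pinhole projection
    return (uv[:2] / uv[2]).T                                      # normalize by depth
```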

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades and continues to transform surgical interventions, enabling safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of this approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions and thereby improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation includes work encompassing: (i) a series of statistical models to capture shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
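    For orientation, the sketch below shows a generic diffeomorphic Demons deformable registration between a preoperative CT and an intraoperative CBCT using SimpleITK, preceded by histogram matching as a crude intensity normalization. It is only a baseline illustration with assumed file names and parameters; it is not the physics-based or deep learning-based inter-modality methods developed in the dissertation.

```python
import SimpleITK as sitk

# Illustrative inputs (assumed file names): preoperative CT registered deformably
# to an intraoperative CBCT acquired after patient positioning.
fixed = sitk.ReadImage("intraop_cbct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)

# Rough intensity normalization between the two images before Demons, which
# assumes roughly comparable intensities in corresponding tissue.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(128)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()
moving_matched = matcher.Execute(moving, fixed)

# Diffeomorphic Demons: estimates a dense displacement field.
demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)   # Gaussian smoothing of the displacement field
displacement = demons.Execute(fixed, moving_matched)

# Warp the original preoperative CT into the intraoperative frame.
transform = sitk.DisplacementFieldTransform(sitk.Cast(displacement, sitk.sitkVectorFloat64))
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```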