
    2D-3D registration of CT vertebra volume to fluoroscopy projection: A calibration model assessment (doi:10.1155/2010/806094)

    This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. An accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed an accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.
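    The core idea of intensity-based 2D-3D registration can be illustrated with a toy sketch (not the authors' implementation): a DRR is simulated by integrating a rotated CT volume along the projection axis, and a single rotational degree of freedom is recovered by maximising normalized cross-correlation (NCC) against the target projection. The volume, angle, and NCC similarity choice here are all illustrative assumptions.

```python
# Toy sketch of intensity-based 2D/3D registration: recover one in-plane
# rotation angle by maximising NCC between a simulated DRR and a target image.
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import minimize_scalar

def drr(volume, angle_deg):
    """Parallel-beam DRR: rotate the volume in-plane, integrate along axis 0."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return (a * b).mean()

rng = np.random.default_rng(0)
volume = rng.random((16, 32, 32))          # stand-in for the CT vertebra volume
target = drr(volume, 7.0)                   # "fluoroscopic" view at an unknown angle

# Minimise the negative similarity over a bounded angular search range.
res = minimize_scalar(lambda t: -ncc(drr(volume, t), target),
                      bounds=(0.0, 15.0), method="bounded")
```

In the actual study the search is over a full 6-DOF rigid pose and the DRR is a physically modelled projection; the one-parameter version above only shows the optimise-similarity structure of the method.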

    Registration and Segmentation of Multimodality Images for Post Processing of Skeleton in Preclinical Oncology Studies

    Advancements in medical imaging techniques provide biomedical researchers with quality anatomical and functional information inside preclinical subjects in the fields of cancer, osteopathic, cardiovascular, and neurodegenerative research. The throughput of preclinical imaging studies is a critical factor that determines the pace of small-animal medical research. The time involved in manual analysis of large amounts of imaging data prior to data interpretation by the researcher limits the number of studies in a given time frame. In the proposed solution, an automated image segmentation method was used to segment individual vertebrae in mice. Individual vertebrae of the MOBY atlas were manually segmented and registered to the CT data. The PET activity for the L1-L5 vertebrae was measured by applying the CT-registered atlas vertebrae ROIs. The algorithm was tested on three datasets from a PET/CT bone metastasis study using the 18F-NaF radiotracer. The algorithm was found to reduce the analysis time threefold, with the potential to further reduce the automated analysis time by using a computer system with better specifications to run the algorithm. The manual analysis value can vary each time the analysis is performed and is dependent on the individual performing the analysis. The error percentage was also recorded and showed an increasing trend as the analysis moves down the spine from the skull to the caudal vertebrae. This method can be applied to segment the rest of the bone in the CT data and act as the starting point for the registration of the soft tissues.
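    The final quantification step, measuring PET activity inside atlas-derived vertebra ROIs that have already been registered to CT space, reduces to averaging tracer values under a label mask. The sketch below is a minimal, hypothetical illustration of that step only; the array shapes, label IDs, and values are invented.

```python
# Hypothetical sketch: mean PET uptake inside registered atlas vertebra ROIs.
import numpy as np

def roi_activity(pet, labels, roi_ids):
    """Return {roi_id: mean PET value inside that label mask}.

    pet and labels are co-registered 3D arrays of identical shape;
    labels holds one integer ID per atlas ROI (e.g. 1..5 for L1-L5).
    """
    return {r: float(pet[labels == r].mean()) for r in roi_ids}

# Tiny synthetic example: one cube-shaped "vertebra" with uniform uptake.
pet = np.zeros((8, 8, 8))
labels = np.zeros((8, 8, 8), dtype=int)
labels[2:4, 2:4, 2:4] = 1          # pretend L1 mask from the registered atlas
pet[2:4, 2:4, 2:4] = 5.0           # pretend 18F-NaF uptake
activity = roi_activity(pet, labels, [1])
```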

    Fluoroscopic Navigation for Robot-Assisted Orthopedic Surgery

    Robot-assisted orthopedic surgery has gained increasing attention due to its improved accuracy and stability in minimally invasive interventions compared to a surgeon's manual operation. An effective navigation system is critical: it estimates the intra-operative tool-to-tissue pose relationship to guide the robotic surgical device. However, most existing navigation systems use fiducial markers, such as bone-pin markers, to close the calibration loop, which requires a clear line of sight and is not ideal for patients. This dissertation presents fiducial-free, fluoroscopic image-based navigation pipelines for three robot-assisted orthopedic applications: femoroplasty, core decompression of the hip, and transforaminal lumbar epidural injections. We propose custom-designed image intensity-based 2D/3D registration algorithms for pose estimation of bone anatomies, including the femur and spine, and for pose estimation of a rigid surgical tool and a flexible continuum manipulator. We performed system calibration and integration into a surgical robotic platform. We validated the navigation system's performance in comprehensive simulation and ex vivo cadaveric experiments. Our results suggest the feasibility of applying our proposed navigation methods to robot-assisted orthopedic applications. We also investigated machine learning approaches that can benefit medical image analysis, automate navigation components, or address registration challenges. We present a synthetic X-ray data generation pipeline called SyntheX, which enables large-scale machine learning model training. SyntheX was used to train feature detection tasks for the pelvis anatomy and the continuum manipulator, which were used to initialize the registration pipelines. Last but not least, we propose a projective spatial transformer module that learns a convex shape similarity function and extends the registration capture range.
    We believe that our image-based navigation solutions can benefit and inspire related orthopedic robot-assisted system designs and eventually be used in operating rooms to improve patient outcomes.

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment. However, this method can be challenging due to respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate as they are based on the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm that can be used during External Beam Radiation Therapy (EBRT). An accelerated lung biomechanical model can be used during EBRT only if its boundary conditions (BCs) are defined in a way that allows them to be updated in real time. As such, we have developed a lung finite element (FE) model in conjunction with a Neural Network (NN) based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, the chest surface motion was obtained by tracking the motion of the ribcage in the 4D CT images. This was performed to simulate surface motion data that can be acquired using optical tracking systems. Finally, two feedforward NNs were developed, one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm, 2) diaphragm motion modelling using Principal Component Analysis (PCA), 3) developing the lung FE model, and 4) using two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76±0.04 to 0.91±0.01, which is favorable.
    As such, real-time lung tumor tracking during EBRT using the proposed algorithm is feasible. Hence, further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
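    Step 2 of the pipeline, modeling diaphragm motion with PCA, compresses the high-dimensional displacement field of each breathing phase into a few principal-component weights, which a small network can then predict in real time. A minimal sketch of that decomposition, using invented displacement numbers and an SVD-based PCA, is:

```python
# Hypothetical sketch: PCA of diaphragm motion across breathing phases.
import numpy as np

# rows = breathing phases, cols = stacked diaphragm surface displacements (mm);
# values here are invented for illustration.
motions = np.array([[0.0, 0.0, 0.0, 0.0],
                    [1.0, 2.0, 1.0, 0.5],
                    [2.0, 4.0, 2.0, 1.0],
                    [3.0, 6.0, 3.0, 1.5]])

mean = motions.mean(axis=0)
U, S, Vt = np.linalg.svd(motions - mean, full_matrices=False)

# Fraction of motion variance captured by each principal component.
explained = S**2 / (S**2).sum()
# Each phase is then represented by its weights on the leading components:
weights = (motions - mean) @ Vt[0]
```

Because the toy phases lie on a single line of motion, the first component captures essentially all the variance; real diaphragm data would typically need a handful of components.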

    Augmented Reality Ultrasound Guidance in Anesthesiology

    Real-time ultrasound has become a mainstay in many image-guided interventions and increasingly popular in several percutaneous procedures in anesthesiology. One of the main constraints of ultrasound-guided needle interventions is identifying and distinguishing the needle tip from the needle shaft in the image. Augmented reality (AR) environments have been employed to address challenges surrounding surgical tool visualization, navigation, and positioning in many image-guided interventions. The motivation behind this work was to explore the feasibility and utility of such visualization techniques in anesthesiology to address some of the specific limitations of ultrasound-guided needle interventions. This thesis brings together the goals, guidelines, and best development practices of functional AR ultrasound image guidance (AR-UIG) systems, examines the general structure of such systems suitable for applications in anesthesiology, and provides a series of recommendations for their development. The main components of such systems, including ultrasound calibration and system interface design, as well as applications of AR-UIG systems for quantitative skill assessment, were also examined in this thesis. The effects of ultrasound image reconstruction techniques, as well as of phantom material and geometry, on ultrasound calibration were investigated. Ultrasound calibration error was reduced by 10% with synthetic transmit aperture imaging compared with B-mode ultrasound. Phantom properties were shown to have a significant effect on calibration error, which varies with the ultrasound beamforming technique. This finding has the potential to alter how calibration phantoms are designed, taking the ultrasound imaging technique into account. The performance of an AR-UIG guidance system tailored to central line insertions was evaluated in novice and expert user studies.
    While the system outperformed ultrasound-only guidance with novice users, it did not significantly affect the performance of experienced operators. Although the extensive experience of the users with ultrasound may have affected the results, certain aspects of the AR-UIG system contributed to the lackluster outcomes, which were analyzed via a thorough critique of the design decisions. The application of an AR-UIG system in quantitative skill assessment was investigated, and the first quantitative analysis of needle-tip localization error in ultrasound in a simulated central line procedure, performed by experienced operators, is presented. Most participants did not closely follow the needle tip in ultrasound, resulting in 42% unsuccessful needle placements and a 33% complication rate. Compared to successful trials, unsuccessful procedures featured a significantly greater (p=0.04) needle-tip to image-plane distance. Professional experience with ultrasound does not necessarily lead to expert-level performance. Along with deliberate practice, quantitative skill assessment may reinforce clinical best practices in ultrasound-guided needle insertions. Based on the development guidelines, an AR-UIG system was developed to address the challenges in ultrasound-guided epidural injections. For improved needle positioning, this system integrated the A-mode ultrasound signal obtained from a transducer housed at the tip of the needle. Improved needle navigation was achieved via enhanced visualization of the needle in an AR environment, in which B-mode and A-mode ultrasound data were incorporated. The technical feasibility of the AR-UIG system was evaluated in a preliminary user study. The results suggested that the AR-UIG system has the potential to outperform ultrasound-only guidance.
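    The skill metric highlighted above, the needle-tip to image-plane distance, is geometrically just a point-to-plane distance computed from tracked poses. The following minimal sketch (with invented coordinates; the real system derives the plane from the calibrated ultrasound probe pose) shows the computation:

```python
# Hypothetical sketch: needle-tip to ultrasound-image-plane distance metric.
import numpy as np

def tip_to_plane_distance(tip, plane_point, plane_normal):
    """Unsigned distance from a tracked needle tip to the image plane,
    given any point on the plane and the plane's normal vector."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(tip - plane_point, n))

# Invented example: plane through the origin with normal along z,
# needle tip 4 units out of plane.
tip = np.array([3.0, 1.0, 4.0])
plane_point = np.array([0.0, 0.0, 0.0])
plane_normal = np.array([0.0, 0.0, 2.0])
d = tip_to_plane_distance(tip, plane_point, plane_normal)
```

Aggregating this distance over a trial, as in the study, then lets unsuccessful procedures be distinguished from successful ones quantitatively.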

    IMPROVING DAILY CLINICAL PRACTICE WITH ABDOMINAL PATIENT SPECIFIC 3D MODELS

    This thesis proposes methods and procedures to proficiently introduce patient 3D models into daily clinical practice for the diagnosis and treatment of abdominal diseases. The objective of the work is to provide and visualize quantitative geometrical and topological information on the anatomy of interest, and to develop systems that improve radiology and surgery. 3D visualization drastically simplifies the interpretation of medical images and provides benefits in both the diagnostic and the surgical planning phases. Further advantages can be introduced by registering virtual pre-operative information (3D models) with real intra-operative information (patient and surgical instruments). The surgeon can use mixed-reality systems that allow him/her to see covered structures before reaching them, surgical navigators to see the scene (anatomy and instruments) from different points of view, and smart mechatronic devices which, knowing the anatomy, assist him/her in an active way. All these aspects are useful in terms of safety, efficiency, and financial resources for the physicians, for the patient, and for the healthcare system. The entire process, from volumetric radiological image acquisition up to the use of 3D anatomical models inside the surgical room, has been studied, and specific applications have been developed. A segmentation procedure has been designed taking into account acquisition protocols commonly used in radiological departments, and a software tool that allows efficient 3D models to be obtained has been implemented and tested. The alignment problem has been investigated by examining the various sources of error during image acquisition in the radiological department and during the execution of the intervention. A rigid-body registration procedure compatible with the surgical environment has been defined and implemented.
    The procedure has been integrated into a surgical navigation system and serves as the initial registration for more accurate alignment methods based on deformable approaches. Monoscopic and stereoscopic 3D-localization machine-vision routines, using laparoscopic and/or generic camera images, have been implemented to obtain intra-operative information that can be used to model abdominal deformations. Furthermore, using this information for fusion and registration purposes enhances the potential of computer-assisted surgery. In particular, a precise alignment between virtual and real anatomies for mixed-reality purposes, and the development of tracker-free navigation systems, has been achieved by elaborating video images and providing an analytical adaptation of the virtual camera to the real camera. Clinical tests demonstrating the usability of the proposed solutions are reported. The test results, and the appreciation expressed by radiologists and surgeons for the proposed prototypes, encourage their integration into daily clinical practice and future developments.
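    A standard way to implement the rigid-body registration step described above, not necessarily the one used in this thesis, is the SVD-based Kabsch method, which finds the least-squares rotation and translation between paired pre-operative and intra-operative 3D points (e.g. anatomical landmarks or markers). The point coordinates below are invented for illustration:

```python
# Sketch of paired-point rigid registration via the Kabsch/SVD method.
import numpy as np

def rigid_register(src, dst):
    """Return (R, t) minimizing sum ||R @ s_i + t - d_i||^2 over paired points."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic check: rotate and translate four landmarks, then recover the pose.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

In a navigation setting this closed-form estimate is exactly the kind of fast initial alignment that deformable methods can then refine.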

    Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system

    A new image-guided microscope system using augmented reality image overlays has been developed. With this system, CT cut-views and segmented objects such as tumors that have previously been extracted from preoperative tomographic images can be displayed directly as augmented reality overlays on the microscope image. The novelty of this design stems from the inclusion of a precise mini-tracker directly on the microscope. This device, which is rigidly mounted to the microscope, is used to track the movements of surgical tools and the patient. In addition to a gain in accuracy, this setup offers improved ergonomics, since it is much easier for the surgeon to keep an unobstructed line of sight to tracked objects. We describe the components of the system: microscope calibration, image registration, tracker assembly and registration, tool tracking, and augmented reality display. The accuracy of the system has been measured by validation on plastic skulls and cadaver heads, obtaining an overlay error of 0.7 mm. In addition, a numerical simulation of the system has been performed to complement the accuracy study, showing that integrating the tracker onto the microscope could improve the accuracy to the order of 0.5 mm. Finally, we describe our clinical experience using the system in the operating room, where three operations have been performed to date.
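    At the heart of such an overlay display, a tracked 3D point (e.g. on a segmented tumor) is projected into the calibrated microscope image with a pinhole camera model; the overlay error quoted above is the mismatch between this projection and the point's true image location. The intrinsics, pose, and point below are assumed values for illustration only:

```python
# Sketch of the AR overlay step: pinhole projection of a tracked 3D point.
import numpy as np

# Assumed intrinsic matrix from microscope calibration:
# focal length 800 px, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, R, t, point3d):
    """Project a 3D point (tracker coordinates) into pixel coordinates,
    given the camera pose (R, t) from tracker to camera frame."""
    p_cam = R @ point3d + t          # transform into the camera frame
    p = K @ p_cam                     # apply intrinsics
    return p[:2] / p[2]               # perspective divide

# A point on the optical axis, 100 mm in front of the camera,
# should land exactly on the principal point.
R, t = np.eye(3), np.array([0.0, 0.0, 100.0])
pixel = project(K, R, t, np.array([0.0, 0.0, 0.0]))
```

Comparing such projected positions against ground-truth image locations of known landmarks is one way an overlay error figure like 0.7 mm can be obtained.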