
    Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning

    Interventional C-arm imaging is crucial to percutaneous orthopedic procedures, as it enables the surgeon to monitor the progress of surgery at the anatomical level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free "technician-in-the-loop" Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can reduce the average of 2.76 X-ray images required per desired view to zero, suggesting substantial reductions in radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for the surgery rooms of the future. The concept of technician-in-the-loop design will become relevant to various interventions, considering the expected advancements in sensing and wearable computing in the near future.
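    The repositioning cue in such a system reduces to the rigid-body offset between the live C-arm pose and the recorded target pose. As a minimal sketch (not the authors' implementation), assuming both poses are available as 4x4 homogeneous transforms in a common tracker frame:

```python
import numpy as np

def pose_error(T_current, T_target):
    """Relative transform from the current C-arm pose to the target pose.

    Both poses are 4x4 homogeneous transforms in the tracker frame.
    Returns the translation error (mm) and rotation error (degrees).
    """
    T_rel = np.linalg.inv(T_current) @ T_target
    trans_err = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle recovered from the trace of the 3x3 rotation block.
    cos_theta = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_theta))
    return trans_err, rot_err
```

    Errors of this form could drive the virtual-object overlay, or a simple numeric readout, as the technician maneuvers the C-arm back toward the recorded view.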

    Computer- and robot-assisted Medical Intervention

    Medical robotics includes assistive devices used by the physician to make diagnostic or therapeutic practice easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims and components, and describes the place of robots in that context. The evolution of general design and control paradigms in the development of medical robots is presented, and issues specific to this application domain are discussed. An overview of existing systems, ongoing developments, and future trends is given, and a case study is detailed. Other types of robotic assistance in the medical environment (such as aids for disabled persons, rehabilitation of patients, or replacement of damaged limbs or organs) are outside the scope of this chapter.
    Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00

    Semi-robotic 6 degree of freedom positioning for intracranial high precision radiotherapy; first phantom and clinical results

    Background: To introduce a novel method of patient positioning for high-precision intracranial radiotherapy.
    Methods: An infrared (IR) array, reproducibly attached to the patient via a vacuum mouthpiece (vMP) and connected to the table via a 6-degree-of-freedom (DoF) mechanical arm, serves as the positioning and fixation system. After IR-based manual prepositioning to a rough treatment position and fixation of the mechanical arm, a cone-beam CT (CBCT) is performed. A robotic 6-DoF treatment couch (HexaPOD™) then automatically corrects all remaining translations and rotations. The absolute position of the infrared markers at the first fraction acts as the reference for the following fractions, where patients are manually prepositioned to within ± 2 mm and ± 2° of this IR reference position prior to final HexaPOD-based correction; consequently, CBCT imaging is required only once, at the first treatment fraction. The preclinical feasibility and attainable repositioning accuracy of this method were evaluated on a phantom and human volunteers, as was the clinical efficacy on 7 pilot-study patients.
    Results: Phantom and volunteer manual IR-based prepositioning to within ± 2 mm and ± 2° in 6 DoF was possible within a mean (± SD) of 90 ± 31 and 56 ± 22 seconds, respectively. Mean phantom translational and rotational precision after 6-DoF corrections by the HexaPOD was 0.2 ± 0.2 mm and 0.7 ± 0.8°, respectively. For the patient collective, the mean 3D vector for inter-treatment repositioning accuracy (n = 102) was 1.6 ± 0.8 mm, while intra-fraction movement (n = 110) was 0.6 ± 0.4 mm.
    Conclusions: This novel semi-automatic 6-DoF IR-based system compares favourably with existing non-invasive intracranial repeat fixation systems with respect to handling, reproducibility and, more importantly, intra-fraction rigidity. Further advantages are full cranial positioning flexibility for single and fractionated IGRT treatments and possibly increased patient comfort.
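    The ± 2 mm / ± 2° manual prepositioning window described above amounts to a simple 6-DoF tolerance check before the HexaPOD applies its fine correction. A hypothetical sketch (tolerance values from the abstract; the interface is invented for illustration):

```python
import numpy as np

def within_preposition_tolerance(offset_6dof, t_tol_mm=2.0, r_tol_deg=2.0):
    """Check whether a 6-DoF offset (tx, ty, tz in mm; rx, ry, rz in deg)
    relative to the infrared reference pose lies inside the manual
    prepositioning window, after which the robotic couch takes over."""
    t = np.abs(offset_6dof[:3])   # translational components, mm
    r = np.abs(offset_6dof[3:])   # rotational components, degrees
    return bool(np.all(t <= t_tol_mm) and np.all(r <= r_tol_deg))
```

    In practice such a check would be evaluated continuously against the live IR marker readings until all six components fall inside the window.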

    System Integration of C-Arm Robotic Prototype Using Motion Capture Guidance for Accurate Repositioning

    One of the important surgical tools in spinal surgery is the C-Arm X-ray system. The C-Arm is a large, "C"-shaped, manually maneuvered arm that gives surgeons and X-ray technicians the ability to take quick, high-quality X-rays during surgery. Because of its five degrees of freedom, the C-Arm can be manually maneuvered around the patient to provide many angles and perspectives, supporting surgical success. This system works well for most surgical procedures but falls short when the C-Arm must be moved out of the way during complicated procedures. The aim of this thesis is to develop an accurate repositioning method using motion capture technology, a novel approach to creating an integrated repositioning system. To develop it, a set of research tasks was completed. First, a virtual prototype and a virtual platform were developed that quantified the dynamics of C-Arm maneuvering. Next, a complete kinematic model of the C-Arm was developed. Third, a fully automatic robotic C-Arm prototype was designed and manufactured to serve as a stand-in for the actual C-Arm. Finally, the robotic prototype, the virtual platform, and the kinematic model were systematically integrated using a Vicon motion capture system to perform automatic repositioning of the C-Arm. Testing of the newly developed repositioning system was completed with successful results.
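    A kinematic model of the kind described can be expressed as a chain of elementary joint transforms from base to image intensifier. The sketch below is a generic forward-kinematics helper, not the thesis's actual model; a five-DoF mobile C-arm layout (e.g. vertical lift, horizontal travel, wig-wag, tilt, orbit) would be encoded as the `joints` list:

```python
import numpy as np

def joint_transform(kind, value):
    """4x4 homogeneous transform for one joint.

    kind: 'Rx', 'Ry', 'Rz' (rotation, radians) or 'Tx', 'Ty', 'Tz'
    (translation, consistent length unit).
    """
    T = np.eye(4)
    i = 'xyz'.index(kind[1])
    if kind[0] == 'T':
        T[i, 3] = value
        return T
    # Rotation about axis i, filled into the two remaining rows/columns.
    c, s = np.cos(value), np.sin(value)
    j, k = (i + 1) % 3, (i + 2) % 3
    T[j, j], T[j, k] = c, -s
    T[k, j], T[k, k] = s, c
    return T

def forward_kinematics(joints):
    """Chain joint transforms base-to-tool for a serial kinematic model."""
    T = np.eye(4)
    for kind, value in joints:
        T = T @ joint_transform(kind, value)
    return T
```

    With the model in this form, repositioning amounts to solving for the joint values whose forward kinematics reproduce the pose recorded by the motion capture system.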

    Acute Angle Repositioning in Mobile C-Arm Using Image Processing and Deep Learning

    During surgery, medical practitioners rely on the mobile C-Arm medical X-ray system (C-Arm) and its fluoroscopic functions both to perform the surgery and to validate the outcome. Currently, technicians reposition the C-Arm arbitrarily through estimation and guesswork. In cases where the positioning and repositioning of the C-Arm are critical for surgical assessment, uncertainties in the angular position of the C-Arm components hinder surgical performance. This thesis proposes an integrated approach to automatically reposition C-Arms during critically acute movements in orthopedic surgery. Robot vision and control with deep learning are used to determine the necessary angles of rotation for the desired C-Arm repositioning. More specifically, a convolutional neural network is trained to detect and classify internal bodily structures. Image generation using the fast Fourier transform and Monte Carlo simulation is included to improve the robustness of the neural network's training. Matching control points between a reference X-ray image and a test X-ray image allows for the determination of the projective transformation relating the images. From the projective transformation matrix, the tilt and orbital angles of rotation of the C-Arm are calculated. Key results indicate that the proposed method successfully repositions mobile C-Arms to a desired position within 8.9% error for the tilt and 3.5% error for the orbit. As a result, the guesswork entailed in fine C-Arm repositioning is replaced by a more refined method. Ultimately, confidence in C-Arm positioning and repositioning is reinforced, and surgical performance with the C-Arm is improved.
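    The projective-transformation step can be sketched with the standard direct linear transform (DLT): given matched control points between the reference and test X-ray images, fit the 3x3 homography whose decomposition then yields the rotation angles. This is a textbook sketch of homography fitting, not the thesis's pipeline:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit the 3x3 projective transformation H
    mapping matched control points src -> dst (each Nx2, N >= 4,
    not all collinear). Solves A h = 0 via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

    With noisy control points one would use more correspondences and a robust estimator (e.g. RANSAC) around the same linear core.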

    Virtual Reality Aided Mobile C-arm Positioning for Image-Guided Surgery

    Image-guided surgery (IGS) is a minimally invasive procedure based on a pre-operative volume in conjunction with intra-operative X-ray images, which are commonly captured by mobile C-arms for the confirmation of surgical outcomes. Although some commercial navigation systems are currently employed, one critical issue with such systems is that they neglect the radiation exposure to the patient and surgeons. In practice, when one surgical stage is finished, several X-ray images have to be acquired repeatedly by the mobile C-arm to obtain the desired image. Excessive radiation exposure may increase the risk of complications. It is therefore necessary to develop a positioning system for mobile C-arms that achieves one-time imaging and avoids the additional radiation exposure. In this dissertation, a mobile C-arm positioning system is proposed with the aid of virtual reality (VR). The surface model of the patient is reconstructed by a camera mounted on the mobile C-arm. A novel registration method is proposed to align this model with the pre-operative volume based on a tracker, so that surgeons can visualize the hidden anatomy directly from the outside view and determine a reference pose for the C-arm. Considering the congested operating room, the C-arm is modeled as a manipulator with a movable base to maneuver the image intensifier to the desired pose. In the registration procedure above, intensity-based 2D/3D registration is used to transform the pre-operative volume into the coordinate system of the tracker. Although it provides high accuracy, its small capture range hinders clinical use because a good initial guess is required. To address this problem, a robust and fast initialization method is proposed, combining automatic tracking-based initialization with multi-resolution estimation in the frequency domain. This hardware-software integrated approach provides almost optimal transformation parameters for the intensity-based registration.
    To determine the pose of the mobile C-arm, high-quality visualization is necessary to locate the pathology in the hidden anatomy. A novel dimensionality reduction method based on sparse representation is proposed for the design of multi-dimensional transfer functions in direct volume rendering. It not only achieves performance similar to conventional methods, but can also handle large data sets.
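    A common frequency-domain building block for registration initialization of the kind mentioned above is phase correlation, which recovers the translation between two images from the normalized cross-power spectrum. A single-resolution sketch (the dissertation's multi-resolution scheme is more involved):

```python
import numpy as np

def phase_correlation(ref, test):
    """Estimate the integer-pixel translation taking `ref` to `test`
    (same-sized 2D arrays) via the normalized cross-power spectrum."""
    F_ref = np.fft.fft2(ref)
    F_test = np.fft.fft2(test)
    cross = F_test * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross).real  # impulse at the translation offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

    Running such an estimator coarse-to-fine on an image pyramid is one way to obtain the near-optimal starting transform that the intensity-based 2D/3D registration needs.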

    Image-guided surgery and medical robotics in the cranial area

    Surgery in the cranial area involves complex anatomic situations with high-risk structures and high demands on functional and aesthetic results. Conventional surgery requires that the surgeon transfer complex anatomic and surgical planning information using spatial sense and experience; the surgical procedure depends entirely on the manual skills of the operator. The development of image-guided surgery provides revolutionary new opportunities by integrating presurgical 3D imaging with intraoperative manipulation. Augmented reality, mechatronic surgical tools, and medical robotics may further advance surgical instrumentation and, ultimately, surgical care. The aim of this article is to review and discuss state-of-the-art surgical navigation and medical robotics, image-to-patient registration, aspects of accuracy, and clinical applications for surgery in the cranial area.

    Optimized Planning and Image-Guided Delivery of Intensity-Modulated Radiotherapy
