
    Detection and 3D Localization of Surgical Instruments for Image-Guided Surgery

    Placement of surgical instrumentation in pelvic trauma surgery is challenged by complex anatomy and narrow bone corridors, relying on intraoperative x-ray fluoroscopy for visualization and guidance. The rapid workflow and cost constraints of orthopaedic trauma surgery have largely prohibited widespread adoption of 3D surgical navigation. This thesis reports the development and evaluation of a method to achieve 3D guidance via automatic detection and localization of surgical instruments (specifically, Kirschner wires [K-wires]) in fluoroscopic images acquired within routine workflow. The detection method uses a neural network (Mask R-CNN) for segmentation and keypoint detection of K-wires in fluoroscopy, and correspondence of keypoints among multiple images is established by 3D backprojection and rank-ordering of ray intersections. The accuracy of 3D K-wire localization was evaluated in a laboratory cadaver study as well as in patient images drawn from an IRB-approved clinical study. The detection network successfully generalized from simulated training and validation images to cadaver and clinical images, achieving 87% recall and 98% precision. The geometric accuracy of K-wire tip location and direction in 2D fluoroscopy was 1.9 ± 1.6 mm and 1.8° ± 1.3°, respectively. Simulation studies demonstrated a corresponding mean error of 1.1 mm in 3D tip location and 2.3° in 3D direction. Cadaver and clinical studies demonstrated the feasibility of the approach in real data, although accuracy was reduced to 1.7 ± 0.7 mm in 3D tip location and 6° ± 2° in 3D direction. Future studies aim to improve performance by increasing the volume and variety of images used in neural network training, particularly with respect to low-dose fluoroscopy (high noise levels) and complex fluoroscopic scenes with various types of surgical instrumentation. Because the approach involves fast runtime and uses equipment (a mobile C-arm) and fluoroscopic images that are common in standard workflow, it may be suitable for broad utilization in orthopaedic trauma surgery.
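
    As a rough illustration of the backprojection-and-intersection step described in the abstract (a minimal sketch, not the thesis implementation), the snippet below triangulates a K-wire tip from keypoints detected in two fluoroscopic views: each detected tip is backprojected as a ray from the x-ray source through the detector plane, and the midpoint of the shortest segment between the two rays serves as the 3D estimate. The source positions, detector coordinates, and two-view geometry are assumed placeholder values; the residual gap between rays is the kind of quantity a rank-ordering of candidate intersections could use to resolve keypoint correspondence.

```python
import numpy as np

def backproject_ray(source_pos, detector_point):
    """Return a ray (origin, unit direction) from the x-ray source through a
    detected keypoint expressed as a 3D point on the detector plane."""
    direction = detector_point - source_pos
    return source_pos, direction / np.linalg.norm(direction)

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment connecting two skew rays.
    Used here as the triangulated 3D estimate of a K-wire tip."""
    # Solve for scalars t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    w0 = p1 - p2
    d, e = np.dot(d1, w0), np.dot(d2, w0)
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = p1 + t1 * d1
    q2 = p2 + t2 * d2
    gap = np.linalg.norm(q1 - q2)  # residual distance between the rays
    return (q1 + q2) / 2.0, gap

# Hypothetical two-view C-arm geometry (all values are illustrative, in mm).
source_a = np.array([0.0, 0.0, 600.0])            # x-ray source, view A
tip_on_detector_a = np.array([12.0, -4.0, 0.0])   # detected tip keypoint, view A
source_b = np.array([400.0, 0.0, 450.0])          # x-ray source, view B
tip_on_detector_b = np.array([-180.0, -3.0, 0.0]) # detected tip keypoint, view B

pa, da = backproject_ray(source_a, tip_on_detector_a)
pb, db = backproject_ray(source_b, tip_on_detector_b)
tip_3d, residual = closest_point_between_rays(pa, da, pb, db)
print("estimated 3D tip:", tip_3d, "ray gap (mm):", residual)
```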

    Acute Angle Repositioning in Mobile C-Arm Using Image Processing and Deep Learning

    During surgery, medical practitioners rely on the mobile C-Arm medical x-ray system (C-Arm) and its fluoroscopic functions not only to perform the surgery but also to validate the outcome. Currently, technicians reposition the C-Arm through estimation and guesswork. In cases where the positioning and repositioning of the C-Arm are critical for surgical assessment, uncertainties in the angular position of the C-Arm components hinder surgical performance. This thesis proposes an integrated approach to automatically reposition C-Arms during critical acute-angle movements in orthopedic surgery. Robot vision and control with deep learning are used to determine the angles of rotation needed for a desired C-Arm repositioning. More specifically, a convolutional neural network is trained to detect and classify internal bodily structures. Image generation using the fast Fourier transform and Monte Carlo simulation is included to improve the robustness of neural network training. Matching control points between a reference x-ray image and a test x-ray image allows determination of the projective transformation relating the images. From the projective transformation matrix, the tilt and orbital angles of rotation of the C-Arm are calculated. Key results indicate that the proposed method repositions mobile C-Arms to a desired position within 8.9% error for the tilt angle and 3.5% error for the orbital angle. As a result, the guesswork entailed in fine C-Arm repositioning is replaced by a more refined method; confidence in C-Arm positioning and repositioning is reinforced, and surgical performance with the C-Arm is improved.
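
    As a rough sketch of the control-point-to-angle pipeline described in the abstract (an assumed workflow, not the thesis code), the example below estimates the projective transformation between a reference and a test image from matched control points with OpenCV's findHomography, decomposes it with decomposeHomographyMat under an assumed intrinsic matrix K, and converts a candidate rotation matrix to tilt and orbit angles under an assumed Euler convention. The control points and calibration values are placeholders for illustration only.

```python
import numpy as np
import cv2

# Matched control points (x, y) in the reference and test fluoroscopic images.
# In practice these would come from detected anatomical structures; the values
# below are placeholders for illustration.
ref_pts = np.array([[100, 120], [400, 110], [390, 380], [110, 400],
                    [250, 250], [320, 180]], dtype=np.float32)
test_pts = np.array([[112, 118], [408, 130], [380, 395], [105, 412],
                     [255, 262], [328, 190]], dtype=np.float32)

# Projective transformation (homography) relating the two images,
# estimated robustly with RANSAC.
H, inlier_mask = cv2.findHomography(ref_pts, test_pts, cv2.RANSAC, 5.0)

# Assumed intrinsic matrix (focal length and principal point in pixels);
# real values would come from C-arm system calibration.
K = np.array([[2000.0, 0.0, 256.0],
              [0.0, 2000.0, 256.0],
              [0.0, 0.0, 1.0]])

# Decompose the homography into candidate rotations; each candidate can be
# converted to tilt/orbit angles under a chosen convention.
num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)

def rotation_to_tilt_orbit(R):
    """Extract two rotation angles (degrees) from a rotation matrix, reading
    rotation about x as tilt and about y as orbit (assumed convention)."""
    tilt = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    orbit = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
    return tilt, orbit

for R in rotations:
    print(rotation_to_tilt_orbit(np.asarray(R)))
```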