12 research outputs found

    Automatic 3D Postoperative Evaluation of Complex Orthopaedic Interventions

    Get PDF
    In clinical practice, image-based postoperative evaluation is still performed without state-of-the-art computer methods, as these are not sufficiently automated. In this study we propose a fully automatic 3D postoperative outcome quantification method for the relevant steps of orthopaedic interventions on the example of Periacetabular Osteotomy of Ganz (PAO). A typical orthopaedic intervention involves cutting bone, anatomy manipulation and repositioning as well as implant placement. Our method includes a segmentation-based deep learning approach for detection and quantification of the cuts. Furthermore, anatomy repositioning was quantified through a multi-step registration method, which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy. Implant (i.e., screw) position was identified by 3D Hough transform for line detection combined with fast voxel traversal based on ray tracing. The feasibility of our approach was investigated on 27 interventions and compared against manually performed 3D outcome evaluations. The results show that our method can accurately assess the quality and accuracy of the surgery. Our evaluation of the fragment repositioning showed a cumulative error for the coarse and fine alignment of 2.1 mm. Our evaluation of screw placement accuracy resulted in a distance error of 1.32 mm for screw head location and an angular deviation of 1.1° for screw axis. As a next step we will explore generalisation capabilities by applying the method to different interventions. ISSN:2313-433
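The abstract's "fast voxel traversal based on ray tracing" is commonly implemented as an Amanatides–Woo grid walk. The abstract gives no implementation details, so the following is a minimal sketch of that standard algorithm, not the authors' code; the function name and the unit-spaced grid are assumptions.

```python
import math

def traverse_voxels(origin, direction, n_steps):
    """Walk a unit-spaced voxel grid along a ray (Amanatides-Woo style).

    Yields the integer (x, y, z) indices of the first n_steps voxels
    crossed by the ray starting at `origin` with direction `direction`.
    """
    voxel = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [], [], []
    for c, d in zip(origin, direction):
        step.append(1 if d > 0 else -1)
        if d == 0:
            t_max.append(math.inf)       # never crosses a boundary on this axis
            t_delta.append(math.inf)
        else:
            boundary = math.floor(c) + (1 if d > 0 else 0)
            t_max.append((boundary - c) / d)  # ray parameter at next boundary
            t_delta.append(abs(1.0 / d))      # parameter increase per voxel
    for _ in range(n_steps):
        yield tuple(voxel)
        axis = t_max.index(min(t_max))        # axis whose boundary is hit first
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```

Walking voxels in boundary-crossing order like this is what makes tracing a detected screw axis through a CT volume cheap: each step is a constant-time comparison, with no per-voxel intersection tests.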

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies

    Full text link
    Open Access Article: Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies—A Feasibility Study on Cadavers, by Joëlle Ackermann 1,2,†, Florentin Liebmann 1,2,*,†, Armando Hoch 3, Jess G. Snedeker 2,3, Mazda Farshad 3, Stefan Rahm 3, Patrick O. Zingg 3 and Philipp Fürnstahl 1.
    Affiliations: 1 Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland; 2 Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland; 3 Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland. * Author to whom correspondence should be addressed. † These authors contributed equally to this work. Academic Editor: Jiro Tanaka.
    Appl. Sci. 2021, 11(3), 1228; https://doi.org/10.3390/app11031228. Received: 20 December 2020 / Revised: 13 January 2021 / Accepted: 25 January 2021 / Published: 29 January 2021. (This article belongs to the Special Issue Artificial Intelligence (AI) and Virtual Reality (VR) in Biomechanics.)
    Abstract: Augmented reality (AR)-based surgical navigation may offer new possibilities for safe and accurate surgical execution of complex osteotomies. In this study we investigated the feasibility of navigating the periacetabular osteotomy of Ganz (PAO), known as one of the most complex orthopedic interventions, on two cadaveric pelves under realistic operating room conditions. Preoperative planning was conducted on computed tomography (CT)-reconstructed 3D models using an in-house developed software, which allowed creating cutting plane objects for planning of the osteotomies and reorientation of the acetabular fragment. An AR application was developed comprising point-based registration, motion compensation and guidance for osteotomies as well as fragment reorientation. Navigation accuracy was evaluated on CT-reconstructed 3D models, resulting in an error of 10.8 mm for osteotomy starting points and 5.4° for osteotomy directions. The reorientation errors were 6.7°, 7.0° and 0.9° for the x-, y- and z-axis, respectively. Average postoperative error of LCE angle was 4.5°. Our study demonstrated that the AR-based execution of complex osteotomies is feasible. Fragment realignment navigation needs further improvement, although it is more accurate than the state of the art in PAO surgery.
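The abstract mentions point-based registration without implementation details. A common choice for aligning a small set of corresponding landmarks is the least-squares rigid fit (Kabsch/Umeyama); the sketch below shows that generic method under the assumption of known point correspondences, and is not the authors' implementation.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (Kabsch): R @ p + t maps source to target.

    source, target: (N, 3) arrays of corresponding 3D landmarks.
    Returns rotation matrix R (3, 3) and translation vector t (3,).
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation, not a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

With at least three non-collinear landmarks this gives a closed-form optimal alignment, which is why point-based registration is a popular way to anchor an AR scene to patient anatomy.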

    Computer-assisted femoral head reduction osteotomies: an approach for anatomic reconstruction of severely deformed Legg-Calvé-Perthes hips. A pilot study of six patients

    Get PDF
    Background: Legg–Calvé–Perthes (LCP) is a common orthopedic childhood disease that causes a deformity of the femoral head and an adaptive deformity of the acetabulum. The altered joint biomechanics can result in early joint degeneration that requires total hip arthroplasty. In 2002, Ganz et al. introduced the femoral head reduction osteotomy (FHRO) as a direct joint-preserving treatment. The procedure remains one of the most challenging in hip surgery. Computer-based 3D preoperative planning and patient-specific navigation instruments have been successfully used to reduce technical complexity in other anatomies. The purpose of this study was to report the first results in the treatment of 6 patients to investigate whether our approach is feasible and safe.
    Methods: In this retrospective pilot study, 6 LCP patients were treated with FHRO in multiple centers between May 2017 and June 2019. Based on patient-specific 3D models of the hips, the surgeries were simulated in a step-wise fashion. Patient-specific instruments tailored for FHRO were designed, 3D-printed and used in the surgeries for navigating the osteotomies. The results were assessed radiographically [diameter index, sphericity index, Stulberg classification, extrusion index, LCE-, Tönnis-, CCD-angle and Shenton line] and the time and costs were recorded. Radiologic values were tested for normal distribution using the Shapiro–Wilk test and for significance using the Wilcoxon signed-rank test.
    Results: The sphericity index improved postoperatively by 20% (p = 0.028). The postoperative diameter of the femoral head differed by only 1.8% (p = 0.043) from the contralateral side and Stulberg grading improved from poor coxarthrosis outcome to good outcome (p = 0.026). All patients underwent acetabular reorientation by periacetabular osteotomy. The average time (in minutes) for preliminary analysis, computer simulation and patient-specific instrument design was 63 (±48), 156 (±64) and 105 (±68.5), respectively.
    Conclusion: The clinical feasibility of our approach to FHRO has been demonstrated. The results showed significant improvement compared to the preoperative situation. All operations were performed by experienced surgeons; nevertheless, three complications occurred, showing that FHRO remains one of the most complex hip surgeries even with computer assistance. However, none of the complications were directly related to the simulation or the navigation technique. ISSN:1471-247
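The study tests n = 6 paired radiologic values with the Wilcoxon signed-rank test. For samples this small an exact permutation version of the test is appropriate; the self-contained sketch below enumerates all sign patterns. It is not the authors' code (in practice one would likely use scipy.stats.wilcoxon), just an illustration of what the reported p-values mean.

```python
from itertools import product

def wilcoxon_exact(diffs):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    diffs: paired differences (zero differences are dropped, as is
    conventional). Returns (W+, two-sided exact p-value).
    """
    d = [x for x in diffs if x != 0]
    n = len(d)
    # Rank the absolute differences, averaging the ranks of ties.
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # average of the 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    # Under H0 every sign pattern is equally likely: enumerate all 2**n of them.
    stats = [sum(r for r, s in zip(ranks, signs) if s)
             for signs in product([False, True], repeat=n)]
    mean = n * (n + 1) / 4
    p = sum(abs(s - mean) >= abs(w_plus - mean) for s in stats) / len(stats)
    return w_plus, p
```

Note that with n = 6 the smallest achievable two-sided exact p-value is 2/64 ≈ 0.031, consistent with the magnitude of the p-values reported above.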

    A New Approach to Orthopedic Surgery Planning Using Deep Reinforcement Learning and Simulation

    Full text link
    Computer-assisted orthopedic interventions require surgery planning based on patient-specific three-dimensional anatomical models. The state of the art has addressed the automation of this planning process either through mathematical optimization or supervised learning, the former requiring a handcrafted objective function and the latter sufficient training data. In this paper, we propose a completely model-free and automatic surgery planning approach for femoral osteotomies based on Deep Reinforcement Learning which is capable of generating clinical-grade solutions without needing patient data for training. One of our key contributions is that we solve the real-world task in a simulation environment tailored to orthopedic interventions based on an analytical representation of real patient data, in order to overcome convergence, noise, and dimensionality problems. An agent was trained on simulated anatomy based on Proximal Policy Optimization and inference was performed on real patient data. A qualitative evaluation with expert surgeons and a complementary quantitative analysis demonstrated that our approach was capable of generating clinical-grade planning solutions from unseen data of eleven patient cases. In eight cases, a direct comparison to clinical gold standard (GS) planning solutions was performed, showing our approach to perform equally well or better in 80% (surgeon 1) and 100% (surgeon 2) of the cases, respectively.
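The abstract names Proximal Policy Optimization (PPO) as the training algorithm. Its core is the clipped surrogate objective (Schulman et al., 2017); the sketch below shows that standard loss in isolation. Array shapes and the epsilon value are generic PPO conventions, not details taken from the paper.

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective of PPO (loss to minimise).

    new_logp, old_logp: log-probabilities of the taken actions under the
    current and the data-collecting policy; advantages: estimated advantages.
    """
    ratio = np.exp(new_logp - old_logp)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # Pessimistic bound: taking the minimum removes the incentive to move
    # the policy ratio outside [1 - eps, 1 + eps].
    return -np.mean(np.minimum(unclipped, clipped))
```

The clipping is what keeps policy updates small and stable, which matters when the reward comes from a noisy surgical simulation environment.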