62 research outputs found

    Computer assistance in orthopaedic surgery


    Augmented reality for computer assisted orthopaedic surgery

In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, driven by emerging display technologies including augmented reality (AR): a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are currently limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the 2D user interface of an existing CAOS system onto a virtual AR screen and investigating the resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets against optical trackers by calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty, providing 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results: AR was successfully deployed to CAOS systems, with promising indications that it can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
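The headset-to-tracker calibration described above amounts to composing rigid transforms through a reference visible to both systems. Below is a minimal sketch of that composition, assuming 4x4 homogeneous poses of a shared infrared marker as reported by the headset and by the optical tracker; all names are illustrative, not the thesis's actual API:

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def holo_from_surg(T_holo_marker, T_surg_marker):
    """Transform mapping surgical (optical-tracker) coordinates into the
    holographic (headset) frame, via a marker tracked by both systems."""
    return T_holo_marker @ np.linalg.inv(T_surg_marker)

# A point measured in the surgical frame can then be placed as a hologram:
# p_holo = (holo_from_surg(T_hm, T_sm) @ np.append(p_surg, 1.0))[:3]
```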

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies

Open Access Article: Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies—A Feasibility Study on Cadavers, by Joëlle Ackermann, Florentin Liebmann, Armando Hoch, Jess G. Snedeker, Mazda Farshad, Stefan Rahm, Patrick O. Zingg and Philipp Fürnstahl (Research in Orthopedic Computer Science and Department of Orthopedics, Balgrist University Hospital, University of Zurich; Laboratory for Orthopaedic Biomechanics, ETH Zurich). Appl. Sci. 2021, 11(3), 1228; https://doi.org/10.3390/app11031228. Abstract: Augmented reality (AR)-based surgical navigation may offer new possibilities for safe and accurate surgical execution of complex osteotomies. In this study, we investigated the feasibility of navigating the periacetabular osteotomy of Ganz (PAO), known as one of the most complex orthopedic interventions, on two cadaveric pelves under realistic operating room conditions. Preoperative planning was conducted on computed tomography (CT)-reconstructed 3D models using in-house developed software, which allowed the creation of cutting-plane objects for planning the osteotomies and reorienting the acetabular fragment. An AR application was developed comprising point-based registration, motion compensation, and guidance for the osteotomies as well as fragment reorientation. Navigation accuracy was evaluated on CT-reconstructed 3D models, resulting in an error of 10.8 mm for osteotomy starting points and 5.4° for osteotomy directions. The reorientation errors were 6.7°, 7.0° and 0.9° for the x-, y- and z-axes, respectively. The average postoperative error of the LCE angle was 4.5°. Our study demonstrated that AR-based execution of complex osteotomies is feasible. Fragment realignment navigation needs further improvement, although it is already more accurate than the state of the art in PAO surgery.
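The accuracy figures reported above (starting-point error in mm, direction error in degrees) follow from comparing planned against executed cutting planes with basic vector algebra. A hedged sketch, with function and argument names chosen here purely for illustration:

```python
import numpy as np

def osteotomy_errors(p_plan, n_plan, p_exec, n_exec):
    """Starting-point error (mm) and cutting-plane direction error (degrees)
    between a planned and an executed osteotomy."""
    start_err = np.linalg.norm(np.asarray(p_exec) - np.asarray(p_plan))
    n1 = np.asarray(n_plan) / np.linalg.norm(n_plan)
    n2 = np.asarray(n_exec) / np.linalg.norm(n_exec)
    # abs() makes the metric invariant to the arbitrary sign of a plane normal
    angle_err = np.degrees(np.arccos(np.clip(abs(n1 @ n2), 0.0, 1.0)))
    return start_err, angle_err
```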

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered only as a visualization device that improves traditional workflows. Consequently, the technology has not gained the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
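Exploiting "the geometric and physical characteristics of X-ray imaging" typically starts from a pinhole-style projection model relating 3D anatomy to detector pixels. A simplified sketch, ignoring distortion and detector specifics, with illustrative names not drawn from the dissertation:

```python
import numpy as np

def project_xray(K, T_cam_world, p_world):
    """Project a 3D point (world frame) into detector pixels through an
    idealised pinhole model: K is the 3x3 intrinsic matrix, T_cam_world
    the 4x4 pose of the world in the X-ray source (camera) frame."""
    p_cam = T_cam_world @ np.append(p_world, 1.0)
    u, v, w = K @ p_cam[:3]
    return np.array([u / w, v / w])  # pixel coordinates
```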

    A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom

Purpose: The benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. Materials and methods: A two-phase evaluation process was adopted in a simulated resection of a small tumor adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training performing spatial judgment tasks. In Phase II, three surgeons assessed the effectiveness of the AR neuronavigator in performing brain tumor targeting on the patient-specific head phantom. Results: Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potential of the AR neuronavigator to aid in determining the optimal surgical access to the surgical target. Conclusions: The AR neuronavigator is intuitive, easy to use, and provides three-dimensional augmented information in a perceptually correct way. The system proved effective in guiding skin incision, craniotomy, and lesion targeting. These preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumor resection procedures.
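The depth perception evoked by a stereoscopic see-through display comes from rendering each eye from a slightly offset viewpoint. A toy sketch of deriving per-eye poses from a tracked head pose, with an assumed interpupillary distance; values and names are illustrative, not the system's actual calibration:

```python
import numpy as np

def eye_poses(T_world_head, ipd_mm=63.0):
    """Derive left/right eye poses from a tracked head pose by offsetting
    half the interpupillary distance along the head's lateral (x) axis."""
    def offset(dx):
        O = np.eye(4)
        O[0, 3] = dx
        return T_world_head @ O
    return offset(-ipd_mm / 2.0), offset(+ipd_mm / 2.0)  # (left, right)
```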

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

Minimally invasive robot-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, mapping preoperative plans onto the surgical scene is conducted as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon's experience and subject to inconsistencies. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative behavior, presenting enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be generalized to other C-arm-based image guidance and further extensions in robotic surgery.
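Once the plan is registered, the per-frame overlay onto the robotic endoscope reduces to projecting plan geometry through the calibrated endoscope camera. A minimal sketch using OpenCV's standard projection call, assuming the plan-to-camera pose comes from the registration step; variable names are illustrative, not the dissertation's actual code:

```python
import cv2
import numpy as np

def overlay_plan(frame, plan_pts_3d, rvec, tvec, K, dist):
    """Draw registered planning points onto an endoscope frame.
    rvec/tvec: plan-to-camera pose obtained from registration;
    K, dist: endoscope intrinsics and distortion from prior calibration."""
    pts_2d, _ = cv2.projectPoints(plan_pts_3d, rvec, tvec, K, dist)
    for u, v in pts_2d.reshape(-1, 2):
        cv2.circle(frame, (int(u), int(v)), 3, (0, 255, 0), -1)  # one green dot per point
    return frame
```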

The Application of 3D-Printing to Lumbar Spine Surgery

Rapid prototyping refers to a manufacturing process in which a three-dimensional (3D) digital model is transformed into a physical model by layering material in the shape of successive cross-sections atop previous layers. Rapid prototyping has been increasing in popularity in medicine and surgery owing to its ability to personalize various aspects of patient care. This thesis explores the use of rapid prototyping in lumbar spine surgery, aims to quantify how accurately imaged structures correspond to the models produced from them by rapid prototyping, and determines whether complex patient-specific guides are accurate and safe.
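Quantifying how closely a printed model matches the imaged structure is commonly expressed as a nearest-neighbour surface deviation between co-registered point sets. A small sketch of one such metric, offered as an assumption rather than the thesis's actual protocol:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(model_pts, reference_pts):
    """Nearest-neighbour deviation of printed-model surface points (N x 3)
    from a CT-derived reference surface (M x 3); both already co-registered."""
    d, _ = cKDTree(reference_pts).query(model_pts)
    return d.mean(), d.max()  # mean and worst-case deviation, in input units
```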

    Computer vision techniques for a robot-assisted emergency neurosurgery system

This thesis presents the development of computer vision techniques for a robot-assisted emergency neurosurgery system being developed by the Mechatronics in Medicine group at Loughborough University, UK, and situates them within the context of the overall project. There are two main contributions. The first is a registration framework that establishes spatial correspondence between a preoperative plan of a patient (based on computed tomography images) and the patient. The registration is based on the rigid transformation of homologous anatomical soft-tissue point landmarks of the head, the medial canthus and tragus, in CT and patient space. As a step towards automating the registration, a computational framework to localise these landmarks in CT space, with performance comparable to manual localisation, has been developed. The second contribution is a set of computer vision techniques for a passive intraoperative supervisory system based on visual cues from the operative site. Specifically, the feasibility of using computer vision to assess the outcome of a surgical intervention was investigated. The ability to mimic and embody part of a surgeon's visual sensory and decision-making capability is aimed at improving the robustness of the robotic system. Low-level image features that distinguish the two possible outcomes, complete and incomplete, were identified. Encouraging results were obtained for the surgical actions under consideration, as demonstrated by experiments on cadaveric pig heads. The results suggest the potential of computer vision to assist surgical robotics in the operating theatre. The computational approaches developed to provide greater autonomy to the robotic system have the potential to improve current practice in robotic surgery. It is not inconceivable that the state of the art in surgical robotics can advance to a stage where it emulates the ability and interpretation process of a surgeon.
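Rigid registration from homologous point landmarks such as the medial canthi and tragi has a well-known closed-form least-squares solution (Kabsch/Horn). A compact sketch of that standard method, with array shapes and names assumed for illustration:

```python
import numpy as np

def rigid_register(ct_pts, patient_pts):
    """Least-squares rigid transform (Kabsch/Horn) mapping CT-space landmarks
    (N x 3) onto homologous patient-space landmarks (N x 3)."""
    A, B = np.asarray(ct_pts, float), np.asarray(patient_pts, float)
    a0, b0 = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - a0).T @ (B - b0))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
    R = Vt.T @ D @ U.T
    t = b0 - R @ a0
    return R, t  # p_patient ≈ R @ p_ct + t
```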

    X-ray based machine vision system for distal locking of intramedullary nails

In surgical procedures for femoral shaft fracture treatment, current techniques for locking the distal end of intramedullary nails with two screws rely heavily on two-dimensional X-ray images to guide a three-dimensional bone drilling process. A large number of X-ray images is therefore required, as the surgeon uses his or her skill and experience to locate the distal hole axes on the intramedullary nail. The long-term effects of X-ray radiation and their relation to different types of cancer remain uncertain, so there is a need for a surgical technique that limits the use of X-rays during the distal locking procedure. A robotic-assisted orthopaedic surgery system, the Loughborough Orthopaedic Assistant System (LOAS), has been developed at Loughborough University to assist orthopaedic surgeons during distal locking of intramedullary nails. It uses a calibration frame and a C-arm X-ray unit. The system simplifies the current approach, as it uses only two near-orthogonal X-ray images to determine the drilling trajectory of the distal locking holes, thereby considerably reducing irradiation of both the surgeon and patient. The LOAS differs from existing computer-assisted orthopaedic surgery systems in that it eliminates the need for optical tracking equipment, which tends to clutter the operating theatre environment, requires care in maintaining line of sight, and makes such systems expensive for surgical guidance in distal locking of intramedullary nails. This study is specifically concerned with improvements to the existing system. [Continues.]
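Determining a 3D drilling trajectory from two near-orthogonal X-ray images rests on triangulation: back-projecting the imaged hole centre into one ray per view and finding where the rays (nearly) intersect. A midpoint-method sketch, assuming calibrated ray origins and directions are available from the calibration frame; names are illustrative, not the LOAS implementation:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two back-projected X-ray rays (origin o,
    direction d per view), e.g. a distal hole centre seen in two images."""
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1) / np.linalg.norm(d1)
    d2 = np.asarray(d2) / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2
    denom = 1.0 - c * c          # near-orthogonal views keep this well away from 0
    s = (d1 @ b - c * (d2 @ b)) / denom
    t = (c * (d1 @ b) - d2 @ b) / denom
    # Midpoint of the shortest segment between the two rays
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```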

    Concept and Design of a Hand-held Mobile Robot System for Craniotomy

This work demonstrates a highly intuitive robot for surgical craniotomy procedures. Utilising a wheeled hand-held robot to navigate the craniotomy drill over a patient's skull, the system does not remove the surgeon from the procedure, but supports them during this critical phase of the operation.