
    Markerless navigation system for orthopaedic knee surgery: a proof of concept study

    Current computer-assisted surgical navigation systems mainly rely on optical markers screwed into the bone for anatomy tracking. The insertion of these percutaneous markers increases operating complexity and causes additional harm to the patient. A markerless tracking and registration algorithm has recently been proposed to avoid anatomical markers in knee surgery: femur points are segmented directly from the recorded RGBD scene by a neural network and then registered to a pre-scanned femur model to obtain the real-time pose. However, in a practical setup such a method can produce unreliable registration results, especially in rotation, and its potential application in surgical navigation has not been demonstrated. In this paper, we first improved markerless registration accuracy by adopting a bounded-ICP (BICP) technique, in which an estimate of the remote hip centre, also acquired in a markerless way, constrains the distal femur alignment. We then proposed a proof-of-concept markerless navigation system to assist in typical knee drilling tasks. Two example setups for global anchoring were proposed and tested on a phantom leg. Our BICP-based markerless tracking and registration method has better angular accuracy and stability than the original method, bringing our straightforward, less invasive markerless navigation approach one step closer to clinical application. According to user tests, our proposed optically anchored navigation system achieves accuracy comparable with the state of the art (3.64±1.49 mm in position and 2.13±0.81° in orientation). Conversely, our visually anchored, optical tracker-free setup has lower accuracy (5.86±1.63 mm in position and 4.18±1.44° in orientation), but is more cost-effective and flexible in the operating room.
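The bounded-ICP idea described above can be illustrated with a short sketch: a standard ICP loop in which the markerless hip-centre estimate enters as one extra, heavily weighted correspondence that constrains rotation. This is a generic illustration of the concept, not the authors' implementation; the function names, the weighting scheme and the brute-force matching are assumptions of ours.

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    # Weighted Kabsch: best-fit R, t so that R @ src[i] + t ~ dst[i].
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = ((src - mu_s) * w[:, None]).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def bounded_icp(model_pts, scene_pts, hip_model, hip_scene,
                hip_weight=50.0, iters=30):
    # ICP in which the remote hip centre acts as one extra, heavily
    # weighted correspondence that bounds the rotational drift.
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model_pts @ R.T + t
        # Brute-force nearest-neighbour correspondences (for clarity only).
        d2 = ((moved[:, None, :] - scene_pts[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        src = np.vstack([model_pts, hip_model])
        dst = np.vstack([scene_pts[idx], hip_scene])
        w = np.concatenate([np.ones(len(model_pts)), [hip_weight]])
        R, t = weighted_rigid_fit(src, dst, w)
    return R, t
```

In practice the scene points would come from the segmented RGBD femur surface, and a k-d tree would replace the brute-force nearest-neighbour search.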

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised during the last two decades. According to the tracked pose of the target bone, anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and calibrated manually to it, which introduces the risk of additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic optical see-through (OST) head-mounted display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented with a synthetic dataset. Our segmentation network outperforms the state of the art in occlusion robustness, device-agnostic behaviour and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The improved markerless tracking achieves a clinically acceptable error of 0.95° and 2.17 mm in a phantom test.
OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial to ensure locationally coherent surgical guidance. Current calibration methods are either prone to human error or hardly applicable to commercial devices. To this end, we propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods prove superior to similar state-of-the-art (SOTA) methods in calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verified the usability of our OST system with an experienced orthopaedic surgeon in a cadaver study. Our test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
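Once the offline calibration is known, applying it at render time reduces to a homogeneous-transform chain mapping tracked points into the display frame. The sketch below assumes numpy and a hypothetical calibration matrix; the real values would come from the camera-based calibration step the abstract describes.

```python
import numpy as np

def rigid(R, t):
    # Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical offline calibration result: the display frame sits 2 cm below
# and 5 cm in front of the optical tracker's origin (illustrative numbers only).
T_display_from_tracker = rigid(np.eye(3), np.array([0.0, -0.02, 0.05]))

def to_display_frame(p_tracker, T=T_display_from_tracker):
    # Map a tracked 3D point (metres, tracker frame) into display coordinates.
    return (T @ np.append(p_tracker, 1.0))[:3]
```

The online alignment-based refinement would amount to re-estimating (and left-multiplying) a small correction transform onto `T_display_from_tracker` per user.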

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Development of an augmented reality guided computer assisted orthopaedic surgery system

    Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system – ARgCAOS. After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, allowing virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated and, utilising the tracking information, controlled resection was performed, producing sub-millimetre accuracies. Several complications resulted from the tool-mounted approach, so a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user, augmenting the user's natural view and providing convincing and immersive three-dimensional augmented guidance, with probing and resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.

    Computer Vision Solutions for Range of Motion Assessment

    Joint range of motion (ROM) is an important indicator of physical functionality and musculoskeletal health. In sports, athletes require adequate levels of joint mobility to minimize the risk of injuries and maximize performance, while in rehabilitation, restoring joint ROM is essential for faster recovery and improved physical function. Traditional methods for measuring ROM include goniometry, inclinometry and visual estimation, all of which are limited in accuracy due to the subjective nature of the assessment. With the rapid development of technology, new systems based on computer vision are continuously introduced as a possible solution for more objective and accurate measurement of range of motion. This article therefore aimed to evaluate novel computer vision-based systems in terms of their accuracy and practical applicability for range of motion assessment. The review covers a variety of systems, including motion-capture systems (2D and 3D cameras), RGB-depth cameras, commercial software systems and smartphone apps. Furthermore, this article highlights the potential limitations of these systems and explores their potential future applications in sports and rehabilitation.

    Investigation of in-vivo hindfoot and orthotic interactions using bi-planar x-ray fluoroscopy

    A markerless RSA method was used to determine the effect of orthotics on the normal, pes planus and pes cavus populations. Computed tomography (CT) was used to create bone models that were imported into the virtual environment. Joint coordinate systems were developed to measure kinematic changes in the hindfoot during weight-bearing gait and quiet standing. The objectives of this thesis were to (1) implement a fluoroscopy-based markerless RSA system on the foot, (2) determine the effect of various orthotics at midstance of fully weight-bearing dynamic gait, and (3) determine the effect of orthotics as measured using three different techniques. Every individual in this study reacted differently depending on the footwear condition tested. Although the changes in alignment caused by orthotics did not reach statistical significance, they may prove significant with more subjects. Fluoroscopy should enable substantial improvements in orthotic design for optimal results in the future.

    Hip Joint Center Localization with an Unscented Kalman Filter

    The accurate estimation of the hip joint centre (HJC) is a basic requirement in gait analysis and in computer assisted orthopaedic procedures. Functional methods, based on rigid body localisation, which assess the kinematics of the femur during circumduction movements (pivoting), have been used for estimating the HJC. Localising the femoral segment only, as is usually done in total knee replacement procedures, can give rise to estimation errors, since the pelvis might undergo spatial displacements during the passive pivoting manoeuvre. This paper presents the design and test of an unscented Kalman filter that estimates the HJC by observing the pose of the femur and the 3D coordinates of a single marker attached to the pelvis. This new approach was validated using a hip joint mechanical simulator mimicking both hard and soft tissues. The algorithm's performance was compared with literature standards and proved better in cases of pelvis translation greater than 8 mm, thus satisfying the clinical requirements of the application.
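For contrast with the proposed filter, the classical functional approach the paper builds on can be written as a least-squares sphere fit to the trajectory of a femur-fixed point during circumduction, with the fitted centre taken as the HJC. This is a generic sketch of that baseline, not the paper's unscented Kalman filter; the function name and formulation are ours.

```python
import numpy as np

def fit_sphere(points):
    # Algebraic least-squares sphere fit: returns (centre, radius).
    # From ||p - c||^2 = r^2 we get the linear system
    #   2 p.c + (r^2 - ||c||^2) = ||p||^2   for each trajectory point p.
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)
    return centre, radius
```

The pelvis-motion problem the paper addresses is visible here: the fit assumes the centre is stationary, so any pelvis displacement during pivoting corrupts the estimate.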

    Advanced Imaging and Robotics Technologies for Medical Applications

    Due to the importance of surgery in the medical field, a large amount of research has been conducted in this area. Imaging and robotics technologies provide surgeons with an advanced eye and hand to perform surgery in a safer and more accurate manner. Recently, medical images have been utilized in the operating room as well as at the diagnostic stage. If the image-to-patient registration is performed with sufficient accuracy, medical images can be used as "a map" for guidance to the target lesion. However, the accuracy and reliability of a surgical navigation system should be sufficiently verified before applying it to the patient. Along with the development of medical imaging, various medical robots have also been developed. In particular, surgical robots have been researched with the goal of minimal invasiveness. The most important factors to consider are determining the demand, the strategy for their use in operating procedures, and how they aid patients. In addition, medical doctors and researchers should always think from the patient's point of view. In this article, the latest medical imaging and robotic technologies focusing on surgical applications are reviewed based upon the factors described above. © 2011 Taylor and Francis Group, LLC.

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, through emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. Work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems, creating a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting.
Pre-clinical experimental validation on a commercial system (NAVIO®, Smith & Nephew) for these contributions demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities.
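The spatial transformation between surgical and holographic coordinate frames mentioned above is, at its core, a rigid alignment of paired fiducial positions observed in both frames. A minimal sketch, assuming a standard Kabsch/Umeyama solution with numpy; the function name and the absence of scaling or outlier handling are our simplifications.

```python
import numpy as np

def align_frames(pts_surgical, pts_holo):
    # Least-squares rigid transform (Kabsch, no scaling) mapping
    # optical-tracker coordinates to holographic-display coordinates
    # from paired fiducial positions observed in both frames.
    mu_s, mu_h = pts_surgical.mean(0), pts_holo.mean(0)
    H = (pts_surgical - mu_s).T @ (pts_holo - mu_h)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_h - R @ mu_s
    return R, t
```

At least three non-collinear fiducials are needed; in a calibration protocol like the one described, more pairs are collected and the residual of this fit doubles as a registration-accuracy check.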