    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging classes of systems in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper provides a comprehensive review of recent trends and possibilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound images), systems that utilize 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in CAOS systems. We also outline the possibilities of using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
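
    As a rough illustration of this three-way taxonomy, the sketch below models the planning-system categories as a small Python data structure; the names and the classify helper are hypothetical, not part of the review.

        # Hypothetical sketch: the review's three planning-system categories
        # modelled as a small Python taxonomy. Names are illustrative only.
        from enum import Enum, auto

        class PlanningSystemType(Enum):
            VOLUMETRIC = auto()    # CT, MRI or ultrasound volumes
            FLUOROSCOPIC = auto()  # 2D or 3D fluoroscopic images
            IMAGE_FREE = auto()    # joint kinetics + bone morphology

        def classify(source: str) -> PlanningSystemType:
            """Map an input data source to its planning-system category."""
            s = source.lower()
            if s in {"ct", "mri", "ultrasound"}:
                return PlanningSystemType.VOLUMETRIC
            if "fluoro" in s:
                return PlanningSystemType.FLUOROSCOPIC
            return PlanningSystemType.IMAGE_FREE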

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
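
    As a concrete illustration of one family of passive optical techniques covered by such reviews, the sketch below performs stereo 3D surface reconstruction with OpenCV's semi-global block matcher. It assumes a rectified laparoscopic image pair and a reprojection matrix from prior calibration; the file names are hypothetical placeholders.

        # Minimal sketch of passive stereo reconstruction, one family of the
        # optical techniques surveyed. Assumes a rectified laparoscopic image
        # pair and a reprojection matrix Q from prior stereo calibration; the
        # file names are hypothetical placeholders.
        import cv2
        import numpy as np

        left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

        # Semi-global block matching; parameters would need tuning for tissue.
        stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                       blockSize=5)
        disparity = stereo.compute(left, right).astype(np.float32) / 16.0

        Q = np.load("Q.npy")  # 4x4 reprojection matrix from stereo calibration
        points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 coordinates
        surface = points[disparity > 0]                # keep valid matches only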

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision of surgical actions. However, human-computer interaction in CAOS remains an evolving field, driven by emerging display technologies including augmented reality (AR): a view of the real environment fused with virtual, computer-generated holograms. Interactions between clinicians and patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis explored the benefits of AR in CAOS through: an integration of commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR in CAOS systems and promising indications that AR can enhance clinicians' interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities.
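
    The calibration step above computes a spatial transformation between surgical and holographic coordinate frames. A standard way to estimate such a rigid transform from paired 3D points (for example, probe-tip positions recorded in both frames) is the Kabsch/SVD method, sketched below as a generic illustration rather than the thesis's exact protocol.

        # Illustrative sketch (not the thesis's exact protocol): estimating a
        # rigid transform between optical-tracker and holographic frames from
        # paired 3D points via the standard Kabsch/SVD method.
        import numpy as np

        def rigid_transform(P: np.ndarray, Q: np.ndarray):
            """Find R, t minimising ||R @ P_i + t - Q_i|| over Nx3 point sets."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t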

    Implementation of safe human robot collaboration for ultrasound guided radiation therapy

    This thesis shows that safe human-robot interaction and collaboration are possible for ultrasound (US) guided radiotherapy. With the chosen methodology, all components (US, optical room monitoring and robot) were linked, integrated and realized in a realistic clinical workflow. US-guided radiotherapy offers a complement and alternative to existing image-guided therapy approaches. The real-time capability and high soft-tissue contrast of US allow target structures to be tracked and radiation delivery to be modulated. However, ultrasound-guided radiation therapy (USgRT) is not yet clinically established and is still under development, as reliable and safe methods of image acquisition are not yet available. In particular, loss of contact between the US probe and the patient surface poses a problem during patient movements such as breathing. To address this, a breathing and motion compensation (BaMC) method was developed in this work, which, together with the safe control of a lightweight robot, represents a new development for USgRT. The developed BaMC can be used to control the US probe while it remains in contact with the patient. The conducted experiments confirmed that the developed methodology ensures steady contact with the patient surface and thus continuous image acquisition. In addition, the image position in space can be accurately maintained in the submillimeter range. The BaMC integrates seamlessly into the developed clinical workflow. The graphical user interfaces developed for this purpose, as well as direct haptic control of the robot, provide an easy interaction option for the clinical user. The developed autonomous positioning of the transducer is a good example of the feasibility of the approach: with the help of the user interface, an acoustic plane can be defined and autonomously approached by the robot in a time-efficient and precise manner. The tests carried out show that this methodology is suitable for a wide range of transducer positions. Safety in a human-robot interaction task is essential and requires individually customized concepts. In this work, adequate monitoring mechanisms were found to ensure both patient and staff safety. Collision tests showed that the implemented detection measures work and that the robot moves into a safe parking position. The forces acting on the patient could thus be kept well below the limits required by the relevant standard. This work has demonstrated the first important steps towards safe robot-assisted ultrasound imaging, which is not only applicable to USgRT. The developed interfaces provide the basis for further investigations in this field, especially in the area of image recognition, for example to determine the position of the target structure. With the safety of the developed system demonstrated, first-in-human studies can now follow.
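
    To illustrate the kind of contact regulation a breathing and motion compensation loop requires, the sketch below shows a minimal proportional force controller that keeps a probe pressed against a moving surface while enforcing a hard safety stop. All gains, limits and interfaces are hypothetical, not the thesis's actual controller.

        # Conceptual sketch of the contact regulation a breathing-compensation
        # loop requires: a proportional force controller with a hard safety
        # stop. Gains, limits and interfaces are hypothetical.
        TARGET_FORCE_N = 5.0   # desired probe contact force
        MAX_FORCE_N = 15.0     # safety limit, below standard-mandated bounds
        K_P = 0.002            # metres of probe travel per Newton of error

        def contact_step(measured_force_n: float, dt: float) -> float:
            """Probe displacement (m) along the contact normal for one cycle."""
            if measured_force_n > MAX_FORCE_N:
                raise RuntimeError("force limit exceeded: move to parking pose")
            error = TARGET_FORCE_N - measured_force_n
            return K_P * error * dt  # push in when light, back off when hard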

    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design comprising a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of the patient, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for the flexible production of customized products and tasks where bimanual or multi-robot cooperation is required. (Accepted by IEEE Transactions on Industrial Informatics.)
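
    The statistical encoding of demonstrations is commonly realised with a Gaussian mixture model followed by Gaussian mixture regression (GMR). The sketch below shows this generic approach for a single output dimension; it is an assumption-laden illustration, not the authors' exact implementation.

        # Hedged sketch: one common statistical encoding of demonstrations is
        # a Gaussian mixture model with Gaussian mixture regression (GMR).
        # This generic 1-D illustration is not the authors' implementation.
        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        def gmr_reference(demos: np.ndarray, n_components: int = 5):
            """demos: stacked (time, position) samples from all hand-sewing
            demonstrations, shape (N, 2). Returns time -> expected position."""
            gmm = GaussianMixture(n_components=n_components).fit(demos)
            mu, cov, pi = gmm.means_, gmm.covariances_, gmm.weights_

            def reference(t: float) -> float:
                # responsibility of each component for this time step
                w = np.array([p * norm.pdf(t, m[0], np.sqrt(c[0, 0]))
                              for p, m, c in zip(pi, mu, cov)])
                w /= w.sum()
                # conditional mean of position given time, per component
                cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0])
                        for m, c in zip(mu, cov)]
                return float(w @ np.array(cond))

            return reference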

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised over the last two decades. Based on the tracked pose of the target bone, anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance to reach the goal with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and manually calibrated to it, which introduces the risk of additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic optical see-through (OST) head-mounted display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented with a synthetic dataset. Our segmentation network outperforms the state of the art regarding occlusion robustness, device-agnostic behaviour, and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The improved markerless tracking achieves a clinically acceptable error of 0.95 deg and 2.17 mm in a phantom test. OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial to ensure location-coherent surgical guidance. Current calibration methods are either prone to human error or hardly applicable to commercial devices. To this end, we propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods are shown to be superior to similar state-of-the-art (SOTA) methods regarding calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verify the usability of the designed OST system in a cadaver study with an experienced orthopaedic surgeon. Our test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
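
    For context, registering a segmented bone surface to a bone model is classically done with the Iterative Closest Point (ICP) algorithm. The generic point-to-point ICP below is only a baseline sketch; the thesis's Bounded ICP (BICP) variant is not reproduced here.

        # Generic point-to-point ICP sketch for surface-to-model pose
        # registration; the thesis's Bounded ICP (BICP) is a novel variant
        # whose specifics are not reproduced here.
        import numpy as np
        from scipy.spatial import cKDTree

        def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
            """Align an Nx3 source cloud (e.g. segmented bone surface)
            to an Mx3 target cloud (e.g. bone model)."""
            tree = cKDTree(target)
            src = source.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(iters):
                _, idx = tree.query(src)               # nearest-neighbour matches
                matched = target[idx]
                cS, cT = src.mean(axis=0), matched.mean(axis=0)
                H = (src - cS).T @ (matched - cT)      # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
                R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
                t = cT - R @ cS
                src = src @ R.T + t                    # apply increment
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total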

    Teleoperation of MRI-Compatible Robots with Hybrid Actuation and Haptic Feedback

    Image-guided surgery (IGS), which has been developing rapidly, benefits significantly from the superior accuracy of robots and from magnetic resonance imaging (MRI), an excellent soft-tissue imaging modality. Teleoperation is especially desirable in MRI because of the highly constrained space inside the closed bore and the lack of haptic feedback in fully autonomous robotic systems. It also keeps the human in the loop, which significantly enhances safety. This dissertation describes the development of teleoperation approaches and their implementation on an example system for MRI, with details of the key components. The dissertation first describes the general teleoperation architecture with modular software and hardware components. The MRI-compatible robot controller and driving technology, as well as the robot navigation and control software, are introduced. As a crucial step in determining the robot location inside the MRI, two methods of registration and tracking are discussed. The first utilizes the existing Z-shaped fiducial frame design with a newly developed multi-image registration method, achieving higher accuracy with a smaller fiducial frame. The second is a new fiducial design with a cylindrical frame that is especially suitable for needle registration and tracking. Alongside it, a single-image-based algorithm is developed that not only reaches higher accuracy but also runs faster. In addition, a performance-enhanced fiducial frame integrating self-resonant coils is also studied. A surgical master-slave teleoperation system for percutaneous interventional procedures under continuous MRI guidance is presented. The slave robot is a piezoelectric-actuated needle insertion robot with an integrated fiber-optic force sensor. The master robot is a pneumatic-driven haptic device that not only controls the position of the slave robot but also renders the force associated with needle placement interventions to the surgeon. The mechanical design, kinematics, force sensing and feedback technologies of both the master and slave robots are discussed. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system. MRI compatibility is evaluated extensively. Teleoperated needle steering is also demonstrated under live MR imaging. Finally, a control system for a clinical-grade MRI-compatible parallel 4-DOF surgical manipulator for minimally invasive in-bore prostate percutaneous interventions through the patient's perineum is discussed. The proposed manipulator takes advantage of four sliders actuated by piezoelectric motors and incremental rotary encoders, which are compatible with the MRI environment. Two generations of optical limit switches are designed to provide better safety features for real clinical use, and the performance of both generations is tested. The MRI-guided accuracy and MRI compatibility of the whole robotic system are also evaluated. Two clinical prostate biopsy cases have been conducted with this assistive robot.
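
    A minimal sketch of the position-forward, force-feedback exchange at the heart of such a master-slave loop is shown below; the scaling factors, deadband and interfaces are hypothetical placeholders, not the dissertation's actual controller.

        # Illustrative position-forward / force-feedback bilateral
        # teleoperation cycle; scaling factors, deadband and interfaces
        # are hypothetical.
        import numpy as np

        POSITION_SCALE = 0.5    # master metres -> slave metres (motion scaling)
        FORCE_SCALE = 1.0       # slave Newtons -> master Newtons
        FORCE_DEADBAND_N = 0.2  # suppress sensor noise below this magnitude

        def teleop_cycle(master_pos: np.ndarray, slave_force: np.ndarray):
            """One control cycle: map master motion to the needle robot and
            render the measured insertion force back on the haptic master."""
            slave_cmd = POSITION_SCALE * master_pos
            f = np.where(np.abs(slave_force) < FORCE_DEADBAND_N,
                         0.0, slave_force)
            master_feedback = FORCE_SCALE * f
            return slave_cmd, master_feedback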

    Robot Autonomy for Surgery

    Autonomous surgery involves surgical tasks being performed by a robot under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also offer new capabilities in interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.