
    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging classes of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper provides a comprehensive review of recent trends and possibilities in CAOS systems. Surgical planning systems fall into three types: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound), systems that utilize 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints together with morphological information about the target bones. The review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools they use. We also outline the possibilities of ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improved surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Navigated Ultrasound in Laparoscopic Surgery


    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have become among the most significant challenges facing humanity. Cross-disciplinary frontier research combining information science, computer technology, electronic engineering and biomedical engineering applies modern engineering methods to explore the early diagnosis, treatment and rehabilitation of diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, three-dimensional visualization methods and interactive software techniques; and summarizes the clinical applications of various minimally invasive surgical methods. It also discusses the advantages and disadvantages of surgical navigation techniques in clinical use worldwide and analyzes the latest technical methods in the field. On this basis, it identifies development trends of minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and treatment, robotization and high intelligence.

    Abstract: Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the National Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.

    Desktop 3D Printing: Key for Surgical Navigation in Acral Tumors?

    Surgical navigation techniques have shown potential benefits in orthopedic oncologic surgery. However, translating these results to acral tumor resection surgeries is challenging due to the large number of joints with complex movements in the affected areas (located in the distal extremities). This study proposes a surgical workflow that combines an intraoperative open-source navigation software, based on multi-camera tracking, with desktop three-dimensional (3D) printing for accurate navigation of these tumors. Desktop 3D printing was used to fabricate patient-specific molds that ensure the distal extremity is in the same position both in the preoperative images and during image-guided surgery (IGS). The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in hand and foot). The validation involved deformation analysis of the 3D-printed mold after sterilization, accuracy of the system on patient-specific 3D-printed phantoms, and feasibility of the workflow during the surgical intervention. The sterilization process did not lead to significant deformation of the mold (mean error below 0.20 mm). The overall accuracy of the system, evaluated on the phantoms, was 1.88 mm. IGS guidance was feasible during both surgeries, allowing surgeons to verify an adequate margin during tumor resection. The results demonstrate the viability of combining open-source navigation and desktop 3D printing for acral tumor surgeries. The suggested framework can be easily personalized to any patient and could be adapted to other surgical scenarios. This work was supported by projects TEC2013-48251-C2-1-R (Ministerio de Economía y Competitividad); PI18/01625 and PI15/02121 (Ministerio de Ciencia, Innovación y Universidades, Instituto de Salud Carlos III and European Regional Development Fund "Una manera de hacer Europa") and IND2018/TIC-9753 (Comunidad de Madrid).
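    The error figures above (mold deformation below 0.20 mm, overall accuracy of 1.88 mm) are, as is standard in navigation phantom studies, mean Euclidean distances between navigated and reference positions. A minimal sketch of how such a metric is computed from paired landmark coordinates (the numbers below are placeholders, not the study's data):

    ```python
    import numpy as np

    def mean_localization_error(navigated: np.ndarray, reference: np.ndarray) -> float:
        """Mean Euclidean distance (mm) between navigated points and
        ground-truth reference points, both given as (N, 3) arrays."""
        assert navigated.shape == reference.shape
        return float(np.linalg.norm(navigated - reference, axis=1).mean())

    # Hypothetical phantom landmarks (mm); not the study's actual data.
    reference = np.array([[10.0, 5.0, 2.0], [22.5, 8.1, 4.3], [15.2, 19.7, 6.8]])
    navigated = reference + np.random.normal(scale=1.0, size=reference.shape)
    print(f"mean error: {mean_localization_error(navigated, reference):.2f} mm")
    ```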

    Image-guided liver surgery: intraoperative projection of computed tomography images utilizing tracked ultrasound

    Background: Ultrasound (US) is the most commonly used form of image guidance during liver surgery. However, the use of navigation systems that incorporate instrument tracking and three-dimensional visualization of preoperative tomography is increasing. This report describes an initial experience using an image-guidance system with navigated US. Methods: An image-guidance system was used in a total of 50 open liver procedures to aid in localization and targeting of liver lesions. An optical tracking system was employed to localize surgical instruments. Customized hardware and calibration of the US transducer were required. The results of three procedures are highlighted in order to illustrate specific navigation techniques that proved useful in the broader patient cohort. Results: Over a 7-month span, the navigation system assisted in completing 21 (42%) of the procedures, and tracked US alone provided additional information required to perform resection or ablation in six procedures (12%). Average registration time during the three illustrative procedures was <1 min. Average set-up time was approximately 5 min per procedure. Conclusions: The Explorer™ Liver guidance system represents novel technology that continues to evolve. This initial experience indicates that image guidance is valuable in certain procedures, specifically in cases in which difficult anatomy or tumour location or echogenicity limit the usefulness of traditional guidance methods.
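    The report does not spell out the registration algorithm behind the sub-minute registration times; a standard building block for point-based patient-to-image registration in such systems is least-squares rigid alignment of corresponding point sets via SVD (the Arun/Kabsch method). A minimal sketch with hypothetical fiducials, not the system's actual implementation:

    ```python
    import numpy as np

    def rigid_register(src: np.ndarray, dst: np.ndarray):
        """Least-squares rigid transform (R, t) mapping src -> dst,
        for corresponding (N, 3) point sets (SVD / Kabsch method)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the least-squares solution.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Hypothetical fiducials in image space and tracker (patient) space.
    image_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], float)
    t_true = np.array([5.0, -2.0, 8.0])
    patient_pts = image_pts + t_true                 # simulated tracked positions
    R, t = rigid_register(image_pts, patient_pts)
    residual = np.linalg.norm(image_pts @ R.T + t - patient_pts, axis=1).mean()
    print(f"mean fiducial residual: {residual:.3f} mm")
    ```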

    Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

    During laparoscopic liver resection, the limited access to the organ, the small field of view and the lack of palpation can obstruct a surgeon's workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help surgeons find their targets (tumors) and risk structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ's current intraoperative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ's intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the estimation of biomechanical behavior and allows the networks to tackle the full non-rigid registration problem in a single step. The result is a model which can quickly predict the volume deformation of a liver given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature-extraction capabilities of deep neural networks. To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is applicability to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding the setup of an external tracking system), various neural networks are used to quickly interpret the scene, and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data.
The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.
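    The networks described above consume voxelized inputs (the preoperative liver geometry plus the partial intraoperative surface) and output a dense displacement field. As an illustrative PyTorch sketch of that input/output structure only, with layer sizes and depth assumed for brevity rather than taken from the thesis:

    ```python
    import torch
    import torch.nn as nn

    class DisplacementNet(nn.Module):
        """Toy 3D CNN: voxelized preoperative liver (channel 0) plus partial
        intraoperative surface (channel 1) -> dense 3-channel displacement field."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv3d(32, 3, kernel_size=3, padding=1)  # (dx, dy, dz)

        def forward(self, x):
            return self.head(self.encoder(x))

    net = DisplacementNet()
    # Hypothetical 64^3 grid: one preop-volume / partial-surface pair.
    x = torch.randn(1, 2, 64, 64, 64)
    disp = net(x)                      # (1, 3, 64, 64, 64) displacement field
    # Training would regress disp against synthetic FEM ground truth, e.g.:
    loss = nn.functional.mse_loss(disp, torch.zeros_like(disp))
    print(disp.shape, loss.item())
    ```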

    Optimization of craniosynostosis surgery: virtual planning, intraoperative 3D photography and surgical navigation

    Craniosynostosis is a congenital defect defined as the premature fusion of one or more cranial sutures. This fusion leads to growth restriction and deformation of the cranium, caused by compensatory expansion parallel to the fused sutures. Surgical correction is the preferred treatment in most cases, excising the fused sutures to normalize cranial shape. Although multiple technological advances have been made in the surgical management of craniosynostosis, interventional planning and surgical correction still depend heavily on the subjective assessment and artistic judgment of craniofacial surgeons. There is therefore high variability in individual surgeon performance and, thus, in surgical outcomes. The main objective of this thesis was to explore different approaches to improve the surgical management of craniosynostosis by reducing subjectivity in all stages of the process, from the preoperative virtual planning phase to the intraoperative performance. First, we developed a novel framework for automatic planning of craniosynostosis surgery that enables calculating a patient-specific normative reference shape to target, estimating optimal bone fragments for remodeling, and computing the most appropriate configuration of fragments to achieve the desired target cranial shape. Our results showed that the automatic plans were accurate and achieved adequate overcorrection with respect to normative morphology. Surgeons' feedback indicated that integrating this technology could increase the accuracy and reduce the duration of the preoperative planning phase. Second, we validated the use of hand-held 3D photography for intraoperative evaluation of the surgical outcome. The accuracy of this technology for 3D modeling and morphology quantification was evaluated using computed tomography imaging as the gold standard. Our results demonstrated that 3D photography can produce accurate 3D reconstructions of the anatomy during surgical interventions and measure morphological metrics that provide feedback to the surgical team. This technology presents a valuable alternative to computed tomography imaging and can be easily integrated into the current surgical workflow to assist during the intervention. We also developed an intraoperative navigation system to provide real-time guidance during craniosynostosis surgeries. This system, based on optical tracking, records the positions of remodeled bone fragments and compares them with the target virtual surgical plan. It uses patient-specific surgical guides, which fit the patient's anatomy, to perform patient-to-image registration, and the workflow requires neither immobilization of the patient's head nor invasive attachment of dynamic reference frames. After testing our system in five craniosynostosis surgeries, our results demonstrated high navigation accuracy and optimal surgical outcomes in all cases. Furthermore, the use of navigation did not substantially increase the operative time. Finally, we investigated the use of augmented reality as an alternative to navigation for surgical guidance in craniosynostosis surgery. We developed an augmented reality application to visualize the virtual surgical plan overlaid on the surgical field, indicating the predefined osteotomy locations and the target bone fragment positions. Our results demonstrated that augmented reality provides sub-millimetric accuracy when guiding both the osteotomy and remodeling phases during open cranial vault remodeling. Surgeons' feedback indicated that this technology could be integrated into the current surgical workflow for the treatment of craniosynostosis. To conclude, in this thesis we evaluated multiple technological advancements to improve the surgical management of craniosynostosis. Integrating these developments into the craniosynostosis surgical workflow will positively impact surgical outcomes, increase the efficiency of surgical interventions, and reduce variability between surgeons and institutions.
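    For the 3D-photography validation against computed tomography as the gold standard, accuracy is commonly summarized as nearest-neighbour distances from the test surface to the reference surface. A hedged sketch of that computation using a k-d tree, with placeholder point clouds standing in for the actual meshes:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def surface_distances(test_pts: np.ndarray, reference_pts: np.ndarray) -> np.ndarray:
        """Nearest-neighbour distance (mm) from each 3D-photo point to the
        CT-derived reference point cloud; both arrays are (N, 3)."""
        tree = cKDTree(reference_pts)
        d, _ = tree.query(test_pts)
        return d

    # Placeholder clouds standing in for the CT surface and the 3D photograph.
    ct_cloud = np.random.rand(5000, 3) * 100.0        # mm
    photo_cloud = ct_cloud[:2000] + np.random.normal(scale=0.5, size=(2000, 3))
    d = surface_distances(photo_cloud, ct_cloud)
    print(f"mean {d.mean():.2f} mm, 95th percentile {np.percentile(d, 95):.2f} mm")
    ```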

    Review on Image Guided Surgery Systems

    Modern imaging techniques can provide excellent-quality 3D images that clearly show the anatomy, vascularity, pathology and active functions of tissues. The ability to register these preoperative images to each other, to offer comprehensive information, and later to register the image space to the patient space intraoperatively, is the core of image-guided surgery (IGS) systems. Another main element of such systems is tracking the surgical tools intraoperatively and reflecting their positions within the 3D image model. In some cases an intraoperative image may be acquired and registered to the preoperative images to ensure that the 3D model used to guide the operation reflects the actual situation at surgery time. This survey reviews the history of IGS, discusses the modern system components required for reliable application, and describes the different medical specialties that have benefited from the use of IGS.
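    The bookkeeping described above, registering image space to patient space and reflecting tracked tool positions in the 3D model, amounts to composing homogeneous transforms. A minimal sketch under assumed 4x4 matrix conventions (all transforms and offsets below are hypothetical):

    ```python
    import numpy as np

    def homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    # Hypothetical transforms: tracker->image (from registration) and
    # tool->tracker (streamed by the optical tracking system).
    T_image_from_tracker = homogeneous(np.eye(3), np.array([12.0, -4.0, 30.0]))
    T_tracker_from_tool = homogeneous(np.eye(3), np.array([0.0, 0.0, 150.0]))

    tip_in_tool = np.array([0.0, 0.0, 10.0, 1.0])    # calibrated tool-tip offset
    tip_in_image = T_image_from_tracker @ T_tracker_from_tool @ tip_in_tool
    print("tool tip in image coordinates (mm):", tip_in_image[:3])
    ```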