
    Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery

    Purpose: Joint fracture surgery quality can be improved by a robotic system offering high-accuracy, high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. Methods: The control strategy merges fast open-loop control with vision-based control. This two-phase process is designed to eliminate open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Control system accuracy was evaluated in robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. Results: The system achieved highly reliable fracture reduction, with a reduction accuracy of 0.09 mm (translations) and (Formula presented.) (rotations), maximum observed errors on the order of 0.12 mm (translations) and (Formula presented.) (rotations), and a reduction repeatability of 0.02 mm and (Formula presented.). Conclusions: The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing to a potential improvement in their quality.
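    The two-phase strategy described above — a fast open-loop move followed by closed-loop correction from tracker feedback — can be illustrated with a minimal simulation. This is not the paper's controller: the proportional gain, noise model, tolerance, and all names below are illustrative assumptions.

    ```python
    import numpy as np

    def reduce_fragment(target, start, gain=0.6, tolerance=0.05,
                        max_iters=50, seed=0):
        """Two-phase positioning: a fast open-loop move toward `target`
        (which leaves a residual bias), then closed-loop correction
        driven by simulated optical-tracking feedback."""
        rng = np.random.default_rng(seed)
        # Phase 1: open-loop move -- fast but lands near, not on, the target.
        pose = start + (target - start) + rng.normal(scale=0.5, size=3)
        # Phase 2: close the loop on the tracker-measured residual.
        for _ in range(max_iters):
            residual = target - pose            # visual feedback
            if np.linalg.norm(residual) < tolerance:
                break
            pose = pose + gain * residual       # proportional correction
        return pose

    target = np.array([10.0, -4.0, 2.5])        # desired fragment position (mm)
    final = reduce_fragment(target, start=np.zeros(3))
    print(np.linalg.norm(final - target))       # well below the 0.05 mm tolerance
    ```

    The point of the sketch is the structure, not the numbers: the open-loop phase converges quickly but carries a bias, and the feedback phase contracts the remaining error geometrically until it is inside the tolerance.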

    Image-Guided Surgical Robotic System for Percutaneous Reduction of Joint Fractures

    Complex joint fractures often require an open surgical procedure, which is associated with extensive soft tissue damage and longer hospitalization and rehabilitation times. Percutaneous techniques can potentially mitigate these risks, but their application to joint fractures is limited by current sub-optimal 2D intra-operative imaging (fluoroscopy) and by the high forces involved in fragment manipulation (due to the presence of soft tissue, e.g., muscles), which might result in fracture malreduction. Integration of robotic assistance and 3D image guidance can potentially overcome these issues. The authors propose an image-guided surgical robotic system for the percutaneous treatment of knee joint fractures, i.e., the robot-assisted fracture surgery (RAFS) system. It allows simultaneous manipulation of two bone fragments, a safer robot-bone fixation system, and a traction-performing robotic manipulator. This system has led to a novel clinical workflow and has been tested both in the laboratory and in clinically relevant cadaveric trials. The RAFS system was tested on 9 cadaver specimens and was able to reduce 7 out of 9 distal femur fractures (T- and Y-shape 33-C1) with acceptable accuracy (≈1 mm, ≈5°), demonstrating its applicability to fixing knee joint fractures. This study paved the way for developing novel technologies for the percutaneous treatment of complex fractures, including hip, ankle, and shoulder fractures, thus representing a step toward minimally invasive fracture surgery.

    Fracture Detection in Traumatic Pelvic CT Images

    Fracture detection in pelvic bones is vital for diagnostic decisions and treatment planning in traumatic pelvic injuries. Manual detection of bone fractures from computed tomography (CT) images is very challenging due to the low resolution of the images and the complexity of pelvic structures. Automated fracture detection from segmented bones can significantly help physicians analyze pelvic CT images and assess the severity of injuries in a very short time. This paper presents an automated hierarchical algorithm for bone fracture detection in pelvic CT scans using adaptive windowing, boundary tracing, and the wavelet transform while incorporating anatomical information. Fracture detection builds on the results of prior pelvic bone segmentation via our registered active shape model (RASM). The results are promising and show that the method is capable of detecting fractures accurately.
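    The core signal-processing idea behind wavelet-based fracture detection — a discontinuity along a traced bone boundary produces a large wavelet detail coefficient — can be shown with a toy sketch. This is not the paper's hierarchical algorithm; the single-level Haar transform, the simulated radius profile, and all names are illustrative assumptions.

    ```python
    import numpy as np

    def haar_detail(signal):
        """One level of the Haar wavelet transform: detail coefficients
        are large where the signal jumps (a candidate break)."""
        s = np.asarray(signal, dtype=float)
        if len(s) % 2:                       # pad to an even length
            s = np.append(s, s[-1])
        return (s[0::2] - s[1::2]) / np.sqrt(2.0)

    # Simulated boundary profile: a smooth bone contour with a step
    # discontinuity where the cortical boundary is broken.
    n = 128
    profile = 40.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, n))
    profile[63:70] -= 6.0                    # the simulated "fracture" gap

    detail = haar_detail(profile)
    suspect = int(np.argmax(np.abs(detail))) * 2   # map back to sample index
    print(suspect)
    ```

    The smooth sinusoidal variation yields near-zero detail coefficients, while the step at the gap edge dominates, so thresholding the detail magnitudes localizes the discontinuity.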

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades and continues to transform surgical interventions, enabling safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of this approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during the intervention, improving the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models).
For orthopaedic pelvic trauma surgery, the dissertation encompasses: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registrations using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.

    Segmentation and Fracture Detection in CT Images for Traumatic Pelvic Injuries

    In recent decades, more types and quantities of medical data have been collected thanks to advances in technology. These medical data contain a large amount of significant and critical information. Highly efficient and automated computational methods are urgently needed to process and analyze all available medical data in order to provide physicians with recommendations and predictions on diagnostic decisions and treatment planning. Traumatic pelvic injury is a severe yet common injury in the United States, often caused by motor vehicle accidents or falls. The information contained in pelvic Computed Tomography (CT) images is very important for assessing the severity and prognosis of traumatic pelvic injuries. Each pelvic CT scan includes a large number of slices, and each slice contains a large quantity of data that cannot be thoroughly and accurately analyzed via simple visual inspection with the desired accuracy and speed. Hence, a computer-assisted pelvic trauma decision-making system is needed to assist physicians in making accurate diagnostic decisions and determining treatment plans in a short period of time. Pelvic bone segmentation is a vital step in analyzing pelvic CT images and assisting physicians with diagnostic decisions in traumatic pelvic injuries. In this study, a new hierarchical segmentation algorithm is proposed to automatically extract multiple-level bone structures using a combination of anatomical knowledge and computational techniques. First, morphological operations, image enhancement, and edge detection are performed for preliminary bone segmentation. The proposed algorithm then uses a template-based best-shape-matching method that provides an entirely automated segmentation process. This is followed by the proposed Registered Active Shape Model (RASM) algorithm, which extracts pelvic bone tissues using more robust training models than the standard ASM algorithm.
In addition, a novel hierarchical initialization process for RASM is proposed to address a shortcoming of the standard ASM, namely its high sensitivity to initialization. Two suitable measures are defined to evaluate the segmentation results: Mean Distance and Mis-segmented Area, which quantify segmentation accuracy. Successful segmentation results indicate the effectiveness and robustness of the proposed algorithm. Segmentation performance is also compared between the proposed method and the Snake method. A cross-validation process is designed to demonstrate the effectiveness of the training models. 3D pelvic bone models are built after pelvic bone structures are segmented from consecutive 2D CT slices. Automatic and accurate detection of fractures from segmented bones in traumatic pelvic injuries can help physicians assess the severity of injuries in patients. The extraction of fracture features (such as the presence and location of fractures), as well as fracture displacement measurement, is vital for assisting physicians in making faster and more accurate decisions. In this project, after bone segmentation, fracture detection is performed using a hierarchical algorithm based on wavelet transformation, adaptive windowing, boundary tracing, and masking. A quantitative measure of fracture severity based on pelvic CT scans is also defined and explored. The results are promising, demonstrating that the proposed method is not only capable of automatically detecting both major and minor fractures, but also has potential for clinical applications.
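    A Mean Distance measure of the kind named above is commonly computed as an average nearest-neighbour distance between two segmented contours. The sketch below is one plausible (one-sided) formulation under that assumption; the function name and the synthetic contours are illustrative, not the dissertation's implementation.

    ```python
    import numpy as np

    def mean_contour_distance(contour_a, contour_b):
        """Mean nearest-neighbour distance from each point of contour A
        to contour B -- a simple one-sided surrogate for a Mean
        Distance segmentation-accuracy measure."""
        a = np.asarray(contour_a, dtype=float)
        b = np.asarray(contour_b, dtype=float)
        # Pairwise distances, then the closest B point for every A point.
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean()

    # Two concentric circular contours one unit apart, sampled at the
    # same angles, so every nearest-neighbour distance is exactly 1.
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    inner = np.c_[10 * np.cos(t), 10 * np.sin(t)]
    outer = np.c_[11 * np.cos(t), 11 * np.sin(t)]
    md = mean_contour_distance(inner, outer)
    print(md)   # ≈ 1.0
    ```

    A symmetric variant would average the A-to-B and B-to-A means; either way, a value near zero indicates the automatic contour closely follows the reference one.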

    Optimal orientation estimators for detection of cylindrical objects

    This paper introduces low-level operators in the context of detecting cylinder axes in 3-D images. Knowing the axis of a cylinder is particularly useful, since the cylinder's location, length, and curvature derive from this knowledge. This paper introduces a new gradient-based optimal operator dedicated to accurate estimation of the direction toward the axis. The operator relies on Finite Impulse Response filters. The approach is presented first in a 2-D context, providing optimal gradient masks for locating the centers of circular objects. Then a 3-D extension is provided, allowing exact estimation of the orientation toward the axis of cylindrical objects when this axis coincides with one of the mask reference axes. Applied to more general cylinders and to noisy data, the operator still provides accurate estimation and outperforms classical gradient operators.
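    The 2-D case — using image gradients to locate the center of a circular object — can be sketched with plain finite differences: at every strong edge pixel the gradient is radial, so the center is the least-squares intersection of all gradient lines. This sketch uses `np.gradient` rather than the paper's optimal FIR masks, and all names and thresholds are illustrative assumptions.

    ```python
    import numpy as np

    def circle_center_from_gradients(image):
        """Estimate a circle's center: at each strong edge pixel the
        intensity gradient points toward (or away from) the center, so
        the center is the least-squares intersection of the gradient
        lines through all edge pixels."""
        gy, gx = np.gradient(image.astype(float))   # axis 0 = y, axis 1 = x
        mag = np.hypot(gx, gy)
        ys, xs = np.nonzero(mag > 0.5 * mag.max())  # strong edges only
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for x, y, u, v in zip(xs, ys, gx[ys, xs] / mag[ys, xs],
                              gy[ys, xs] / mag[ys, xs]):
            # Project onto the normal of the gradient line through (x, y).
            P = np.eye(2) - np.outer([u, v], [u, v])
            A += P
            b += P @ np.array([x, y], dtype=float)
        return np.linalg.solve(A, b)

    # Synthetic test image: a filled disc of radius 15 centered at (40, 25).
    yy, xx = np.mgrid[0:64, 0:96]
    disc = ((xx - 40.0) ** 2 + (yy - 25.0) ** 2 < 15.0 ** 2).astype(float)
    cx, cy = circle_center_from_gradients(disc)
    print(cx, cy)   # close to (40, 25)
    ```

    The paper's contribution is the design of gradient masks that make this radial-direction estimate exact; the simple finite-difference version above already recovers the center to within a pixel on clean data.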

    Advances in identifying osseous fractured areas and virtually reducing bone fractures

    The aim of this work is the development of computer-assisted techniques to help specialists in the pre-operative planning of bone fracture reduction. As a result, intervention time may be reduced and potential misinterpretations circumvented, with consequent benefits for the treatment and recovery time of the patient. The computer-assisted planning of a bone fracture reduction may be divided into three main stages: identification of bone fragments from medical images, computation of the reduction and subsequent stabilization of the fracture, and evaluation of the obtained results. The identification stage may also include the generation of 3D models of bone fragments, with the purpose of obtaining models useful for the two subsequent stages. This thesis deals with the identification of bone fragments from CT scans, the generation of 3D models of bone fragments, and the computation of the fracture reduction, excluding the use of fixation devices. Thesis, Univ. Jaén, Departamento de Informática. Defended 19 September 201

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered merely a visualization device layered onto traditional workflows. Consequently, the technology has not gained the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies.
The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches toward improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.

    Book of Abstracts 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and 3rd Conference on Imaging and Visualization

    In this edition, the two events will run together as a single conference, highlighting the strong connection with the Taylor & Francis journals: Computer Methods in Biomechanics and Biomedical Engineering (John Middleton and Christopher Jacobs, Eds.) and Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization (João Manuel R.S. Tavares, Ed.). The conference has become a major international meeting on computational biomechanics, imaging, and visualization. In this edition, the main program includes 212 presentations. In addition, sixteen renowned researchers will give plenary keynotes, addressing current challenges in computational biomechanics and biomedical imaging. In Lisbon, for the first time, a session dedicated to awarding the winner of the Best Paper in CMBBE Journal will take place. We believe that CMBBE2018 will have a strong impact on the development of computational biomechanics and biomedical imaging and visualization, identifying emerging areas of research and promoting collaboration and networking between participants. This impact is evidenced by the well-known research groups, commercial companies, and scientific organizations that continue to support and sponsor the CMBBE meeting series. In fact, the conference is enriched with five workshops on specific scientific topics and commercial software.