
    The ARIA trial protocol: a randomised controlled trial to assess the clinical, technical, and cost-effectiveness of a cloud-based, ARtificially Intelligent image fusion system in comparison to standard treatment to guide endovascular Aortic aneurysm repair

    Background: Endovascular repair of aortic aneurysmal disease is established owing to perceived advantages in patient survival, reduced postoperative complications, and shorter hospital lengths of stay. High spatial and contrast resolution 3D CT angiography images are used to plan the procedures and to inform device selection and manufacture, but in standard care the surgery is performed under image guidance from 2D X-ray fluoroscopy, with injection of nephrotoxic contrast material to visualise the blood vessels. This study aims to assess the benefit to patients, practitioners, and the health service of a novel image fusion medical device (Cydar EV), which makes this high-resolution 3D information available to operators at the time of surgery.

    Methods: The trial is a multi-centre, open-label, two-armed randomised controlled clinical trial of 340 patients, randomised 1:1 to either standard treatment in endovascular aneurysm repair or treatment using Cydar EV, a CE-marked medical device comprising cloud computing, augmented intelligence, and computer vision. The primary outcome is procedural time, with secondary outcomes of procedural efficiency, technical effectiveness, patient outcomes, and cost-effectiveness. Patients with a clinical diagnosis of AAA or TAAA suitable for endovascular repair and able to provide written informed consent will be invited to participate.

    Discussion: This trial is the first randomised controlled trial evaluating advanced image fusion technology in endovascular aortic surgery and is well placed to evaluate the effect of this technology on patient outcomes and cost to the NHS.

    Trial registration: ISRCTN13832085. Dec. 3, 202
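The protocol's 1:1 allocation of 340 patients is commonly implemented with permuted-block randomisation, which keeps the two arms balanced as patients accrue. The sketch below is only an illustration of that general technique, not the trial's actual randomisation procedure; the block size and seed are assumptions.

```python
import random

def block_randomise(n_patients, block_size=4, seed=0):
    """Allocate patients 1:1 to 'standard' or 'Cydar EV' using permuted
    blocks, so the arms stay balanced throughout accrual."""
    assert block_size % 2 == 0, "block size must split evenly between arms"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        # Each block holds an equal number of each arm, in random order.
        block = ["standard"] * (block_size // 2) + ["Cydar EV"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_patients]

arms = block_randomise(340)
print(arms.count("standard"), arms.count("Cydar EV"))  # 170 170
```

Because 340 is a multiple of the block size, the final split is exactly 170 per arm.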

    Interaction of hard and soft tissues at immediate and delayed implants, with an experimental design in a beagle dog model. Analysis of volume changes in hard and soft tissues.

    Objective: To study, by means of micro-CT and STL image analysis, hard- and soft-tissue changes around implants with a triangular shape (test) and a cylindrical shape (control), placed as delayed and immediate implants. Material and Methods: Test and control titanium implants were inserted in the mandibles of 8 beagle dogs. Each hemimandible received two implants placed in healed ridges and two immediate implants. Test and control implants were randomly assigned to post-extraction sockets and healed ridges. Silicone impressions were taken before implant placement and before sacrifice, which took place four weeks (T4) or twelve weeks (T12) after implant placement. Dental stone casts were optically scanned and analysed with image-analysis software to calculate changes in the soft-tissue contours. Tissue biopsies were processed for micro-CT analysis. Bone-implant contact (BIC) and the ratio of bone volume to total specimen volume (BV/TV) were calculated within a cylindrical volume of interest (VOI). A specific buccal VOI was then selected in all specimens to calculate the volume of bone, air, and implant, and finally a third VOI, containing only the buccal bone and the implant of the previous VOI, was selected within it and the same analysis was performed. Results: BIC and BV/TV values were similar for test and control implants. Less implant volume was found at the test implants at all sites; however, no differences were found between test and control implants in bone volume within the buccal VOI. The analysis of the buccal bone VOI gave a similar percentage of air in all specimens, indicating a similar bone composition at all sites.
The analysis of the soft-tissue contours revealed no differences between test and control implants. Conclusions: The present study showed no differences between the triangular and the cylindrical implants with respect to the percentage of integration, buccal bone volume, and soft-tissue contours.
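The BIC and BV/TV measures reported above are straightforward voxel counts once the micro-CT volume is segmented into bone and implant masks. The following is a minimal sketch of that computation, assuming boolean voxel masks as input and approximating the implant surface by a one-voxel 6-connected shell; it is an illustration, not the study's actual analysis pipeline.

```python
import numpy as np

def surface_shell(mask):
    """One-voxel 6-connected shell around a boolean implant mask."""
    shell = np.zeros_like(mask)
    for axis in range(mask.ndim):
        for shift in (1, -1):
            shell |= np.roll(mask, shift, axis=axis)
    return shell & ~mask

def bv_tv(bone, voi):
    """Bone volume over total volume, restricted to a volume of interest (VOI)."""
    return (bone & voi).sum() / voi.sum()

def bic(bone, implant):
    """Bone-implant contact: fraction of implant-surface voxels touching bone."""
    shell = surface_shell(implant)
    return (shell & bone).sum() / shell.sum()
```

For example, a single implant voxel fully surrounded by bone gives a BIC of 1.0, and BV/TV is simply the bone fraction of the chosen VOI.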

    X23D: Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data

    Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, based on planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters in the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving higher accuracy than the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images that were digitally reconstructed from the public CTSpine1K dataset. On unseen data, we achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a state-of-the-art counterpart method by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making based solely on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
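The F1 score reported for the reconstructions is a standard overlap metric between a predicted and a ground-truth occupancy grid: the harmonic mean of voxel-wise precision and recall. A minimal sketch, assuming boolean voxel grids as input (the paper's exact evaluation protocol may differ):

```python
import numpy as np

def voxel_f1(pred, gt):
    """Voxel-wise F1 score between predicted and ground-truth occupancy grids."""
    if pred.sum() == 0 or gt.sum() == 0:
        return 0.0
    tp = (pred & gt).sum()          # true-positive voxels
    precision = tp / pred.sum()     # fraction of predicted voxels that are correct
    recall = tp / gt.sum()          # fraction of true voxels that were recovered
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A perfect reconstruction gives 1.0; a prediction that recovers half the shape while hallucinating an equal amount of extra volume scores 0.5.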

    Prostate cancer in a new dimension

    The scope of this thesis is to reveal the three-dimensional morphology of prostate cancer and its benign mimickers, and to investigate parameters predictive of patient outcome on prostate cancer biopsies.

    Medical Image Registration: Statistical Models of Performance in Relation to the Statistical Characteristics of the Image Data

    For image-guided interventions, the imaging task often pertains to registering preoperative and intraoperative images within a common coordinate system. While the accuracy of the registration is directly tied to the accuracy of targeting in the intervention (and presumably the success of the medical outcome), there is relatively little quantitative understanding of the fundamental factors that govern image registration accuracy. A statistical framework is presented that relates models of image noise and spatial resolution to the task of registration, giving theoretical limits on registration accuracy and providing guidance for the selection of image acquisition and post-processing parameters. The framework is further shown to model the confounding influence of soft-tissue deformation in rigid image registration, accurately predicting the reduction in registration accuracy and revealing similarity metrics that are robust against such effects. Furthermore, the framework is shown to provide conceptual guidance in the development of a novel CT-to-radiograph registration method that accounts for deformation. The work also examines a learning-based method for deformable registration to investigate how the statistical characteristics of the training data affect the ability of the model to generalize to test data with differing statistical characteristics. The analysis provides insight into the benefits of statistically diverse training data for the generalizability of a neural network and is further applied to the development of a learning-based MR-to-CT synthesis method. Overall, the work yields a quantitative approach to theoretically and experimentally relate the accuracy of image registration to the statistical characteristics of the image data, providing a rigorous guide to the development of new registration methods.
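At its core, intensity-based rigid registration optimizes a similarity metric between a fixed and a transformed moving image. As a toy illustration of that idea (not the framework described above), the sketch below scores integer 2D translations with normalized cross-correlation, one of the standard similarity metrics, and keeps the best one; the brute-force search and wrap-around shifting are simplifying assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def register_translation(fixed, moving, max_shift=3):
    """Brute-force search over integer translations, maximizing NCC."""
    best_shift, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, candidate)
            if score > best_score:
                best_shift, best_score = (dy, dx), score
    return best_shift, best_score
```

Real registration pipelines replace the exhaustive search with continuous optimization over a full rigid (or deformable) transform, which is exactly where the statistical properties of the metric and the image noise start to matter.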

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon by using external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only as a visualization device improving traditional workflows. Consequently, the technology has yet to gain the maturity it requires to redefine new procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays when entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies.
The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
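Moving spatial information between the co-registered bodies described above (head-mounted display, imaging device, operating room) comes down to composing rigid transforms between coordinate frames. The sketch below illustrates that chaining with 4x4 homogeneous matrices; the frame names and calibration numbers are made up for the example and do not come from the dissertation.

```python
import numpy as np

def pose(theta_deg, t):
    """4x4 homogeneous transform: rotation about the z-axis, then translation t."""
    th = np.radians(theta_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                 [np.sin(th),  np.cos(th), 0.0],
                 [0.0,         0.0,        1.0]]
    T[:3, 3] = t
    return T

# Hypothetical calibrations: where the C-arm sits in the room frame, and
# where the room origin sits in the HMD frame (invented numbers).
T_room_from_carm = pose(90.0, [1.0, 0.0, 0.0])
T_hmd_from_room = pose(0.0, [0.0, 2.0, 0.0])

# Chaining carries a point expressed in C-arm coordinates into the HMD view.
T_hmd_from_carm = T_hmd_from_room @ T_room_from_carm
p_carm = np.array([0.0, 0.0, 0.0, 1.0])   # the C-arm origin, homogeneous
p_hmd = T_hmd_from_carm @ p_carm
print(p_hmd[:3])  # [1. 2. 0.]
```

The `A_from_B` naming convention makes the chaining self-checking: adjacent frame labels must match, which is one common way such AR systems keep track of many simultaneously tracked bodies.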