
    3D Reconstruction of CT Scans For Visualization in Virtual Reality

    Computed tomography allows analyzing the internal structure of an object, which is especially useful in medicine. The standard visualization displays scans in the 2D plane, whereas a 3D reconstruction of the scans provides a complete image of the morphology of the scanned object. Matlab is software commonly used for image processing and analysis, and its Medical Image Processing Toolbox can display data from CT scans in DICOM format. However, this toolbox cannot export the image dataset as a 3D object. The aim of this paper is therefore the implementation of a toolbox for loading the data and displaying it as a 3D reconstruction. The toolbox allows the user to export the data in OBJ or STL format, which in turn allows the user (i) to visualize the 3D models in virtual reality and (ii) to prepare the model for 3D printing. The OBJ model is imported into Blender and then exported with a texture as an object file. In Unity, we created a 3D scene and imported the model. The advantage of displaying the 3D model in virtual reality is a more realistic view of the shape and dimensions of the object.
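
    The paper's Matlab toolbox itself is not reproduced here, but the same pipeline (DICOM series → volume → isosurface → STL) can be sketched in a few lines of Python. This is a minimal sketch, assuming pydicom and scikit-image; the folder name and iso level are illustrative, and the Hounsfield rescale (RescaleSlope/RescaleIntercept) is omitted for brevity.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

def load_ct_volume(dicom_dir):
    """Read a DICOM series and stack it into a 3D volume (slices sorted by position)."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices]).astype(np.int16)

def volume_to_stl(volume, iso_level, out_path):
    """Extract an isosurface with marching cubes and save it as an ASCII STL mesh."""
    verts, faces, normals, _ = measure.marching_cubes(volume, level=iso_level)
    with open(out_path, "w") as f:
        f.write("solid ct_model\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)
            n = n / (np.linalg.norm(n) or 1.0)  # guard against degenerate triangles
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid ct_model\n")

volume = load_ct_volume("ct_series")  # hypothetical folder of .dcm slices
volume_to_stl(volume, iso_level=300, out_path="bone.stl")  # raw value near bone density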

    Real-time integration between Microsoft HoloLens 2 and 3D Slicer with demonstration in pedicle screw placement planning

    We established a direct communication channel between Microsoft HoloLens 2 and 3D Slicer to exchange transform and image messages between the platforms in real time. This allows us to seamlessly display a CT reslice of a patient in the AR world.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research supported by projects PI122/00601 and AC20/00102 (Ministerio de Ciencia, Innovación y Universidades, Instituto de Salud Carlos III, Asociación Española Contra el Cáncer and European Regional Development Fund “Una manera de hacer Europa”), project PerPlanRT (ERA PerMed), TED2021-129392B-I00 and TED2021-132200B-I00 (MCIN/AEI/10.13039/501100011033 and European Union “NextGenerationEU”/PRTR), and the EU Horizon 2020 research and innovation programme CONEX-Plus UC3M (grant agreement 801538). APC funded by Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
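
    The abstract does not name the transport layer, but OpenIGTLink is the protocol 3D Slicer typically uses for real-time transform and image streaming, so a plausible minimal sketch uses the pyigtl package. Port, device name, and polling loop follow pyigtl's documented usage; treat the details as assumptions rather than the authors' implementation.

```python
from time import sleep

import numpy as np
import pyigtl  # Python implementation of the OpenIGTLink protocol

# Serve messages on the conventional OpenIGTLink port; 3D Slicer's
# OpenIGTLinkIF module (or an AR client) connects to this socket.
server = pyigtl.OpenIGTLinkServer(port=18944)

# A 4x4 homogeneous transform, e.g. the current pose of a tracked tool;
# streaming it lets the receiver move a CT reslice plane in real time.
pose = np.eye(4)
pose[:3, 3] = [10.0, -5.0, 42.0]  # translation in millimeters

while True:
    if not server.is_connected():
        sleep(0.1)
        continue
    message = pyigtl.TransformMessage(pose, device_name="ToolToReference")
    server.send_message(message)
    sleep(0.05)  # ~20 Hz update rate
```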

    Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality

    Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression towards alternative digital and interactive 3D models to engage the learner. However, digital innovations in the curriculum have typically involved the medical rather than the veterinary curriculum. We therefore aimed to create a simple workflow methodology highlighting how straightforward it is to create a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was applied to augmented reality on a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions in the creation of a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this into other areas of veterinary education and beyond.
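
    The abstract keeps the toolchain generic, but one recurring step when moving segmented anatomy onto a mobile AR device is polygon reduction, since medical meshes are usually far denser than a phone renderer can handle. A minimal sketch with Open3D, assuming hypothetical file names and an arbitrary triangle budget:

```python
import open3d as o3d

# Load the surface model segmented from the CT/MRI data (filename assumed).
mesh = o3d.io.read_triangle_mesh("canine_head.obj")
print(f"input: {len(mesh.triangles)} triangles")

# Reduce the triangle count while preserving overall shape
# (the 20k target is an assumption, tuned per device in practice).
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
simplified.compute_vertex_normals()  # recompute normals for correct shading

o3d.io.write_triangle_mesh("canine_head_mobile.obj", simplified)
```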

    Computer-aided position planning of miniplates to treat facial bone defects

    In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The intra-operatively used bone plates have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models at the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures and always with the possibility of adjusting both the direction and the position of the osteosynthesis material. Using the proposed software, surgeons are able to pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. Furthermore, the resulting model can be stored in the STL file format, the format commonly used for 3D printing. With this technology, surgeons can print the virtually generated implant or create an individually designed bending tool. This method yields osteosynthesis materials adapted to the surrounding anatomy and requires a minimal amount of money and time.
    Comment: 19 pages, 13 figures, 2 tables
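
    The authors' fitting algorithm is not spelled out in the abstract, but one standard ingredient of adapting a plate to a bone contour is projecting the plate's control points onto the nearest bone-surface vertices. A sketch with SciPy, using toy data in place of a segmented skull mesh and a straight four-hole plate:

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_plate_to_bone(plate_points, bone_vertices):
    """Project each miniplate control point onto the nearest bone-surface vertex.

    plate_points:  (N, 3) points along the plate after the surgeon positions it
    bone_vertices: (M, 3) vertices of the segmented bone surface mesh
    """
    tree = cKDTree(bone_vertices)             # spatial index over the bone surface
    distances, nearest = tree.query(plate_points)
    return bone_vertices[nearest], distances  # adapted points + residual gaps

# Toy data standing in for a skull mesh and a straight 4-hole plate.
bone = np.random.rand(10000, 3) * 100.0
plate = np.array([[10.0, 20.0, 5.0], [16.0, 20.0, 5.0],
                  [22.0, 20.0, 5.0], [28.0, 20.0, 5.0]])
adapted, gaps = snap_plate_to_bone(plate, bone)
print("max plate-to-bone gap before adaptation: %.2f mm" % gaps.max())
```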

    Development of an open source software module for enhanced visualization during MR-guided interstitial gynecologic brachytherapy

    In 2010, gynecologic malignancies were the fourth leading cause of death in U.S. women, and for patients with extensive primary or recurrent disease, treatment with interstitial brachytherapy may be an option. However, brachytherapy requires precise insertion of hollow catheters with introducers into the tumor in order to eradicate the cancer. In this study, a software solution to assist interstitial gynecologic brachytherapy has been investigated and realized as a dedicated module for 3D Slicer, a free, open-source software platform for translational biomedical research. The developed research module allows on-the-fly processing of intra-operative magnetic resonance imaging (iMRI) data over a direct DICOM connection to the MR scanner, followed by a multi-stage registration of CAD models of the medical brachytherapy devices (template, obturator) to the patient's MR images, enabling the virtual placement of interstitial needles to assist the physician during the intervention.
    Comment: 9 pages, 6 figures
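
    The module was built on 3D Slicer's scripted-module mechanism; the skeleton below shows what such a module looks like. The module name and metadata are hypothetical (not the authors' actual module), and the code only runs inside Slicer's Python environment:

```python
import slicer
from slicer.ScriptedLoadableModule import (
    ScriptedLoadableModule, ScriptedLoadableModuleWidget)

class GynBrachyAssist(ScriptedLoadableModule):
    """Module descriptor shown in Slicer's module list (name is hypothetical)."""
    def __init__(self, parent):
        ScriptedLoadableModule.__init__(self, parent)
        self.parent.title = "Interstitial Brachytherapy Assistant"
        self.parent.categories = ["IGT"]
        self.parent.helpText = "Registers template/obturator CAD models to iMRI."

class GynBrachyAssistWidget(ScriptedLoadableModuleWidget):
    """GUI panel; a real module would add DICOM and registration controls here."""
    def setup(self):
        ScriptedLoadableModuleWidget.setup(self)
        # Load a CAD model of the brachytherapy template (path assumed).
        self.templateModel = slicer.util.loadModel("template.stl")
```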

    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    In the last twenty years, a new way of practicing medicine, so-called personalized medicine, has focused on the problems and needs of each patient as an individual, thanks to significant advances in healthcare technology. In surgical treatments, personalization has been possible thanks to key technologies adapted to the specific anatomy of each patient and the needs of the physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing and artificial intelligence (AI) have previously supported this individualized medicine in many ways. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows.
    In the first part of this dissertation, we present a novel framework for acral tumor resection that combines intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that kept the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). The overall accuracy of the system was 1.88 mm, evaluated on the patient-specific 3D printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin. We then propose an augmented reality navigation system that uses 3D printed surgical guides with a tracking pattern, enabling automatic patient-to-image registration in orthopedic oncology. This patient-specific tool fits on the patient only in a pre-designed location, in this case on bone tissue. The solution was developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing’s sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor and medical images overlaid on the phantom and the patient. To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto the real-world environment using a 3D printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user by combining free and open-source software with an open-access protocol. The proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers.
    In the next section of the thesis, we show the benefits of combining these technologies during different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient’s anatomy and the tumor’s location. A 3D printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration. The system was evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results, and the positive feedback obtained from surgeons and patients, suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and the patients’ experience.
    In the final section, two surgical navigation systems, based on optical tracking and augmented reality, were developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we proposed a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including the automatic creation of sacral virtual 3D models for trajectory definition using artificial intelligence, and intraoperative patient-to-image registration.
    To conclude, in this thesis we have demonstrated that the combination of technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios.
    International Mention in the doctoral degree. Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. Committee: President: María Jesús Ledesma Carbayo; Secretary: María Arrate Muñoz Barrutia; Member: Csaba Pinte
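
    A recurring building block across these systems is recovering the pose of a 3D printed marker from a camera image so that virtual anatomy can be anchored to it. The thesis's own marker design is not described here; the sketch below uses OpenCV's classic ArUco API (wrapped by cv2.aruco.ArucoDetector in newer releases) with assumed camera intrinsics and an assumed file name:

```python
import cv2
import numpy as np

# Camera intrinsics would come from a prior calibration (values assumed here).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

MARKER_MM = 50.0  # printed marker side length
# 3D corners of the marker in its own frame, matching detectMarkers' corner
# order (top-left, top-right, bottom-right, bottom-left).
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float64) * (MARKER_MM / 2.0)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")  # one video frame (filename assumed)
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    # rvec/tvec map marker coordinates into the camera frame; anatomy
    # registered to the marker can then be overlaid at the right pose.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    print("marker position in camera frame (mm):", tvec.ravel())
```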

    Toward Real-Time Video-Enhanced Augmented Reality for Medical Visualization and Simulation

    In this work we demonstrate two separate forms of augmented reality environments for use with minimally invasive surgical techniques. Chapter 2 demonstrates how a video feed from a webcam, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and to augment the video feed with computer-generated information, such as renderings of internal anatomy not visible beyond the image surface, resulting in a simple augmented reality environment. Chapter 3 details our implementation of a similar system extended with an external tracking system, specifically the Polaris Spectra optical tracker, and discusses the challenges and considerations this expansion raises. Because the tracking origin is relocated to a point other than the camera center, an additional registration step is necessary to establish the position of all components within the scene. This modification is expected to increase the accuracy and robustness of the system.
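
    That extra registration step can be solved point-based: measure a handful of fiducials in both the tracker frame and the camera frame, then compute the least-squares rigid transform between them. A sketch of the standard Kabsch/SVD solution in NumPy (the synthetic data is illustrative, not from the thesis):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst: (N, 3) paired fiducial coordinates, e.g. points measured in the
    tracker frame and the same points expressed in the camera frame.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation/translation from 4 fiducials.
rng = np.random.default_rng(0)
src = rng.random((4, 3)) * 100.0
R_true, _ = np.linalg.qr(rng.random((3, 3)))     # random orthonormal matrix
if np.linalg.det(R_true) < 0:                    # force a proper rotation
    R_true[:, 0] *= -1
dst = src @ R_true.T + np.array([5.0, -2.0, 10.0])
R, t = rigid_register(src, dst)
print("residual:", np.abs(src @ R.T + t - dst).max())
```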

    An Augmented Reality Platform for Preoperative Surgical Planning

    Research into new technologies for diagnosis, planning and medical treatment has enabled the development of computer tools that provide new ways of representing data obtained from a patient's medical images, such as computed tomography (CT) and magnetic resonance imaging (MRI). In this sense, augmented reality (AR) technologies provide a new form of data representation by combining conventional image-based analysis with the ability to superimpose virtual 3D representations of the organs of the human body onto the real environment. In this paper, the development of a generic computer platform based on augmented reality technology for preoperative surgical planning is presented. In particular, the surgeon can navigate the 3D models of the patient's organs in order to fully understand the anatomy and plan the surgical procedure in the best possible way. In addition, touchless interaction with the virtual organs is available through an armband equipped with electromyographic muscle sensors. To validate the system, we focused on navigation through the aorta for mitral valve repair surgery.
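
    The abstract does not expose the armband's API (the device reads like a Myo-style EMG band), but the touchless interaction reduces to mapping classified gestures onto transforms of the virtual organ model. A hypothetical sketch of that mapping in NumPy, with made-up gesture labels standing in for the classifier output:

```python
import numpy as np

def rotation_y(deg):
    """Homogeneous 4x4 rotation about the vertical axis."""
    a = np.radians(deg)
    R = np.eye(4)
    R[0, 0], R[0, 2] = np.cos(a), np.sin(a)
    R[2, 0], R[2, 2] = -np.sin(a), np.cos(a)
    return R

def scale(factor):
    """Homogeneous 4x4 uniform scale (zoom)."""
    S = np.eye(4)
    S[:3, :3] *= factor
    return S

# Hypothetical mapping from the armband's classified gestures to model updates.
GESTURE_ACTIONS = {
    "wave_left":  rotation_y(-10.0),   # rotate the organ model left
    "wave_right": rotation_y(10.0),    # rotate right
    "fist":       scale(1.1),          # zoom in
    "spread":     scale(0.9),          # zoom out
}

model_matrix = np.eye(4)  # pose of the virtual organ in the AR scene
for gesture in ["wave_right", "wave_right", "fist"]:  # stand-in EMG output
    model_matrix = GESTURE_ACTIONS[gesture] @ model_matrix
print(model_matrix.round(3))
```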