
    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to traditional visualization tools, which lack 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Manual segmentation, currently the most common technique, and semi-automatic approaches are time consuming because they require a doctor, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms, and augmented and virtual reality for the visualization, we designed a complete platform that solves this problem and allows medical professionals to work routinely with anatomical 3D models obtained from medical imaging. Through its different software applications, the Nextmed project permits the import of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then generated automatically; it can be 3D printed or visualized with the designed software using both augmented and virtual reality. The Nextmed project is unique in covering the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to 3D visualization of medical images; however, those systems are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Applying the platform to more than 1000 DICOM images and reviewing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
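The abstract describes an automatic pipeline from DICOM images to a segmented structure. As a minimal, hypothetical sketch of its very first step, thresholding a CT volume into a lung mask (the Hounsfield-unit bounds are illustrative, not Nextmed's actual algorithm):

```python
import numpy as np

# Illustrative HU band for aerated lung tissue; Nextmed's real
# segmentation algorithms are AI-based and not reproduced here.
HU_LUNG_MIN, HU_LUNG_MAX = -1000, -400

def segment_lung(volume_hu):
    """Return a boolean mask of voxels whose intensity falls in the
    (assumed) Hounsfield-unit range typical of aerated lung."""
    return (volume_hu >= HU_LUNG_MIN) & (volume_hu <= HU_LUNG_MAX)

# Synthetic 3-slice "CT" volume: soft tissue (~40 HU) with a
# lung-like low-density block (-800 HU) in the middle slice.
vol = np.full((3, 8, 8), 40.0)
vol[1, 2:6, 2:6] = -800.0
mask = segment_lung(vol)
print(int(mask.sum()))  # 16 voxels inside the lung-like block
```

A real pipeline would follow this with morphological cleanup and surface extraction (e.g. marching cubes) to obtain the printable 3D mesh the abstract mentions.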

    Need for speed: Achieving fast image processing in acute stroke care

    This thesis aims to investigate the use of high-performance computing (HPC) techniques in developing imaging biomarkers to support the clinical workflow for acute stroke patients. In the first part of this thesis, we evaluate different HPC technologies and how they can be leveraged by the image analysis applications used in acute stroke care. More specifically, Chapter 2 evaluates how computers with multiple computing devices can be used to accelerate medical imaging applications. The size of CT perfusion (CTP) datasets makes data transfers to computing devices time-consuming and, therefore, unsuitable in acute situations; Chapter 3 addresses this with a novel data compression technique that efficiently processes CTP images on GPUs. Chapter 4 further evaluates the usefulness of the algorithm proposed in Chapter 3 with two different applications: a double-threshold segmentation and a time-intensity profile similarity (TIPS) bilateral filter to reduce noise in CTP scans. Finally, Chapter 5 presents a cloud platform for deploying high-performance medical applications for acute stroke patients. In Part 2 of this thesis, Chapter 6 presents a convolutional neural network (CNN) for the detection and volumetric segmentation of subarachnoid hemorrhages (SAH) in non-contrast CT scans. Chapter 7 proposes another CNN-based method to quantify final infarct volumes in follow-up non-contrast CT scans from ischemic stroke patients.
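Of the applications listed, the double-threshold segmentation is the simplest to illustrate. A minimal sketch, assuming the operation keeps values inside a band (the thresholds below are arbitrary placeholders, not the thesis's clinical values, and the real version runs on GPUs):

```python
import numpy as np

def double_threshold(image, low, high):
    """Keep pixels whose value lies inside the closed band [low, high]."""
    return (image >= low) & (image <= high)

# Toy 2x3 "perfusion" image; in the thesis this runs over large
# 4D CTP datasets, which motivates the GPU compression scheme.
img = np.array([[0.0, 50.0, 120.0],
                [80.0, 200.0, 60.0]])
seg = double_threshold(img, 50.0, 120.0)
print(int(seg.sum()))  # 4 pixels fall inside the band
```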

    Ubiquitous volume rendering in the web platform

    The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that provide specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene; therefore, content developers do not need any GPU knowledge. The thesis extends previous research on web-compatible volume data structures for WebGL, ray-casting hybrid surface and volumetric rendering, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve the volume rendering component. The proposals are at an advanced stage towards acceptance by the Web3D Consortium.
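The core of the ray-casting the abstract refers to is front-to-back compositing of samples along each ray. A toy sketch of that accumulation step (in Python for readability; the thesis generates this logic as GPU shader code, and a real renderer would apply a transfer function rather than using raw voxel values):

```python
import numpy as np

def composite(volume, alpha):
    """Composite slices front-to-back along axis 0.
    `alpha` holds per-voxel opacities in [0, 1]; each ray keeps a
    running transmittance and accumulates weighted colour."""
    color = np.zeros(volume.shape[1:])
    transmit = np.ones(volume.shape[1:])  # remaining transmittance per ray
    for c_slice, a_slice in zip(volume, alpha):
        color += transmit * a_slice * c_slice
        transmit *= (1.0 - a_slice)
    return color

vol = np.array([[[1.0]], [[0.5]]])  # two samples along a single ray
alp = np.array([[[0.5]], [[1.0]]])
out = composite(vol, alp)
print(float(out[0, 0]))  # 0.5*1.0 + 0.5*1.0*0.5 = 0.75
```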

    Im2mesh: A Python Library to Reconstruct 3D Meshes from Scattered Data and 2D Segmentations, Application to Patient-Specific Neuroblastoma Tumour Image Sequences

    The future of personalised medicine lies in the development of increasingly sophisticated digital twins, where patient-specific data are fed into predictive computational models that support clinicians' decisions on the best therapies or courses of action to treat the patient’s afflictions. The development of these personalised models from image data requires a segmentation of the geometry of interest, an estimation of intermediate or missing slices, a reconstruction of the surface, generation of a volumetric mesh, and the mapping of the relevant data into the reconstructed three-dimensional volume. A wide range of tools, spanning both classical and artificial intelligence methodologies, help to overcome the difficulties in each stage, usually relying on a combination of different software in a multistep process. In this work, we develop an all-in-one approach wrapped in a Python library called im2mesh that automates the whole workflow, which starts by reading a clinical image and ends by generating a 3D finite element mesh with the interpolated patient data. We apply this workflow to an example of a patient-specific neuroblastoma tumour. The main advantages of our tool are its straightforward use and its easy integration into broader pipelines.
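One stage in the workflow above is estimating intermediate or missing slices between segmented ones. As a rough, hypothetical sketch of that idea (im2mesh's actual interpolation method is not reproduced here; this simply blends two binary masks and re-thresholds):

```python
import numpy as np

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Estimate a slice at fractional position t between two binary
    segmentations (t=0 -> mask_a, t=1 -> mask_b) by blending and
    re-thresholding. A crude stand-in for shape interpolation."""
    blend = (1.0 - t) * mask_a.astype(float) + t * mask_b.astype(float)
    return blend >= 0.5

a = np.zeros((5, 5), bool); a[1:4, 1:4] = True  # 3x3 square
b = np.zeros((5, 5), bool); b[2, 2] = True      # single central pixel
mid = interpolate_slice(a, b)
```

Production tools typically interpolate signed distance fields of the contours instead, which yields smoother intermediate shapes than this mask blend.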

    Applied medical informatics for neuroanatomy training

    In recent years, efforts to apply developments in medical informatics within training contexts have increased. The objective of this paper is to illustrate the benefits associated with the use of three-dimensional digital visualization systems in a neuroanatomical training context, and to evaluate students' satisfaction with these tools and their perceived usefulness. The three-dimensional models generated allowed interactive anatomical study of brain structures and their spatial relationships in a complete, realistic, and visually appealing manner for students, regardless of their prior visuo-spatial skills.

    Towards Automatic Prediction of Outcome in Treatment of Cerebral Aneurysms

    Intrasaccular flow disruptors treat cerebral aneurysms by diverting the blood flow from the aneurysm sac. Residual flow into the sac after the intervention is a failure that could be due to the use of an undersized device, or to the vascular anatomy and clinical condition of the patient. We report a machine learning model based on over 100 clinical and imaging features that predicts the outcome of wide-neck bifurcation aneurysm treatment with an intravascular embolization device. We combine clinical features with a diverse set of common and novel imaging measurements within a random forest model. We also develop neural network segmentation algorithms in 2D and 3D to contour the sac in angiographic images and automatically calculate the imaging features. These achieve 90% overlap with manual contouring in 2D and 83% in 3D. Our predictive model classifies complete vs. partial occlusion outcomes with an accuracy of 75.31% and a weighted F1-score of 0.74.
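The 90%/83% agreement with manual contouring is an overlap score; a minimal sketch, assuming the standard Dice similarity coefficient on binary masks (the paper's exact overlap metric is not specified here):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect trivial agreement)."""
    inter = np.logical_and(pred, truth).sum()
    total = int(pred.sum()) + int(truth.sum())
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 1x4 masks: automatic contour vs. manual ground truth.
p = np.array([[1, 1, 0, 0]], bool)
t = np.array([[1, 0, 1, 0]], bool)
print(dice(p, t))  # 2*1 / (2+2) = 0.5
```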
