
    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationships with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. Advances in acquisition techniques have led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal acquisitions that track treatment responses over time, and a growing number of imaging modalities used in a single procedure. Manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters. Our methods enable visualizations necessary for the diagnostic procedure, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical context to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and we also examined the computational performance of our methods in these scenarios.
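
    As a hedged illustration of the visibility histogram (VH) idea, the Python sketch below accumulates per-bin visibility along a single ray during front-to-back compositing. It uses uniform bins for brevity, whereas the thesis's method uses adaptive binning; all names are illustrative.

```python
import numpy as np

def visibility_histogram(samples, opacity_tf, n_bins=64):
    """Accumulate a visibility histogram (VH) for one ray.

    samples    : scalar values (normalized to [0, 1]) at the ray's
                 sample points, ordered front to back
    opacity_tf : transfer function mapping a scalar value to opacity
    """
    hist = np.zeros(n_bins)
    transmittance = 1.0                      # fraction of light still unabsorbed
    for s in samples:
        alpha = opacity_tf(s)                # opacity assigned by the TF
        hist[min(int(s * n_bins), n_bins - 1)] += transmittance * alpha
        transmittance *= 1.0 - alpha         # front-to-back compositing
        if transmittance < 1e-4:             # early ray termination
            break
    return hist                              # summed over all rays in practice
```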

    The residual STL volume as a metric to evaluate accuracy and reproducibility of anatomic models for 3D printing: application in the validation of 3D-printable models of maxillofacial bone from reduced radiation dose CT images.

    Background: The effects of reduced radiation dose CT on the generation of maxillofacial bone STL models for 3D printing are currently unknown. Images of two full-face transplantation patients scanned with non-contrast 320-detector row CT were reconstructed at fractions of the acquisition radiation dose using noise simulation software and both filtered back-projection (FBP) and Adaptive Iterative Dose Reduction 3D (AIDR3D). The maxillofacial bone STL model segmented by thresholding from AIDR3D images at 100 % dose was considered the reference. For all other dose/reconstruction-method combinations, a "residual STL volume" was calculated as the topologic subtraction of the STL model derived from that dataset from the reference, and was correlated with radiation dose.
    Results: The residual volume decreased with increasing radiation dose and was lower for AIDR3D than for FBP reconstructions at all doses. As a fraction of the reference STL volume, the residual volume decreased from 2.9 % (20 % dose) to 1.4 % (50 % dose) in patient 1, and from 4.1 % to 1.9 %, respectively, in patient 2 for AIDR3D reconstructions. For FBP reconstructions it decreased from 3.3 % (20 % dose) to 1.0 % (100 % dose) in patient 1, and from 5.5 % to 1.6 %, respectively, in patient 2. Its morphology resembled a thin shell on the osseous surface with an average thickness below 0.1 mm.
    Conclusion: The residual volume, a topologic difference metric for STL models of tissue depicted in DICOM images, supports that reducing CT dose by up to 80 % of the clinical acquisition, in conjunction with iterative reconstruction, yields maxillofacial bone models accurate enough for 3D printing.
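
    A minimal sketch of the residual-volume computation, under the assumption that the open-source trimesh library (with one of its optional mesh-boolean backends) stands in for the paper's actual tooling; file names are illustrative, not from the paper.

```python
import trimesh

# File names are illustrative.
reference = trimesh.load_mesh("bone_aidr3d_100pct.stl")  # AIDR3D, 100 % dose
test = trimesh.load_mesh("bone_fbp_20pct.stl")           # reduced-dose model

# Topologic subtraction of the test model from the reference; trimesh
# delegates boolean operations to an optional backend (e.g., Blender).
residual = trimesh.boolean.difference([reference, test])

# Report the residual as a fraction of the reference volume, the form
# in which the paper correlates it with radiation dose.
print(f"residual volume: {residual.volume / reference.volume:.1%} of reference")
```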

    Case Studies on X-Ray Imaging, MRI and Nuclear Imaging

    The field of medical imaging is an essential aspect of the medical sciences, using various physical phenomena (X-rays, magnetic fields, radioactive tracers) to capture images of the internal tissues and organs of the body. These images provide vital information for clinical diagnosis, and in this chapter we explore the use of X-ray, MRI, and nuclear imaging in detecting severe illnesses. However, manual evaluation and storage of these images can be challenging and time-consuming. To address this issue, artificial intelligence (AI)-based techniques, particularly deep learning (DL), have become increasingly popular for systematic feature extraction and classification from imaging modalities, thereby aiding doctors in making rapid and accurate diagnoses. In this review study, we focus on how AI-based approaches, particularly Convolutional Neural Networks (CNNs), can assist in disease detection through medical imaging technology. CNNs are a common choice for image analysis because of their ability to extract features from raw input images, and they are therefore the primary area of discussion in this study.
    Comment: 14 pages, 3 figures, 4 tables; chapter accepted for the Springer book "Data-driven approaches to medical imaging".
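
    As a hedged, concrete illustration of the kind of CNN pipeline the review discusses, the PyTorch sketch below stacks convolution and pooling layers to extract features from a raw grayscale scan and classify it; the architecture and sizes are illustrative, not taken from the chapter.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Toy CNN classifier: stacked convolution + pooling layers learn
    features from the raw image; a linear layer predicts the class."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1-channel scan
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 128 -> 64
        )
        self.classifier = nn.Linear(32 * 64 * 64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One 256x256 grayscale slice -> logits for (normal, diseased)
logits = SimpleCNN()(torch.randn(1, 1, 256, 256))
```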

    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    International Mention in the doctoral degree.
    In the last twenty years, a new way of practicing medicine has focused on the problems and needs of each patient as an individual, thanks to significant advances in healthcare technology: the so-called personalized medicine. In surgical treatments, personalization has been possible thanks to key technologies adapted to the specific anatomy of each patient and the needs of the physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing, and artificial intelligence (AI) have previously supported this individualized medicine in many ways. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows.
    In the first part of this dissertation, we present a novel framework for acral tumor resection that combines intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that kept the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). We achieved an overall system accuracy of 1.88 mm, evaluated on patient-specific 3D printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin. We then propose an augmented reality navigation system that uses 3D printed surgical guides with a tracking pattern, enabling automatic patient-to-image registration in orthopedic oncology. This specific tool fits on the patient only in a pre-designed location, in this case on bone tissue. The solution was developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor, and medical images overlaid on the phantom and the patient.
    To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto a real-world environment using a 3D printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user by combining free and open-source software with an open-access protocol. The proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers.
    In the next section of the thesis, we show the benefits of combining these technologies during different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient's anatomy and the tumor's location. A 3D printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration. The system was evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results, and the positive feedback obtained from surgeons and patients, suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and patients' experience.
    In the final section, two surgical navigation systems, based on optical tracking and augmented reality, were developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we propose a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including the automatic creation of sacral virtual 3D models for trajectory definition using artificial intelligence, and intraoperative patient-to-image registration.
    To conclude, in this thesis we have demonstrated that combining technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios.
    Doctoral Program in Biomedical Science and Technology, Universidad Carlos III de Madrid. Committee: President: María Jesús Ledesma Carbayo; Secretary: María Arrate Muñoz Barrutia; Member: Csaba Pinte
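
    The patient-to-image registration step that recurs throughout this thesis can be illustrated with the standard point-based rigid registration solution (Kabsch/Horn). This is a generic sketch assuming known fiducial correspondences (e.g., features of the 3D printed marker), not the thesis's exact implementation.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Point-based rigid registration (Kabsch/Horn) via SVD.

    fixed, moving : (N, 3) arrays of corresponding fiducial points,
    e.g. marker features in the camera frame and in the CT frame.
    Returns R, t such that R @ m + t maps moving into the fixed frame.
    """
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

def fre(fixed, moving, R, t):
    """Fiducial registration error: RMS distance after registration."""
    return np.sqrt(((fixed - (moving @ R.T + t)) ** 2).sum(axis=1).mean())
```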

    An Automated Approach of CT Scan Image Processing for Brain Tumor Identification and Evaluation

    Brain tumor identification and evaluation in medical diagnosis require computed tomography (CT) scans and image processing. Manual methods for detecting abnormal cell growth in brain tissue are both time-consuming and unreliable. This paper begins with a discussion of a clinical diagnosis case, comparing images of normal brain tissue with tumor-affected images. The affected area is first identified with a manual approach; an automated approach using NI LabVIEW software is then discussed for locating its exact position and evaluating it. The described method provides a quicker and more reliable automated way of diagnosing brain tumors. In view of this, automatic segmentation of brain MR images is needed to correctly segment the white matter (WM), cerebrospinal fluid (CSF), and gray matter (GM) tissues of the brain in a shorter span of time, since manual segmentation of brain tumors is an arduous job and may produce erroneous results.
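
    A hedged sketch of the kind of threshold-based localization pipeline the paper describes; the original work uses NI LabVIEW, so scikit-image stands in here and all parameters are illustrative.

```python
import numpy as np
from skimage import filters, measure, morphology

def locate_largest_region(ct_slice):
    """Threshold the slice, clean the mask, and return the centroid
    and area (in pixels) of the largest connected bright region.
    ct_slice : 2D float array (one CT slice)."""
    mask = ct_slice > filters.threshold_otsu(ct_slice)  # automatic threshold
    mask = morphology.remove_small_objects(mask, min_size=64)
    regions = measure.regionprops(measure.label(mask))  # connected components
    if not regions:
        return None
    largest = max(regions, key=lambda r: r.area)
    return largest.centroid, largest.area               # (row, col), pixel count
```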

    Treatment Planning Automation for Rectal Cancer Radiotherapy

    Background: Rectal cancer is a common type of cancer. There is an acute health disparity across the globe: a significant portion of the world's population lacks adequate access to radiotherapy, which is part of the standard of care for rectal cancers. Safe radiotherapy treatments require specialized planning expertise and are time-consuming and labor-intensive to produce.
    Purpose: To alleviate this health disparity and promote the safe, high-quality use of radiotherapy in treating rectal cancers, the entire treatment planning process needs to be automated. The purpose of this project is to develop automated solutions for the rectal cancer treatment planning process that produce clinically acceptable, high-quality plans. To achieve this goal, we first automated two common existing treatment techniques for rectal cancers, 3DCRT and VMAT, and then explored an alternative method for creating a treatment plan using deep learning.
    Methods: To automate the 3DCRT treatment technique, we used deep learning to predict the shapes of field apertures for primary and boost fields based on the CT scan and on the location and shapes of the gross tumor volume (GTV) and involved lymph nodes. The predicted apertures were evaluated by a GI radiation oncologist. We then designed an algorithm to automate the forward-planning process, with the capacity to add fields that homogenize the dose at the target volumes using the field-in-field technique. The algorithm was validated on the clinical apertures, and the resulting plans were scored by a radiation oncologist. The field aperture prediction and the algorithm were combined into an end-to-end process and tested on a separate set of patients; the resulting final plans were scored by a GI radiation oncologist for clinical acceptability. To automate the VMAT treatment technique, we used deep learning models to segment the clinical target volume (CTV) and organs at risk (OARs) and automated the inverse planning process based on a RapidPlan model. The end-to-end process requires only the GTV contour and a CT scan as inputs. Specifically, the segmentation models can auto-segment the CTV, bowel bag, large bowel, small bowel, total bowel, femurs, bladder, bone marrow, and female and male genitalia. All the OARs were contoured under the guidance of, and reviewed by, a GI radiation oncologist. For auto-planning, the RapidPlan model was designed for VMAT delivery with 3 arcs and validated separately by two GI radiation oncologists. Finally, the end-to-end pipeline was evaluated on a separate set of test patients, and the resulting plans were scored by two GI radiation oncologists. Existing inverse planning methods rely on 1D information from DVH values, 2D information from DVH lines, or 3D dose distributions using machine learning for plan optimization. This project explored the possibility of using deep learning to create 3D dose distributions directly for VMAT treatment plans. The training data consisted of patients treated with the VMAT technique in the short-course fractionation scheme, which uses 5 Gy per fraction for 5 fractions. Two deep learning architectures were investigated for their ability to emulate clinical dose distributions: 3D DDUNet and 2D cGAN. The top-performing model for each architecture was identified based on the differences in DVH values, DVH lines, and dose distributions between the predicted doses and the corresponding clinical plans.
    Results: For 3DCRT automation, the predicted apertures were 100%, 95%, and 87.5% clinically acceptable for the posterior-anterior, lateral, and boost apertures, respectively. The forward-planning algorithm created wedged plans that were 85% clinically acceptable with clinical apertures. The end-to-end workflow generated 97% clinically acceptable plans for the separate test patients. For the VMAT automation, 89% of CTV contours were clinically acceptable without necessary modifications, and all the OAR contours were clinically acceptable without edits except for the large and small bowels. The RapidPlan model produced 100% and 91% clinically acceptable plans according to two GI radiation oncologists. In the testing of the end-to-end workflow, 88% and 62% of the final plans were accepted by the two GI radiation oncologists. In the evaluation of the deep learning architectures, the top-performing DDUNet model used the medium patch size and inputs of the CT, the PTV mask times the prescription dose, the CTV, a 10 mm PTV expansion, and the external body structure; the model with CT, PTV, and CTV mask inputs performed best for the cGAN architecture. Both the DDUNet and cGAN architectures could predict 3D dose distributions whose DVH values were statistically the same as those of the clinical plans.
    Conclusions: We have successfully automated our institution's clinical workflow for generating either 3DCRT or VMAT radiotherapy plans for rectal cancer. This project showed that the existing treatment planning techniques for rectal cancer can be automated to generate clinically acceptable and safe plans with minimal inputs and no human intervention for most patients. It also showed that deep learning architectures can be used to predict dose distributions.
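
    Because the dose-prediction models above are compared through DVH values, a minimal sketch of how a cumulative DVH and a typical metric (D95) can be computed from a dose grid and structure mask may help; array names and bin counts are illustrative, not from the thesis.

```python
import numpy as np

def cumulative_dvh(dose, mask, n_bins=200):
    """Cumulative dose-volume histogram for one structure.

    dose : 3D array of voxel doses (Gy) on the planning grid
    mask : boolean 3D array selecting the structure's voxels
    Returns (dose_axis, fraction) where fraction[i] is the portion of
    the structure receiving at least dose_axis[i]."""
    d = dose[mask]
    dose_axis = np.linspace(0.0, d.max(), n_bins)
    fraction = np.array([(d >= level).mean() for level in dose_axis])
    return dose_axis, fraction

def d95(dose, mask):
    """D95: the minimum dose covering 95 % of the structure's volume,
    i.e. the 5th percentile of its voxel doses."""
    return np.percentile(dose[mask], 5)
```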