
    Correlation of pre-operative cancer imaging techniques with post-operative gross and microscopic pathology images

    In this paper, different algorithms for volume reconstruction from tomographic cross-sectional pathology slices are described and tested. A tissue-mimicking phantom made from a mixture of agar and aluminium oxide was sliced at different thicknesses following standard pathology guidelines. The phantom model was also virtually sliced and reconstructed in software. Results showed that the shape-based spline interpolation method was the most precise, but generated a volume underestimation of 0.5%.
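
    A minimal sketch of the shape-based interpolation idea evaluated above, assuming binary slice masks as input: signed distance maps of adjacent slices are blended and re-thresholded to estimate intermediate cross-sections before the voxel volume is summed. The function names and the toy phantom below are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Positive inside the contour, negative outside (in pixels)."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slices(mask_a: np.ndarray, mask_b: np.ndarray, n_between: int):
    """Shape-based interpolation: blend the signed distance maps of two
    parallel slices and re-threshold at zero."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    weights = np.linspace(0.0, 1.0, n_between + 2)[1:-1]  # exclude the endpoints
    return [((1 - w) * da + w * db) > 0 for w in weights]

# Toy phantom: two circles of different radii with 3 interpolated slices between.
yy, xx = np.mgrid[:64, :64]
slice_a = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
slice_b = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
stack = [slice_a, *interpolate_slices(slice_a, slice_b, 3), slice_b]
voxel_volume = sum(m.sum() for m in stack)  # voxel units; scale by spacing for mm^3
print(voxel_volume)
```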

    New disagreement metrics incorporating spatial detail – applications to lung imaging

    Evaluation of medical image segmentation is increasingly important. While set-based agreement metrics are widespread, they assess absolute overlap but fail to account for any spatial information related to the differences or to the shapes being analyzed. In this paper, we propose a family of new metrics that can be tailored to a broad class of assessment needs.
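
    As an illustration of why spatial detail matters, the sketch below pairs the set-based Dice coefficient with an average symmetric surface distance; these are standard metrics used as a stand-in, not the family of metrics proposed in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Set-based overlap: insensitive to where the disagreement occurs."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask."""
    return mask & ~binary_erosion(mask)

def average_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Spatially aware disagreement: mean distance between the two boundaries."""
    dist_to_a = distance_transform_edt(~surface(a))
    dist_to_b = distance_transform_edt(~surface(b))
    return 0.5 * (dist_to_b[surface(a)].mean() + dist_to_a[surface(b)].mean())

# Two segmentations with the same overlap can differ in where they disagree.
ref = np.zeros((64, 64), bool)
ref[20:40, 20:40] = True
seg = np.zeros((64, 64), bool)
seg[22:42, 20:40] = True  # same shape, shifted by 2 voxels
print(dice(ref, seg), average_surface_distance(ref, seg))
```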

    Im2mesh: A Python Library to Reconstruct 3D Meshes from Scattered Data and 2D Segmentations, Application to Patient-Specific Neuroblastoma Tumour Image Sequences

    The future of personalised medicine lies in the development of increasingly sophisticated digital twins, where patient-specific data are fed into predictive computational models that support clinicians' decisions on the best therapies or courses of action to treat the patient's afflictions. The development of these personalised models from image data requires segmentation of the geometry of interest, estimation of intermediate or missing slices, reconstruction of the surface, generation of a volumetric mesh, and mapping of the relevant data onto the reconstructed three-dimensional volume. A wide range of tools exists, including both classical and artificial intelligence methodologies, to help overcome the difficulties at each stage, usually relying on a combination of different software packages in a multistep process. In this work, we develop an all-in-one approach wrapped in a Python library called im2mesh that automates the whole workflow, which starts by reading a clinical image and ends by generating a 3D finite element mesh with the interpolated patient data. We apply this workflow to an example of a patient-specific neuroblastoma tumour. The main advantages of our tool are its straightforward use and its easy integration into broader pipelines.
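
    A minimal sketch of the surface-reconstruction stage of such a workflow, using generic scientific Python tools (scikit-image's marching cubes) rather than the im2mesh API itself, whose interface is not reproduced here; the segmentation volume and voxel spacing are placeholders.

```python
import numpy as np
from skimage import measure

# Placeholder input: a binary segmentation volume (slices stacked along axis 0)
# with anisotropic voxel spacing in millimetres.
volume = np.zeros((30, 64, 64), dtype=np.uint8)
zz, yy, xx = np.mgrid[:30, :64, :64]
volume[(zz - 15) ** 2 / 4 + (yy - 32) ** 2 / 16 + (xx - 32) ** 2 / 16 < 25] = 1

spacing = (5.0, 1.0, 1.0)  # coarse slice spacing vs. fine in-plane resolution
verts, faces, normals, values = measure.marching_cubes(
    volume.astype(float), level=0.5, spacing=spacing
)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
# A volumetric (e.g. tetrahedral) mesh for finite element analysis would be
# generated from this surface in a subsequent step, e.g. with gmsh or tetgen.
```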

    Performance of a 3D convolutional neural network in the detection of hypoperfusion at CT pulmonary angiography in patients with chronic pulmonary embolism : a feasibility study

    Background: Chronic pulmonary embolism (CPE) is a life-threatening disease that is easily misdiagnosed on computed tomography. We investigated a three-dimensional convolutional neural network (CNN) algorithm for detecting hypoperfusion in CPE from computed tomography pulmonary angiography (CTPA). Methods: Preoperative CTPA of 25 patients with CPE and 25 without pulmonary embolism were selected. We applied a 48%-12%-40% training-validation-testing split (12 positive and 12 negative CTPA volumes for training, 3 positives and 3 negatives for validation, 10 positives and 10 negatives for testing). The median number of axial images per CTPA was 335 (min-max, 111-570). Expert manual segmentations were used as training and testing targets. The CNN output was compared to a method in which a Hounsfield unit (HU) threshold was used to detect hypoperfusion. Receiver operating characteristic area under the curve (AUC) and Matthews correlation coefficient (MCC) were calculated with their 95% confidence intervals (CI). Results: The predicted segmentations of the CNN showed an AUC of 0.87 (95% CI 0.82-0.91), those of the HU-threshold method 0.79 (95% CI 0.74-0.84). The optimal global threshold values were a CNN output probability >= 0.37 and 

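
    The evaluation setup (an HU-threshold baseline compared with CNN output probabilities via ROC AUC and MCC) can be sketched as below with synthetic per-voxel data; the HU cut-off of -800 is a hypothetical example, and only the 0.37 probability threshold comes from the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, matthews_corrcoef

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-voxel ground-truth hypoperfusion labels, CNN output
# probabilities, and CT attenuation values in Hounsfield units (HU).
labels = rng.integers(0, 2, size=10_000)
cnn_prob = np.clip(labels * 0.4 + rng.normal(0.3, 0.2, labels.size), 0, 1)
hu = np.where(labels == 1, rng.normal(-850, 60, labels.size),
                           rng.normal(-760, 60, labels.size))

# ROC AUC for both detectors (lower HU = more likely hypoperfused, so negate).
print("CNN AUC:", roc_auc_score(labels, cnn_prob))
print("HU  AUC:", roc_auc_score(labels, -hu))

# Binarise at example global thresholds and compute MCC.
print("CNN MCC:", matthews_corrcoef(labels, (cnn_prob >= 0.37).astype(int)))
print("HU  MCC:", matthews_corrcoef(labels, (hu <= -800).astype(int)))  # hypothetical cut-off
```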

    Improved neonatal brain MRI segmentation by interpolation of motion corrupted slices

    BACKGROUND AND PURPOSE: To apply and evaluate an intensity-based interpolation technique enabling segmentation of motion-affected neonatal brain MRI. METHODS: Moderate-late preterm infants were enrolled in a prospective cohort study (Brain Imaging in Moderate-late Preterm infants, "BIMP study") between August 2017 and November 2019. T2-weighted MRI was performed around term-equivalent age on a 3T scanner. Scans without motion (n = 27 [24%], control group) and with moderate-severe motion (n = 33 [29%]) were included. Motion-affected slices were re-estimated using intensity-based shape-preserving cubic spline interpolation and automatically segmented into eight structures. Quality of interpolation and segmentation was visually assessed for errors after interpolation. Reliability was tested using interpolated control group scans (18/54 axial slices). The structural similarity index (SSIM) was used to compare T2-weighted scans, and the SĂžrensen-Dice coefficient was used to compare segmentations before and after interpolation. Finally, volumes of brain structures of the control group were used to assess sensitivity (absolute mean fraction difference) and bias (confidence interval of the mean difference). RESULTS: Visually, segmentation of 25 scans (22%) with motion artifacts improved with interpolation, while segmentation of eight scans (7%) with adjacent motion-affected slices did not improve. Average SSIM was 0.895 and SĂžrensen-Dice coefficients ranged between 0.87 and 0.97. The absolute mean fraction difference was ≀0.17 for five or fewer interpolated slices. Confidence intervals revealed a small bias for cortical gray matter (0.14-3.07 cmÂł), cerebrospinal fluid (0.39-1.65 cmÂł), deep gray matter (0.74-1.01 cmÂł), and brainstem volumes (0.07-0.28 cmÂł), and a negative bias in white matter volumes (-4.47 to -1.65 cmÂł). CONCLUSION: According to qualitative and quantitative assessment, intensity-based interpolation reduced the percentage of discarded scans from 29% to 7%.
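
    A rough sketch of intensity-based, shape-preserving cubic interpolation along the through-slice axis, using SciPy's PCHIP interpolator and scikit-image's SSIM on a synthetic volume; this illustrates the general idea only, not the study's pipeline or segmentation step.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from skimage.metrics import structural_similarity

def reestimate_slice(volume: np.ndarray, bad_idx: int) -> np.ndarray:
    """Re-estimate one motion-corrupted axial slice by shape-preserving cubic
    interpolation of each voxel's intensity profile along the slice axis."""
    z = np.arange(volume.shape[0])
    keep = z != bad_idx
    interp = PchipInterpolator(z[keep], volume[keep], axis=0)
    return interp(bad_idx)

# Synthetic T2-like volume with a smooth through-slice intensity gradient.
rng = np.random.default_rng(1)
vol = np.linspace(0, 1, 20)[:, None, None] * np.ones((20, 64, 64))
vol += rng.normal(0, 0.01, vol.shape)

original = vol[10].copy()
vol[10] = rng.normal(0.5, 0.3, (64, 64))       # simulate a motion-ruined slice
restored = reestimate_slice(vol, 10)

print("SSIM:", structural_similarity(original, restored, data_range=1.0))
```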

    Surface reconstruction using variational interpolation

    Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and stored as "slice" files. Although there are several reasonably efficient triangulation algorithms that reconstruct surfaces from slice data, the models they generate have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process. Moreover, inconsistencies in user input aggravate the problem. We therefore created a method that reduces the inter-slice distance and ignores inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on the approach of weighting smaller implicit functions: it takes only a few slices at a time to construct each implicit function. The method builds on a technique called variational interpolation. Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle such instability problems, there is no apparent improvement in contour-tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both problems using a sliding-window approach. Because it uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
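
    The variational-interpolation core (an implicit function fitted to on-contour and inward-offset constraint points with a thin-plate-spline kernel) can be sketched as follows; this is the generic global technique, not the piecewise weighted, sliding-window scheme described above, and the contours are toy circles.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Constraint points from two parallel contours (circles on slices z=0 and z=1):
# value 0 on the contour and a positive offset value at points moved inward.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)

def contour_constraints(radius: float, z: float, offset: float = 0.3):
    on = np.column_stack([radius * np.cos(theta), radius * np.sin(theta),
                          np.full_like(theta, z)])
    inner = np.column_stack([(radius - offset) * np.cos(theta),
                             (radius - offset) * np.sin(theta),
                             np.full_like(theta, z)])
    pts = np.vstack([on, inner])
    vals = np.concatenate([np.zeros(len(on)), np.full(len(inner), offset)])
    return pts, vals

p0, v0 = contour_constraints(1.0, 0.0)
p1, v1 = contour_constraints(1.5, 1.0)
points, values = np.vstack([p0, p1]), np.concatenate([v0, v1])

# A thin-plate-spline RBF gives the smooth variational interpolant; its zero
# level set is the reconstructed surface.
implicit = RBFInterpolator(points, values, kernel="thin_plate_spline")

# Trace the intermediate contour at z = 0.5 along the +x ray: the zero
# crossing should sit roughly halfway between the two contour radii.
r = np.linspace(0.5, 2.0, 300)
ray = np.column_stack([r, np.zeros_like(r), np.full_like(r, 0.5)])
print("radius at z=0.5:", r[np.argmin(np.abs(implicit(ray)))])
```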

    Integrating images from a moveable tracked display of three-dimensional data

    This paper describes a novel method for displaying data obtained by three-dimensional medical imaging, in which the position and orientation of a freely movable screen are optically tracked and used in real time to select the current slice from the data set for presentation. With this method, which we call a "freely moving in-situ medical image", the screen and imaged data are registered to a common coordinate system in space external to the user, at adjustable scale, and are available for free exploration. The three-dimensional image data occupy empty space, as if an invisible patient were being sliced by the moving screen. A behavioral study using real computed tomography lung vessel data established the superiority of the in-situ display over a control condition with the same free exploration but with the data shown on a fixed screen (ex situ), with respect to accuracy in tracing along a vessel and reporting spatial relations between vessel structures. These measures suggest that a "freely moving in-situ medical image" display promotes spatial navigation and understanding of medical data.
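
    The underlying resampling step (extracting the volume slice that coincides with the tracked screen plane) can be sketched with scipy.ndimage.map_coordinates; the screen pose and volume below are placeholders rather than the paper's tracking setup.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, centre, u_axis, v_axis, size=128, spacing=1.0):
    """Resample the plane through `centre` spanned by the in-plane axes
    `u_axis` and `v_axis`, as given by the tracked screen pose."""
    u = np.asarray(u_axis, float) / np.linalg.norm(u_axis)
    v = np.asarray(v_axis, float) / np.linalg.norm(v_axis)
    grid = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(grid, grid)
    # Volume coordinates of every pixel on the screen plane, sampled with
    # trilinear interpolation.
    coords = (np.asarray(centre, float)[:, None, None]
              + u[:, None, None] * uu + v[:, None, None] * vv)
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Placeholder CT-like volume and an oblique screen pose.
vol = np.random.default_rng(2).normal(size=(128, 128, 128))
screen_centre = (64, 64, 64)
screen_u, screen_v = (1, 0, 0), (0, np.cos(0.3), np.sin(0.3))  # tilted plane
slice_img = extract_slice(vol, screen_centre, screen_u, screen_v)
print(slice_img.shape)  # (128, 128) image to show on the tracked display
```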

    Estimation of Cerebral Physiology and Hemodynamics via Near-Infrared Spectroscopy

    Near-infrared spectroscopy (NIRS) is a non-invasive optical imaging technique that has rapidly been gaining popularity for study of the brain. Near-infrared spectroscopy measures absorption of light, primarily due to hemoglobin, through an array of light sources and detectors that are coupled to the scalp. Measurements can generally be divided into measurements of baseline physiology (related to total absorption) and measurements of hemodynamic time-series data (related to relative absorption changes). Because light intensity drops off rapidly with depth, NIRS measurements are highly sensitive to extracerebral tissues. Attempts to recover baseline physiology measurements of the brain can be confounded by high sensitivity to the scalp and skull. Time-series measurements contain high contributions of systemic physiology signals, including cardiac, respiratory, and blood pressure waves. Furthermore, measurements over time inevitably introduce artifacts due to subject motion. The aim of this thesis was to develop improved analysis methods in the context of these NIRS specific confounding factors. The thesis consists of four articles that address specific issues in NIRS data analysis: (i) assessment of common data analysis procedures used to estimate oxygen saturation and hemoglobin content that assume a semi-infinite, homogeneous medium, (ii) testing the feasibility of improving oxygen saturation and hemoglobin measurements using multi-layered models, (iii) development of methods to estimate the general linear model for functional brain imaging that are robust to systemic physiology signals and motion artifacts, and (iv) the extension of (iii) to an adaptive method that is suitable for real-time analysis. Overall, this thesis helps to validate and advance analysis methods for NIRS
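
    A simplified, ordinary-least-squares sketch of the general linear model referred to in items (iii) and (iv), with a task regressor, a drift term, and a cardiac-band nuisance regressor; the thesis's robust and adaptive estimators are not reproduced here, and all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 5.0, 1500                      # 5 Hz sampling, 300 s of data
t = np.arange(n) / fs

# Design matrix: boxcar task regressor (smoothed as a crude hemodynamic
# response stand-in), a linear drift, a cardiac-band nuisance term, intercept.
task = ((t % 60) < 20).astype(float)
task = np.convolve(task, np.ones(25) / 25, mode="same")
X = np.column_stack([task, t / t[-1], np.sin(2 * np.pi * 1.2 * t), np.ones(n)])

# Synthetic oxyhemoglobin time series: true task effect + nuisance + noise.
y = 0.8 * task + 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.3, n)

# Ordinary least squares estimate of the GLM coefficients.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated task amplitude:", beta[0])
```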

    VRContour: Bringing Contour Delineations of Medical Structures Into Virtual Reality

    Contouring is an indispensable step in radiotherapy (RT) treatment planning. However, today's contouring software is constrained to work with a 2D display, which is less intuitive and imposes high task loads. Virtual Reality (VR) has shown great potential in various specialties of healthcare and health sciences education due to the unique advantages of intuitive and natural interactions in immersive spaces. VR-based radiation oncology integration has also been advocated as a target healthcare application, allowing providers to directly interact with 3D medical structures. We present VRContour and investigate how to effectively bring contouring for radiation oncology into VR. Through an autobiographical iterative design, we defined three design spaces focused on contouring in VR with the support of a tracked tablet and VR stylus, and investigated dimensionality for information consumption and input (either 2D or 2D + 3D). Through a within-subject study (n = 8), we found that visualizations of 3D medical structures significantly increase precision and reduce mental load, frustration, and overall contouring effort. Participants also agreed with the benefits of using such metaphors for learning purposes. (C. Chen, M. Yarmand, V. Singh, M.V. Sherer, J.D. Murphy, Y. Zhang and N. Weibel, "VRContour: Bringing Contour Delineations of Medical Structures Into Virtual Reality", 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2022, pp. 1-10, doi: 10.1109/ISMAR55827.2022.0002)

    Optimal method for fetal brain age prediction using multiplanar slices from structural magnetic resonance imaging

    The accurate prediction of fetal brain age using magnetic resonance imaging (MRI) may contribute to the identification of brain abnormalities and the risk of adverse developmental outcomes. This study aimed to propose a method for predicting fetal brain age using MRIs from 220 healthy fetuses between 15.9 and 38.7 weeks of gestational age (GA). We built a 2D single-channel convolutional neural network (CNN) with multiplanar MRI slices in different orthogonal planes, without correction for interslice motion. In each fetus, multiple age predictions were generated from different slices, and the brain age was obtained as the mode, i.e. the most frequent value among the multiple predictions from the 2D single-channel CNN. We obtained a mean absolute error (MAE) of 0.125 weeks (0.875 days) between GA and brain age across the fetuses. The use of multiplanar slices achieved a significantly lower prediction error, and lower variance, than the use of a single slice or a single MRI stack. Our 2D single-channel CNN with multiplanar slices yielded a significantly lower stack-wise MAE (0.304 weeks) than the 2D multi-channel CNN (MAE = 0.979 

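
    The aggregation step (taking the mode of many slice-wise predictions per fetus) and the MAE against gestational age can be sketched with synthetic numbers; binning predictions to quarter-week values before taking the mode is an assumption made for illustration, not the study's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

ga_weeks = rng.uniform(15.9, 38.7, size=20)          # gestational ages (weeks)

def predict_brain_age(ga: float, n_slices: int = 30) -> float:
    """Stand-in for the per-slice CNN: noisy predictions from multiple
    multiplanar slices, aggregated by the mode (most frequent value)."""
    per_slice = ga + rng.normal(0, 0.5, n_slices)
    binned = np.round(per_slice * 4) / 4              # quarter-week bins (assumed)
    return stats.mode(binned, keepdims=False).mode

brain_age = np.array([predict_brain_age(ga) for ga in ga_weeks])
mae = np.abs(brain_age - ga_weeks).mean()
print(f"MAE: {mae:.3f} weeks")
```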
    • 

    corecore