
    Derivation of a test statistic for emphysema quantification

    Density masking is the de facto quantitative imaging phenotype for emphysema and is widely used by the clinical community. It defines the burden of emphysema by a fixed threshold, usually between -910 HU and -950 HU, that has been experimentally validated against histology. In this work, we formalize emphysema quantification by means of statistical inference. We show that a noncentral Gamma distribution is a good approximation for the local distribution of image intensities in normal and emphysematous tissue. We then propose a test statistic based on the sample mean of a truncated noncentral Gamma random variable. Our results show that this approach is well suited for the detection of emphysema and superior to standard density masking. The statistical method was tested on a dataset of 1337 samples obtained from 9 different scanner models in subjects with COPD. Results showed an improvement of 17% over the density masking approach and an overall accuracy of 94.09%.
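
    The following is a minimal, illustrative sketch of the idea behind such a statistic, not the paper's derivation: the noncentral Gamma is stood in for by a location-shifted Gamma, the -950 HU truncation point and all synthetic patch parameters are invented for illustration, and the full inference procedure (null distribution, critical values) is omitted.

```python
"""Illustrative comparison of density masking with a truncated-mean statistic.

Assumptions (not from the paper): the noncentral Gamma is modelled as a
location-shifted Gamma; the -950 HU truncation point and all synthetic patch
parameters are invented for illustration.
"""
import numpy as np
from scipy import stats

TRUNC_HU = -950.0   # classic density-masking threshold

def density_mask_score(patch_hu):
    """Standard density masking: fraction of voxels below the fixed threshold."""
    return np.mean(patch_hu < TRUNC_HU)

def truncated_mean_statistic(patch_hu, trunc_hu=TRUNC_HU):
    """Sample mean of the intensities that fall below the truncation point."""
    truncated = patch_hu[patch_hu < trunc_hu]
    return truncated.mean() if truncated.size else np.nan

def fit_shifted_gamma(patch_hu):
    """Fit a location-shifted Gamma (a stand-in for the noncentral Gamma model)."""
    shape, loc, scale = stats.gamma.fit(patch_hu + 1024.0)   # shift HU to positive values
    return shape, loc - 1024.0, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "normal" patch around -860 HU and "emphysema" patch around -960 HU.
    normal = rng.gamma(shape=40.0, scale=4.0, size=2000) - 1020.0
    emphysema = rng.gamma(shape=10.0, scale=5.0, size=2000) - 1010.0
    for name, patch in [("normal", normal), ("emphysema", emphysema)]:
        print(f"{name:10s} density mask = {density_mask_score(patch):.2f}  "
              f"truncated mean = {truncated_mean_statistic(patch):.1f} HU  "
              f"fitted Gamma = {fit_shifted_gamma(patch)}")
```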

    A clinical method for mapping and quantifying blood stasis in the left ventricle

    In patients at risk of intraventricular thrombosis, the benefits of chronic anticoagulation therapy need to be balanced against its pro-hemorrhagic effects. Blood stasis in the cardiac chambers is a recognized risk factor for intracardiac thrombosis and potential cardiogenic embolic events. In this work, we present a novel flow-image-based method to assess the location and extent of intraventricular stasis regions inside the left ventricle (LV) by digital processing of flow-velocity images obtained either by phase-contrast magnetic resonance (PCMR) or 2D color-Doppler velocimetry (echo-CDV). The approach is based on quantifying the distribution of the blood residence time (TR) from time-resolved blood velocity fields in the LV. We tested the new method in illustrative examples of normal hearts, patients with dilated cardiomyopathy, and one patient before and after implantation of a left ventricular assist device (LVAD). The method allowed us to assess in vivo the location and extent of the stasis regions in the LV. Original metrics were developed to integrate flow properties into simple scalars suitable for a robust and personalized assessment of the risk of thrombosis. From a clinical perspective, this work introduces the new paradigm that quantitative flow dynamics can provide the basis for subclinical markers of intraventricular thrombosis risk. The early prediction of LV blood stasis may result in fewer strokes through appropriate use of anticoagulant therapy for primary and secondary prevention. It may also have a significant impact on LVAD design and operation set-up.
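
    As a rough illustration of the residence-time idea only, the sketch below integrates the advection equation dTR/dt + v·∇TR = 1 on a synthetic 2D velocity field with a first-order upwind scheme; the numerical scheme, grid, time step, and swirling flow are assumptions for illustration and not the paper's implementation or data.

```python
"""Minimal 2D sketch of residence-time (TR) mapping from a velocity field."""
import numpy as np

def advance_residence_time(tr, vx, vy, dx, dy, dt):
    """One upwind step of dTR/dt + vx*dTR/dx + vy*dTR/dy = 1."""
    # Upwind spatial differences chosen by the local sign of the velocity.
    dtr_dx = np.where(vx > 0,
                      (tr - np.roll(tr, 1, axis=1)) / dx,
                      (np.roll(tr, -1, axis=1) - tr) / dx)
    dtr_dy = np.where(vy > 0,
                      (tr - np.roll(tr, 1, axis=0)) / dy,
                      (np.roll(tr, -1, axis=0) - tr) / dy)
    return tr + dt * (1.0 - vx * dtr_dx - vy * dtr_dy)

if __name__ == "__main__":
    n, dx, dt = 64, 1.0, 0.1
    y, x = np.mgrid[0:n, 0:n] * dx
    cx = cy = n * dx / 2
    # Synthetic swirling flow with a near-stagnant core (candidate stasis region).
    vx, vy = -(y - cy) * 0.02, (x - cx) * 0.02
    tr = np.zeros((n, n))
    for _ in range(2000):                                  # integrate over several "cycles"
        tr = advance_residence_time(tr, vx, vy, dx, dx, dt)
        tr[0, :] = tr[-1, :] = tr[:, 0] = tr[:, -1] = 0.0  # fresh blood at the boundary
    print("max residence time (stagnant core):", tr.max())
```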

    Detection and classification of pulmonary emphysema in CT images using multiscale convolutional neural networks

    In this work we propose and validate a tool for recognizing patterns of pulmonary emphysema, the main phenotype of chronic obstructive pulmonary disease (COPD), in CT images. The proposed method is based on a multiscale convolutional neural network (CNN) designed for the detection and classification of 6 classes of lung tissue, including 5 emphysema patterns and normal tissue. The network consists of 4 convolutional layers and 3 subsampling layers, and its input is a multiscale representation of the lung image patch to be classified. The method was trained and validated on a dataset of 1337 samples from 267 lung CT scans.
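
    A hedged sketch of such a multiscale CNN is shown below; the layer widths, the 32x32 patch size, and the stacking of scales as input channels are illustrative assumptions rather than the architecture reported in the work.

```python
"""Sketch of a multiscale CNN for 6-class emphysema patch classification."""
import torch
import torch.nn as nn

class MultiscaleEmphysemaCNN(nn.Module):
    def __init__(self, n_scales=3, n_classes=6):
        super().__init__()
        # 4 convolutional layers interleaved with 3 subsampling (max-pool) layers.
        self.features = nn.Sequential(
            nn.Conv2d(n_scales, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)  # for 32x32 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    # One batch of 8 patches, 3 scales stacked as channels, 32x32 pixels.
    patches = torch.randn(8, 3, 32, 32)
    logits = MultiscaleEmphysemaCNN()(patches)
    print(logits.shape)  # torch.Size([8, 6])
```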

    Harmonization of chest CT scans for different doses and reconstruction methods

    Purpose: To develop and validate a computed tomography (CT) harmonization technique combining noise-stabilization and autocalibration methodologies to provide reliable densitometry measurements across heterogeneous acquisition protocols. Methods: We propose to reduce the effects of spatially variant noise, such as nonuniform noise patterns and biases. The method combines a statistical characterization of the signal-to-noise relationship in the CT image intensities, which allows us to estimate both the signal and the spatially variant variance of noise, with an autocalibration technique that reduces the nonuniform biases caused by noise and reconstruction techniques. The method is first validated with anthropomorphic synthetic images that simulate CT acquisitions with variable scanning parameters: different doses, nonhomogeneous noise variance, and various reconstruction methods. Finally, we evaluate these effects and the ability of our method to provide consistent densitometric measurements in a cohort of clinical chest CT scans from two vendors (Siemens, n = 54 subjects; GE, n = 50 subjects) acquired with several reconstruction algorithms (filtered back-projection and iterative reconstruction) under high-dose and low-dose protocols. Results: The harmonization reduces the effect of nonhomogeneous noise without compromising image resolution (25% RMSE reduction in both clinical datasets). An analysis with hierarchical linear models showed that the average biases induced by differences in dose and reconstruction method are reduced by up to 74.20%, enabling comparable results between high-dose and low-dose reconstructions. We also assessed the statistical similarity between acquisitions, obtaining increases of up to 30 percentage points and showing that low-dose vs high-dose comparisons of harmonized data reach similar or even higher similarity than that observed for high-dose vs high-dose comparisons of nonharmonized data. Conclusion: The proposed harmonization technique allows measurements from low-dose acquisitions to be compared with those from high-dose acquisitions without using a specific reconstruction as a reference. Since the harmonization does not require precalibration with a phantom, it can be applied to retrospective studies. This approach may be suitable for multicenter trials in which a reference reconstruction is not feasible or is hard to define due to differences in vendors, models, and reconstruction techniques.
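
    As an illustration of the autocalibration idea only (the noise-stabilization stage of the paper is not reproduced), the sketch below rescales HU values with a linear map anchored on two reference regions; the reference values, measured offsets, and synthetic scan are assumptions made for the example.

```python
"""Minimal autocalibration sketch for CT densitometry harmonization."""
import numpy as np

# Nominal HU values assumed for two reference tissues (air and blood).
NOMINAL = {"air": -1000.0, "blood": 50.0}

def fit_autocalibration(measured_air_hu, measured_blood_hu):
    """Return (slope, intercept) mapping measured HU to nominal HU."""
    x = np.array([measured_air_hu, measured_blood_hu])
    y = np.array([NOMINAL["air"], NOMINAL["blood"]])
    slope = (y[1] - y[0]) / (x[1] - x[0])
    intercept = y[0] - slope * x[0]
    return slope, intercept

def harmonize(volume_hu, slope, intercept):
    """Apply the linear recalibration to a whole volume."""
    return slope * volume_hu + intercept

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic low-dose scan with a global bias and scaling error.
    scan = rng.normal(-870.0, 40.0, size=(4, 64, 64)) * 0.95 - 20.0
    slope, intercept = fit_autocalibration(measured_air_hu=-970.0, measured_blood_hu=25.0)
    print("emphysema score before:", np.mean(scan < -950.0))
    print("emphysema score after :", np.mean(harmonize(scan, slope, intercept) < -950.0))
```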

    Statistical characterization of noise for spatial standardization of CT scans: Enabling comparison with multiple kernels and doses

    Computed tomography (CT) is a widely adopted modality for directly or indirectly analyzing functional, biological, and morphological processes through image characteristics. However, the potential use of the information obtained from CT images is often limited when quantitative analyses involve different devices, acquisition protocols, or reconstruction algorithms. Although CT scanners are calibrated as part of the imaging workflow, the calibration is circumscribed to global reference values and does not circumvent problems inherent to the imaging modality. One of these is the lack of noise stationarity, which makes quantitative biomarkers extracted from the images less robust and stable. Some methodologies have been proposed for the assessment of non-stationary noise in reconstructed CT scans. However, those methods focus only on the non-stationarity due to the reconstruction geometry and are mainly based on propagating the variance of noise through the whole reconstruction process. Additionally, the philosophy followed by state-of-the-art methods is noise reduction rather than noise standardization. This means that, even if the noise is reduced, the statistics of the signal remain non-stationary, which is insufficient to enable comparisons between acquisitions with different statistical characteristics. In this work, we propose a statistical characterization of noise in reconstructed CT scans that leads to a versatile statistical model effectively characterizing different doses, reconstruction kernels, and devices. The statistical model is generalized to deal with the partial volume effect via a localized mixture model that also describes the non-stationarity of noise. Finally, we propose a stabilization scheme to achieve stationary variance. The proposed methodology was validated with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, and algorithms, including iterative reconstruction). The results confirmed its suitability to enable comparisons across different doses and acquisition protocols.
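
    A minimal sketch of a variance-stabilization step is given below, assuming the local signal can be approximated by Gaussian smoothing and the local noise variance by smoothed residuals; this is a simplification of the paper's statistical model and mixture handling, and the synthetic noise ramp is invented for the example.

```python
"""Sketch of a variance-stabilization step for spatially variant CT noise."""
import numpy as np
from scipy.ndimage import gaussian_filter

def stabilize_variance(image, sigma=3.0, eps=1e-6):
    """Return (stabilized_image, local_std) with approximately stationary noise variance."""
    local_mean = gaussian_filter(image, sigma)          # crude signal estimate
    residual = image - local_mean                       # crude noise estimate
    local_var = gaussian_filter(residual ** 2, sigma)   # spatially variant variance
    local_std = np.sqrt(np.maximum(local_var, eps))
    return local_mean + residual / local_std, local_std

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic slice whose noise level grows from left to right (non-stationary).
    signal = np.full((128, 128), -850.0)
    noise_std = np.linspace(10.0, 60.0, 128)[None, :]
    noisy = signal + rng.normal(size=signal.shape) * noise_std
    stabilized, _ = stabilize_variance(noisy)
    print("std left/right before:", noisy[:, :32].std(), noisy[:, -32:].std())
    print("std left/right after :", stabilized[:, :32].std(), stabilized[:, -32:].std())
```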

    A SR-Net 3D-to-2D architecture for paraseptal emphysema segmentation

    Paraseptal emphysema (PSE) is a relatively unexplored emphysema subtype that is usually asymptomatic but has recently been associated with interstitial lung abnormalities, which in turn are related to clinical outcomes, including mortality. Previous local methods for emphysema subtype quantification do not properly characterize PSE, in part because of their inability to capture the global aspect of the disease, as some PSE lesions can involve large regions along the chest wall. Our assumption is that patch-based approaches are not well suited to identifying this subtype and that segmentation is a better paradigm. In this work we introduce the Slice-Recovery network (SR-Net), which leverages 3D contextual information for 2D segmentation of PSE lesions in CT images. To this end, a novel convolutional network architecture is presented that follows an encoding-decoding path processing a 3D volume to generate a 2D segmentation map. The dataset used for training and testing the method comprised 664 images from 111 CT scans. The results demonstrate the benefit of incorporating 3D contextual information into the network and the ability of the proposed method to identify and segment PSE lesions of different sizes, even in the presence of other emphysema subtypes at an advanced stage.
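
    The sketch below illustrates a generic 3D-to-2D encoder-decoder of this kind in PyTorch; the number of input slices, channel widths, and the depth-collapsing convolution are assumptions for illustration and not the exact SR-Net architecture.

```python
"""Sketch of a 3D-to-2D encoder-decoder for PSE segmentation (not the paper's exact SR-Net)."""
import torch
import torch.nn as nn

class SliceRecoveryNet(nn.Module):
    def __init__(self, depth=5):
        super().__init__()
        self.encoder3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # downsample in-plane only
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.collapse = nn.Conv3d(32, 32, kernel_size=(depth, 1, 1))  # 3D -> 2D
        self.decoder2d = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),              # per-pixel PSE logit
        )

    def forward(self, x):                 # x: (B, 1, depth, H, W)
        x = self.encoder3d(x)
        x = self.collapse(x).squeeze(2)   # collapse the depth axis
        return self.decoder2d(x)          # (B, 1, H, W)

if __name__ == "__main__":
    volume = torch.randn(2, 1, 5, 64, 64)    # two 5-slice stacks of 64x64 pixels
    print(SliceRecoveryNet()(volume).shape)  # torch.Size([2, 1, 64, 64])
```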

    Deep-learning strategy for pulmonary artery-vein classification of non-contrast CT images

    Artery-vein classification on pulmonary computed tomography (CT) images is of increasing interest to the scientific community due to the prevalence of pulmonary vascular disease, which affects arteries and veins through different mechanisms. In this work, we present a novel approach to automatically segment and classify vessels from chest CT images. We use a scale-space particle segmentation to isolate vessels and combine a convolutional neural network (CNN) with graph-cuts (GC) to classify the individual particles. Information about the proximity of arteries to airways is learned by the network by means of a bronchus-enhanced image. The methodology is evaluated on the superior and inferior lobes of the right lung of twenty clinical cases and compared with manual classification and a Random Forest (RF) classifier. The algorithm achieves an overall accuracy of 87% against the manual reference, higher than the 73% accuracy achieved by the RF classifier.

    A graph-cut approach for pulmonary artery-vein segmentation in noncontrast CT images

    Lung vessel segmentation has been widely explored by the biomedical image processing community; however, the differentiation of arterial from venous irrigation is still a challenge. Pulmonary artery–vein (AV) segmentation using computed tomography (CT) is growing in importance owing to its utility in multiple cardiopulmonary pathological states, especially those involving vascular remodelling, as it allows the study of both flow systems separately. We present a new framework for separating tree-like structures using local information and a specifically designed graph-cut methodology that ensures connectivity as well as the spatial and directional consistency of the derived subtrees. This framework has been applied to pulmonary AV classification using a random forest (RF) pre-classifier that exploits the local anatomical differences between arteries and veins. The system was evaluated using 192 bronchopulmonary segment phantoms, 48 anthropomorphic pulmonary CT phantoms, and 26 lungs from noncontrast CT images with precise voxel-based reference standards obtained by manually labelling the vessel trees. The experiments reveal a relevant improvement in the accuracy (∼20%) of vessel particle classification with the proposed framework with respect to using only the pre-classification based on local information applied to the whole lung region under study. The results demonstrate accurate differentiation between arteries and veins in both clinical and synthetic cases, specifically when the image quality can guarantee a good airway segmentation, which opens a wide range of possibilities in the clinical study of cardiopulmonary diseases.
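
    A toy sketch of an s-t minimum-cut labelling of vessel particles is shown below, with RF artery probabilities as unary capacities and a fixed smoothness weight between neighbouring particles; the graph construction, weights, and synthetic particles are a generic stand-in, not the paper's specifically designed formulation.

```python
"""Sketch of artery-vein labelling of vessel particles via an s-t minimum cut."""
import numpy as np
import networkx as nx

def artery_vein_cut(positions, p_artery, radius=2.0, smoothness=1.5, eps=1e-6):
    """Return a boolean array (True = artery) from an s-t minimum cut."""
    g = nx.DiGraph()
    n = len(positions)
    for i in range(n):
        # Unary terms: negative log-probabilities as capacities to the terminals.
        g.add_edge("artery", i, capacity=-np.log(1.0 - p_artery[i] + eps))
        g.add_edge(i, "vein", capacity=-np.log(p_artery[i] + eps))
    for i in range(n):
        for j in range(i + 1, n):
            # Pairwise terms: neighbouring particles should share a label.
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                g.add_edge(i, j, capacity=smoothness)
                g.add_edge(j, i, capacity=smoothness)
    _, (artery_side, _) = nx.minimum_cut(g, "artery", "vein")
    return np.array([i in artery_side for i in range(n)])

if __name__ == "__main__":
    # Two separated particle "branches"; one artery particle has a noisy RF probability.
    positions = np.vstack([np.c_[np.arange(5), np.zeros(5)],       # artery branch (y = 0)
                           np.c_[np.arange(5), np.full(5, 5.0)]])  # vein branch (y = 5)
    p_artery = np.r_[0.8, 0.9, 0.3, 0.85, 0.9,
                     0.1, 0.2, 0.15, 0.1, 0.2]
    print(artery_vein_cut(positions, p_artery).astype(int))
```

    In this toy example the smoothness term overrides the single mislabelled RF probability, so the whole first branch is labelled artery and the second vein, mimicking how the graph-cut enforces consistency along each subtree.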

    Integration of a surface scanner with an electromagnetic tracking system for guidance in breast cancer surgery

    The current procedure for breast cancer surgery in most cases involves the use of a needle-wire localization system as a guidance tool during the intervention. Although its use is widespread, it has certain disadvantages, such as increased cost and time, patient discomfort, and possible aesthetic consequences. Several alternatives therefore exist, among them the use of preoperative images as a support tool during surgery. However, these images are acquired with the patient in positions different from the one found during the intervention, which hinders their interpretation. In this work we present a procedure for acquiring the patient's surface during surgery with a structured-light scanner combined with an electromagnetic tracking system for tumor localization. The information provided by both techniques is fused through registration, using markers specifically designed to be locatable by both systems. After an accuracy evaluation, the results demonstrate the feasibility of the procedure, with errors below 1 mm.
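
    A minimal sketch of the marker-based fusion step is given below, assuming a standard least-squares rigid (Kabsch/SVD) registration between marker positions measured in the surface-scanner and electromagnetic-tracker frames; the marker coordinates are synthetic and the paper's specific marker design is not modelled.

```python
"""Sketch of marker-based rigid registration between a surface scanner and an EM tracker."""
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    markers_scanner = rng.uniform(-50.0, 50.0, size=(4, 3))       # mm, surface-scanner frame
    angle = np.deg2rad(20.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    markers_em = markers_scanner @ R_true.T + np.array([10.0, -5.0, 30.0])
    markers_em += rng.normal(scale=0.2, size=markers_em.shape)    # localisation noise
    R, t = rigid_register(markers_scanner, markers_em)
    fre = np.linalg.norm(markers_scanner @ R.T + t - markers_em, axis=1)
    print("fiducial registration error (mm):", fre.round(3))
```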