
    Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions

    Heavy smokers undergoing screening with low-dose chest CT are affected by cardiovascular disease as much as by lung cancer. Low-dose chest CT scans acquired in screening enable quantification of atherosclerotic calcifications and thus identification of subjects at increased cardiovascular risk. This paper presents a method for automatic detection of coronary artery, thoracic aorta and cardiac valve calcifications in low-dose chest CT using two consecutive convolutional neural networks. The first network identifies and labels potential calcifications according to their anatomical location, and the second network identifies true calcifications among the detected candidates. The method was trained and evaluated on a set of 1744 CT scans from the National Lung Screening Trial. To determine whether any reconstruction, or only images reconstructed with soft tissue filters, can be used for calcification detection, we evaluated the method on soft and medium/sharp filter reconstructions separately. On soft filter reconstructions, the method achieved F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta, aortic valve and mitral valve calcifications, respectively. On sharp filter reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively. Linearly weighted kappa coefficients for risk category assignment based on per-subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter reconstructions, respectively. These results demonstrate that the presented method enables reliable automatic cardiovascular risk assessment in all low-dose chest CT scans acquired for lung cancer screening.
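
    The two-stage design translates into a compact sketch. Below is a minimal, hedged example (not the authors' code; the framework, patch size, channel counts and dilation factors are illustrative assumptions) of a dilated-convolution patch classifier in PyTorch, instantiated once for anatomical labelling of candidate calcifications and once for separating true calcifications from false positives.

        # Minimal sketch of a dilated-convolution patch classifier; all hyperparameters
        # are assumptions for illustration, not the published architecture.
        import torch
        import torch.nn as nn

        class DilatedCandidateNet(nn.Module):
            """Classifies a 2D CT patch centred on a candidate calcification."""
            def __init__(self, n_classes: int):
                super().__init__()
                layers, in_ch = [], 1
                # Stack 3x3 convolutions with growing dilation to enlarge the receptive
                # field without pooling -- the core idea of dilated convolutions.
                for dilation in (1, 2, 4, 8):
                    layers += [
                        nn.Conv2d(in_ch, 32, kernel_size=3,
                                  dilation=dilation, padding=dilation),
                        nn.BatchNorm2d(32),
                        nn.ReLU(inplace=True),
                    ]
                    in_ch = 32
                self.features = nn.Sequential(*layers)
                self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                          nn.Linear(32, n_classes))

            def forward(self, x):
                return self.head(self.features(x))

        # Stage 1: label candidates by anatomical location
        # (coronary artery / thoracic aorta / aortic valve / mitral valve / background).
        stage1 = DilatedCandidateNet(n_classes=5)
        # Stage 2: decide true calcification vs. false positive among detected candidates.
        stage2 = DilatedCandidateNet(n_classes=2)

        patches = torch.randn(8, 1, 65, 65)   # batch of HU patches; 65x65 is an assumed size
        anatomical_logits = stage1(patches)
        true_vs_false_logits = stage2(patches)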

    Imaging Biomarkers for Carotid Artery Atherosclerosis


    Automated Vascular Smooth Muscle Segmentation, Reconstruction, Classification and Simulation on Whole-Slide Histology

    Histology of the microvasculature depicts detailed characteristics relevant to tissue perfusion. One important histologic feature is the smooth muscle component of the microvessel wall, which is responsible for controlling vessel caliber. Abnormalities can cause disease and organ failure, as seen in hypertensive retinopathy, diabetic ischemia, Alzheimer's disease and improper cardiovascular development. However, assessments of smooth muscle cell content are conventionally performed on selected fields of view on 2D sections, which may lead to measurement bias. We have developed a software platform for automated (1) 3D vascular reconstruction, (2) detection and segmentation of muscularized microvessels, (3) classification of vascular subtypes, and (4) simulation of function through blood flow modeling. Vessels were stained for α-actin using 3,3'-diaminobenzidine, assessing both normal (n=9 mice) and regenerated vasculature (n=5 at day 14, n=4 at day 28). 2D locally adaptive segmentation involved vessel detection, skeletonization, and fragment connection. 3D reconstruction was performed using our novel nucleus landmark-based registration. Arterioles and venules were categorized using supervised machine learning based on texture and morphometry. Simulation of blood flow for the normal and regenerated vasculature was performed at baseline and during demand, based on the structural measures obtained from the above tools. Vessel medial area and vessel wall thickness were found to be greater in the normal vasculature than in the regenerated vasculature (p<0.001), and a higher density of arterioles was found in the regenerated tissue (p<0.05). Validation showed a Dice coefficient of 0.88 (compared to manual segmentation), a 3D reconstruction target registration error of 4 μm, and an area under the receiver operating characteristic curve of 0.89 for vessel classification. We found 89% and 67% decreases in blood flow through the network for the regenerated vasculature during increased oxygen demand as compared to the normal vasculature, at 14 and 28 days post-ischemia respectively. We developed a software platform for automated vasculature histology analysis involving 3D reconstruction, segmentation, and arteriole vs. venule classification. This work advanced knowledge of how conventional histology sampling compares with whole-slide analysis, of the morphological and density differences in the regenerated vasculature, and of the effect of these differences on blood flow and function.
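
    As an illustration of the segmentation and validation steps described above, the following is a minimal sketch in Python (not the authors' pipeline; the use of scikit-image, the window size, and the small-object cutoff are assumptions): a locally adaptive threshold segments the stained vessel walls, the mask is skeletonized for fragment connection, and a Dice coefficient compares the result to a manual segmentation.

        # Minimal sketch of locally adaptive vessel segmentation and Dice validation.
        import numpy as np
        from skimage.filters import threshold_local
        from skimage.morphology import remove_small_objects, skeletonize

        def segment_vessels(stain_image, window=51):
            """Locally adaptive segmentation of alpha-actin-positive vessel walls.

            Expects a single-channel (grayscale or stain-deconvolved) image.
            """
            local_thresh = threshold_local(stain_image, block_size=window)
            mask = stain_image > local_thresh   # dark DAB staining may require inversion
            return remove_small_objects(mask, min_size=64)

        def vessel_skeleton(mask):
            """One-pixel-wide centrelines used for fragment connection."""
            return skeletonize(mask)

        def dice(pred, truth):
            """Dice coefficient between automated and manual segmentations."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            denom = pred.sum() + truth.sum()
            return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0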

    Fetal whole-heart 4D imaging using motion-corrected multi-planar real-time MRI

    Purpose: To develop an MRI acquisition and reconstruction framework for volumetric cine visualisation of the fetal heart and great vessels in the presence of maternal and fetal motion. Methods: Four-dimensional depiction was achieved using a highly-accelerated multi-planar real-time balanced steady state free precession acquisition combined with retrospective image-domain techniques for motion correction, cardiac synchronisation and outlier rejection. The framework was evaluated and optimised using a numerical phantom, and evaluated in a study of 20 mid- to late-gestational age human fetal subjects. Reconstructed cine volumes were evaluated by experienced cardiologists and compared with matched ultrasound. A preliminary assessment of flow-sensitive reconstruction using the velocity information encoded in the phase of dynamic images is included. Results: Reconstructed cine volumes could be visualised in any 2D plane without the need for highly specific scan plane prescription prior to acquisition or for maternal breath hold to minimise motion. Reconstruction was fully automated aside from user-specified masks of the fetal heart and chest. The framework proved robust when applied to fetal data, and simulations confirmed that spatial and temporal features could be reliably recovered. Expert evaluation suggested the reconstructed volumes can be used for comprehensive assessment of the fetal heart, either as an adjunct to ultrasound or in combination with other MRI techniques. Conclusion: The proposed methods show promise as a framework for motion-compensated 4D assessment of the fetal heart and great vessels.
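
    One of the retrospective image-domain steps named above, cardiac synchronisation, can be sketched simply: real-time frames with known acquisition times are binned into cardiac phases given an estimated fetal R-R interval. The sketch below is an assumption about how such binning could look, not the published implementation; the frame spacing, heart rate, and number of phase bins are illustrative.

        # Minimal sketch of retrospective cardiac-phase binning (illustrative only).
        import numpy as np

        def bin_frames_by_cardiac_phase(frame_times_s, rr_interval_s, n_phases=25):
            """Return, for each frame, the index of the cardiac phase bin it falls into."""
            phase = np.mod(frame_times_s, rr_interval_s) / rr_interval_s  # 0..1 within cycle
            return np.minimum((phase * n_phases).astype(int), n_phases - 1)

        # Example: frames every 70 ms, fetal heart rate ~145 bpm (R-R ~0.41 s).
        times = np.arange(0.0, 10.0, 0.070)
        bins = bin_frames_by_cardiac_phase(times, rr_interval_s=60.0 / 145.0)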

    Computer-aided Analysis and Interpretation of HRCT Images of the Lung


    Coronary Artery Calcium Quantification in Contrast-enhanced Computed Tomography Angiography

    Coronary arteries are the blood vessels supplying oxygen-rich blood to the heart muscle. Coronary artery calcium (CAC), the total amount of calcium deposited in these arteries, indicates the presence or the future risk of coronary artery disease. CAC is quantified using a computed tomography (CT) scan, which uses the attenuation of x-rays by different tissues in the body to generate three-dimensional images. Calcium can be easily spotted in CT images because of its higher opacity to x-rays compared with that of the surrounding tissue. However, the arteries themselves cannot be identified easily in CT images. Therefore, a second scan is done after injecting the patient with an x-ray-opaque dye known as contrast material, which makes the chambers of the heart and the coronary arteries visible in the CT scan. This procedure, known as computed tomography angiography (CTA), is performed to assess the morphology of the arteries in order to rule out any blockage. The CT scan done without contrast material (non-contrast-enhanced CT) could be eliminated if calcium could be quantified accurately from the CTA images. However, identification of calcium in CTA images is difficult because of the proximity of the calcium and the contrast material and their overlapping intensity ranges. In this dissertation, we first compare calcium quantification by a state-of-the-art non-contrast-enhanced CT method with conventional methods, suggesting optimal quantification parameters. We then develop methods to accurately quantify calcium from the CTA images. The methods include novel algorithms for extracting the centerline of an artery, calculating the calcium threshold adaptively based on the intensity of contrast along the artery, calculating the amount of calcium in the mixed intensity range, and segmenting the artery and the outer wall. The accuracy of calcium quantification from CTA using our methods is higher than that from non-contrast-enhanced CT, thus potentially eliminating the need for the non-contrast-enhanced scan. The implication is that both the total time required for the CT procedure and the patient's exposure to x-ray radiation are reduced.
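
    The adaptive thresholding idea can be illustrated with a short sketch (an illustration, not the dissertation's algorithm; the window length and the offset in standard deviations are assumed parameters): the calcium threshold at each centerline position is derived from the local contrast-enhanced lumen intensity rather than the fixed 130 HU threshold used on non-contrast CT.

        # Minimal sketch of an adaptive calcium threshold along an artery centerline.
        import numpy as np

        def adaptive_calcium_threshold(lumen_hu, window=11, n_std=3.0):
            """Per-position HU threshold above which a voxel is treated as calcium."""
            half, n = window // 2, len(lumen_hu)
            thresholds = np.empty(n)
            for i in range(n):
                local = lumen_hu[max(0, i - half): i + half + 1]
                thresholds[i] = local.mean() + n_std * local.std()
            return thresholds

        # Example: synthetic lumen intensities around 350 HU sampled along a centerline.
        lumen = 350.0 + 25.0 * np.random.randn(200)
        thresholds = adaptive_calcium_threshold(lumen)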

    Testing SPECT Motion Correction Algorithms

    Frequently, testing of Single Photon Emission Computed Tomography (SPECT) motion correction algorithms is done either by using simplistic deformations that do not accurately simulate true patient motion or by applying the algorithms directly to data acquired from a real patient, where the true internal motion is unknown. In this work, we describe a way to combine these two approaches by using imaging data acquired from real volunteers to simulate the data that the motion correction algorithms would normally observe. The goal is to provide an assessment framework that can both simulate realistic SPECT acquisitions incorporating realistic body deformations and provide a ground-truth volume to compare against. Every part of the motion correction algorithm needs to be exercised, from parameter estimation of the motion model to the final reconstruction results. To build the ground-truth anthropomorphic numerical phantoms, we acquire high-resolution MRI scans and motion observation data of a volunteer in multiple configurations. We then extract the organ boundaries using thresholding, active contours, and morphology. Phantoms of radioactivity uptake and density inside the body can be generated from these boundaries and used to simulate SPECT acquisitions. We present results on extraction of the ribs, lungs, heart, spine, and the remaining soft tissue in the thorax using our segmentation approach. In general, extracting the lungs, heart, and ribs in images that do not contain the spine works well, but the spine could be better extracted using other methods that we discuss. We also go in depth into the software development component of this work, describing the C++ coding framework we used and the High Level Interactive GUI Language (HLING). HLING solved many problems but introduced some of its own. We include a set of requirements to provide a foundation for the next attempt at developing a declarative and minimally restrictive methodology for writing interactive image processing applications in C++, based on lessons learned during the development of HLING.
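
    As an illustration of the organ-boundary extraction described above (thresholding, active contours, and morphology), here is a minimal sketch; the intensity threshold, iteration counts, and the choice of a morphological Chan-Vese contour from scikit-image are assumptions, not the thesis implementation.

        # Minimal sketch: rough threshold + morphology, refined by an active contour.
        import numpy as np
        from scipy import ndimage
        from skimage.segmentation import morphological_chan_vese

        def extract_organ_mask(mri_slice, intensity_threshold):
            """Binary mask of one organ (e.g. lung or heart) from a single MRI slice."""
            rough = mri_slice > intensity_threshold
            rough = ndimage.binary_opening(rough, iterations=2)   # remove speckle
            rough = ndimage.binary_fill_holes(rough)              # close interior gaps
            # Refine the boundary with a morphological Chan-Vese active contour,
            # initialised from the rough threshold mask.
            refined = morphological_chan_vese(mri_slice.astype(float), 50,
                                              init_level_set=rough.astype(np.int8))
            return refined.astype(bool)

    Stacking such per-slice masks across the volume would give the organ regions from which the activity and density phantoms are then generated.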