
    Threshold Selection Criteria for Quantification of Lumbosacral Cerebrospinal Fluid and Root Volumes from MRI

    BACKGROUND AND PURPOSE: The high variability of CSF volumes partly explains the inconsistency of anesthetic effects, but may also be due to the image analysis itself. In this study, criteria for threshold selection are anatomically defined. METHODS: T2 MR images (n = 7 cases) were analyzed using 3-dimensional software. Maximal-minimal thresholds were selected in standardized blocks of 50 slices of the dural sac ending caudally at the L5-S1 intervertebral space (caudal blocks) and at middle L3 (rostral blocks). Maximal CSF thresholds: the threshold value was increased until at least one voxel in a CSF area appeared unlabeled, then decreased until that voxel was labeled again; this final threshold was selected. Minimal root thresholds: threshold values that selected the cauda equina root area but not adjacent gray voxels at the CSF-root interface were chosen. RESULTS: Significant differences were found between caudal and rostral thresholds. No significant differences were found between expert and nonexpert observers. Average max/min threshold ratios were around 1.30, but max/min CSF volume ratios were around 1.15. Great interindividual CSF volume variability was detected (max/min volumes 1.6-2.7). CONCLUSIONS: The estimation of a close range of CSF volumes that probably contains the real CSF volume can be standardized and calculated prior to certain intrathecal procedures.
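The maximal-threshold rule in the METHODS section amounts to stepping the threshold up until the dimmest voxel of a known CSF region drops out of the label, then stepping back down until it is recovered. A minimal sketch of that rule (Python/NumPy; the function name, the step size, and the assumptions that CSF is bright on T2 and pre-delineated by a mask are illustrative, not taken from the paper):

```python
import numpy as np

def maximal_csf_threshold(image, csf_mask, step=1):
    """Sketch of the maximal-CSF-threshold rule described in the abstract.

    A voxel is "labeled" as CSF when its intensity is >= the threshold
    (CSF is bright on T2).  The threshold is raised until at least one
    voxel inside the CSF region drops out, then lowered until every CSF
    voxel is labeled again; the resulting value is returned.
    """
    csf_values = image[csf_mask]           # intensities of voxels known to be CSF
    threshold = int(csf_values.min())      # start where every CSF voxel is labeled

    # Raise the threshold until some CSF voxel becomes unlabeled.
    while np.all(csf_values >= threshold):
        threshold += step

    # Lower it again until all CSF voxels are labeled once more.
    while np.any(csf_values < threshold):
        threshold -= step

    return threshold
```

In this form the selected value effectively converges to the dimmest intensity found inside the CSF block, which is what makes the procedure repeatable across observers.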

    Partial Volume Correction in Quantitative Amyloid Imaging.

    Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various partial volume correction (PVC) techniques have been proposed, but there is no consensus as to whether they are necessary in amyloid imaging and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate when applied to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases sensitivity to inaccuracies in image registration and segmentation. However, our results indicate that appropriate PVC may enhance our ability to detect changes in amyloid deposition.
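For readers unfamiliar with the two-component approach mentioned above, the classic formulation divides the measured PET signal by the binary brain-tissue mask smoothed with the scanner's point-spread function, compensating for spill-out of activity into CSF. A rough sketch under that assumption (Python/SciPy; function and parameter names are illustrative; the regional spread function method, which additionally models spill-over between anatomical regions, is not shown):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_component_pvc(pet, brain_mask, fwhm_mm, voxel_mm=(2.0, 2.0, 2.0)):
    """Illustrative two-compartment (Meltzer-style) partial volume correction.

    The binary brain-tissue mask is smoothed with a Gaussian approximating
    the scanner point-spread function; dividing the measured PET signal by
    this smoothed mask compensates for spill-out of activity into CSF.
    """
    sigma_vox = [fwhm_mm / 2.3548 / v for v in voxel_mm]   # PSF FWHM -> per-axis sigma
    spread = gaussian_filter(brain_mask.astype(float), sigma_vox)

    corrected = np.zeros_like(pet, dtype=float)
    valid = spread > 0.1                    # avoid amplifying noise far outside the brain
    corrected[valid] = pet[valid] / spread[valid]
    return corrected
```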

    Review of Segmentation Methods for Brain Tissue with Magnetic Resonance Images


    Segmentation of Brain MRI


    A Graph theoretic approach to quantifying grey matter volume in neuroimaging

    Brain atrophy occurs as a symptom of many diseases. The software package Statistical Parametric Mapping (SPM) is one of the most respected and commonly used tools in the neuroimaging community for quantifying the amount of grey matter (GM) in the brain from magnetic resonance (MR) images. One aspect of quantifying GM volume is identifying, or segmenting, the regions of the brain image corresponding to grey matter. A recent trend in image segmentation is to model an image as a graph composed of vertices and edges, and then to cut the graph into subgraphs corresponding to different segments. In this thesis, we incorporate graph-cut-based image segmentation algorithms into a GM volume estimation system and compare the resulting GM volume estimates with those obtained via SPM. To aid in this comparison, we use 20 T1-weighted normal brain MR images simulated using BrainWeb [1, 2]. Our results show that the graph-cuts technique approximated the GM volumes more accurately, halving the error produced by SPM preprocessing.
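As a toy illustration of the graph-cut idea described in the abstract, the sketch below builds an s-t graph for one 2-D slice with data terms tied to grey matter and background intensity means plus a uniform smoothness term, and counts the pixels that land on the grey matter side of the minimum cut (Python/networkx; the two-class formulation, the cost functions, and all names are assumptions made for illustration; the thesis's actual algorithms and its comparison against SPM are more elaborate):

```python
import numpy as np
import networkx as nx

def graphcut_gm_volume(slice_img, gm_mean, bg_mean, voxel_vol_mm3, smoothness=2.0):
    """Toy graph-cut segmentation of grey matter on one 2-D slice.

    Each pixel is a vertex linked to a GM terminal and a background terminal
    by data costs (distance to the class means); neighbouring pixels are
    linked by a smoothness cost.  The s-t minimum cut yields the GM label
    map, and the GM volume is the label count times the voxel volume.
    """
    h, w = slice_img.shape
    G = nx.DiGraph()
    src, snk = "GM", "BG"

    for y in range(h):
        for x in range(w):
            v = (y, x)
            i = float(slice_img[y, x])
            # t-links: the capacity is the cost paid if the pixel takes that label.
            G.add_edge(src, v, capacity=abs(i - bg_mean))   # cut => pixel labelled BG
            G.add_edge(v, snk, capacity=abs(i - gm_mean))   # cut => pixel labelled GM
            # n-links: uniform smoothness between 4-neighbours.
            for ny, nx_ in ((y + 1, x), (y, x + 1)):
                if ny < h and nx_ < w:
                    G.add_edge(v, (ny, nx_), capacity=smoothness)
                    G.add_edge((ny, nx_), v, capacity=smoothness)

    _, (gm_side, _) = nx.minimum_cut(G, src, snk)
    gm_pixels = sum(1 for v in gm_side if v != src)
    return gm_pixels * voxel_vol_mm3

# Example on a tiny synthetic slice: a bright square of "GM" on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 100.0
print(graphcut_gm_volume(img, gm_mean=100.0, bg_mean=0.0, voxel_vol_mm3=1.0))
```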

    Predicting Alzheimer's disease by segmenting and classifying 3D-brain MRI images using clustering technique and SVM classifiers.

    Alzheimer's disease (AD) is the most common form of dementia, affecting seniors aged 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools for creating a learning database in a first step and then predicting the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease. Advanced medical imaging such as volumetric MRI can detect changes in the size of brain regions due to the loss of brain tissue. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists detect and stage the disease. In this thesis, we aim to diagnose Alzheimer's disease from MRI images. We segment brain MRI images to extract the brain chambers, then extract features from the segmented area, and finally train a classifier to differentiate between normal and AD brain tissues. We describe an automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs 2-dimensional (volume slices) and volumetric segmentation to segment gray matter, white matter and cerebrospinal fluid (CSF), generates a feature vector that characterizes this region, creates a database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we used MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We assessed the performance of the classifiers using results from the clinical tests.
    Master of Science (M.Sc.) in Computational Science
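The pipeline outlined in the abstract (intensity clustering for tissue segmentation, a tissue-derived feature vector, and an SVM classifier) might look roughly like the following (Python/scikit-learn; the three-class k-means stand-in, the tissue-fraction features, and the synthetic training data are illustrative assumptions, not the thesis's exact scheme):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def tissue_fractions(slice_img, brain_mask):
    """Cluster brain voxels of one slice into three intensity classes
    (a stand-in for CSF / grey matter / white matter on T1-weighted MRI)
    and return the relative volume of each class as a feature vector."""
    voxels = slice_img[brain_mask].reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
    order = np.argsort([voxels[labels == k].mean() for k in range(3)])  # sort by intensity
    return np.array([(labels == k).mean() for k in order])              # [CSF, GM, WM]

# Hypothetical feature matrix (one row per subject) and diagnostic labels
# (0 = normal, 1 = AD); in the thesis these come from ADNI scans, not random numbers.
rng = np.random.default_rng(0)
X_train = rng.random((40, 3))
y_train = np.array([0, 1] * 20)
clf = SVC(kernel="rbf").fit(X_train, y_train)       # train the SVM classifier

# Classify one synthetic "slice" end to end.
slice_img = rng.normal(100, 30, (64, 64))
features = tissue_fractions(slice_img, slice_img > 60)
print(clf.predict(features.reshape(1, -1)))
```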

    PET/MR imaging of hypoxic atherosclerotic plaque using 64Cu-ATSM

    Doctoral dissertation by Xingyu Nie, Biomedical Engineering, Washington University in St. Louis, 2017 (Chair: Professor Pamela K. Woodard; Co-Chair: Professor Suzanne Lapi).
    It is important to accurately identify the factors involved in the progression of atherosclerosis because advanced atherosclerotic lesions are prone to rupture, leading to disability or death. Hypoxic areas are known to be present in human atherosclerotic lesions, and lesion progression is associated with the formation of lipid-loaded macrophages and increased local inflammation, which are potential major factors in the formation of vulnerable plaque. This dissertation represents a comprehensive investigation of the non-invasive identification of hypoxic atherosclerotic plaque in animal models and human subjects using the PET hypoxia imaging agent 64Cu-ATSM. We first demonstrated the feasibility of 64Cu-ATSM for the identification of hypoxic atherosclerotic plaque and evaluated the relative effects of diet and genetics on hypoxia progression in atherosclerotic plaque in a genetically altered mouse model. We then fully validated the feasibility of using 64Cu-ATSM to image the extent of hypoxia in a rabbit model with atherosclerosis-like plaque using a simultaneous PET/MR system. We also conducted a pilot clinical trial to determine whether 64Cu-ATSM PET/MR scanning is capable of detecting hypoxic carotid atherosclerosis in human subjects. To improve the 64Cu-ATSM PET image quality, we investigated the Siemens HD (high-definition) PET software and four partial volume correction methods to correct for partial volume effects. In addition, we incorporated the attenuation effect of the carotid surface coil into the MR attenuation correction µ-map to correct for photon attenuation. In the long term, this imaging strategy has the potential to help identify patients at risk for cardiovascular events, guide therapy, and add to the understanding of plaque biology in human patients.
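The µ-map step mentioned above can be pictured as folding the coil's linear attenuation coefficients, once registered to scanner coordinates, voxelwise into the patient's MR-derived µ-map before PET reconstruction. A hedged sketch of that idea (Python; all names are hypothetical, and the rigid registration of the coil template is assumed to have been done already):

```python
import numpy as np

def add_coil_to_mumap(patient_mu, coil_mu_template, coil_mask):
    """Illustrative sketch: include a rigid surface coil in the attenuation map.

    `coil_mu_template` holds the coil's linear attenuation coefficients at
    511 keV, already registered to the scanner coordinate system; where the
    coil sits (outside the body), its coefficients simply add voxelwise to
    the patient mu-map used for attenuation correction.
    """
    mu_total = patient_mu.copy()
    mu_total[coil_mask] += coil_mu_template[coil_mask]
    return mu_total
```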

    Model-based Reconstruction of Myocardial Perfusion SPECT and PET Images

    Myocardial perfusion imaging is an important noninvasive tool in the diagnosis and prognosis of coronary artery disease, the leading cause of death in the United States. Electrocardiographically (ECG) gated acquisition allows combined evaluation of perfusion and left ventricular function within a single study. However, the accuracy of perfusion quantification and functional analysis is reduced by a number of image-degrading factors. In particular, partial volume effects (PVE) resulting from finite spatial resolution cause activity spillover between tissue classes and blur region boundaries. High-resolution anatomical images, such as contrast CT or MRI, can be used for partial volume compensation (PVC), but they are generally not available in clinical practice. The objective of this research is to develop and evaluate a model-based reconstruction method for emission computed tomography, applied to myocardial perfusion imaging, that improves perfusion quantification and functional assessment without requiring anatomical images.

    The idea is to model the left ventricle (LV) using a geometry model and an activity distribution model instead of modeling it with voxels. The geometry model parameterizes the endocardial and epicardial surfaces using a set of rays originating from the long axis of the LV. The rays sample the surfaces cylindrically in the basal and mid-ventricular regions and spherically at the apex. The surfaces are obtained by interpolating the corresponding intersection points with the rays using a cubic-spline function. The activity distribution model divides the myocardium into segments similar to those used in standardized myocardial quantitative analysis and assumes uniform activity concentrations in the segments as well as in the blood pool and body background. The method estimates the parameters of the geometry and activity models instead of the intensities of all voxels, which greatly reduces the number of unknowns to be estimated.

    The goal of the model-based reconstruction method was to estimate the parameters that give the best match between the image generated by the model and the measured data. Because the input image is contaminated by noise, the metric for goodness of fit was a statistical criterion based on the likelihood, i.e., the probability that the image resulted from the given set of model parameters. The image generated by the model includes the effects of resolution and other image-degrading factors. A shape constraint was also incorporated into the objective function to regularize the ill-posed reconstruction problem and increase robustness to perfusion defects and noise. The model parameters were optimized by seeking the maximum of the objective function using a group-wise alternating scheme along with dedicated initial parameter estimation.

    The hypothesis underlying this work is that the resulting geometry parameters produce an accurate segmentation of the LV while the activity parameters are PVE-compensated representations of the true activities in the segments. The proposed method integrates prior knowledge about the target object and the imaging system into one framework and allows simultaneous LV segmentation and PVC. In an evaluation with simulated myocardial perfusion SPECT images, it improved accuracy and precision in delineating the myocardium in comparison with typical segmentation methods. In addition, it recovered the myocardial activity more effectively than deconvolution-based PVC, which also does not require coregistered anatomical images to define regions of interest.
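To make the ray-based geometry model more concrete, the sketch below interpolates a handful of radial samples on one short-axis ring into a smooth closed endocardial contour with a periodic cubic spline, mirroring the cylindrical sampling described for the basal and mid-ventricular regions (Python/SciPy; the function, the number of rays, and the example radii are illustrative assumptions, not the dissertation's implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def ring_surface(control_radii, n_points=360):
    """Sketch of the ray-based surface model for one short-axis ring.

    `control_radii` are the estimated distances from the LV long axis to the
    surface along a few equally spaced rays; a periodic cubic spline
    interpolates them into a smooth closed contour.
    """
    n_rays = len(control_radii)
    ray_angles = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    # Periodic spline: repeat the first radius at 2*pi so the contour closes.
    spline = CubicSpline(np.append(ray_angles, 2 * np.pi),
                         np.append(control_radii, control_radii[0]),
                         bc_type="periodic")
    theta = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    r = spline(theta)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))  # (x, y) points

# Example: an endocardial ring sampled by 8 rays, interpolated to 360 contour points.
contour = ring_surface(np.array([22.0, 23.0, 24.0, 23.5, 22.5, 21.5, 21.0, 21.5]))
print(contour.shape)
```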