
    Deep grey matter volumetry as a function of age using a semi-automatic qMRI algorithm

    Quantitative Magnetic Resonance (qMRI) has become increasingly accepted for clinical trials in many fields. The technique not only generates qMRI maps (such as T1/T2/PD) but also supports further postprocessing, including segmentation of the brain and characterization of different brain tissues. Another main application of qMRI is measuring the volume of brain tissue such as the deep grey matter (dGM). The deep grey matter serves as the brain's "relay station," receiving and sending inputs between cortical brain regions. An abnormal dGM volume is associated with certain diseases such as Fetal Alcohol Spectrum Disorders (FASD). The goal of this study is to investigate the effect of age on dGM volume using qMRI. Thirteen patients (mean age = 26.7 years, age range 0.5 to 72.5 years) underwent imaging on a 1.5T MR scanner. Axial images of the entire brain were acquired with the mixed Turbo Spin-Echo (mixed-TSE) pulse sequence. The acquired mixed-TSE images were transferred in DICOM format for further analysis using the MathCAD 2001i software (Mathsoft, Cambridge, MA). Quantitative T1- and T2-weighted MR images were generated, and the image data sets were segmented using dual-space clustering segmentation. The dGM volume was then calculated with a pixel-counting algorithm, and the spectrum of the T1/T2/PD distribution was also generated. The dGM volume of each patient was plotted on a scatter plot, and the mean dGM volume, standard deviation, and range were computed. The results show a dGM volume of 47.5 ± 5.3 ml (N = 13), consistent with earlier studies. A polynomial trend line fitted to the scatter plot shows that dGM volume gradually increases with age in childhood, reaches its maximum around the age of 20, then decreases gradually through adulthood and drops much faster in old age. This result may help scientists better understand the aging of the brain, and it can be compared with results from earlier studies using different techniques.
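
    The original analysis was done in MathCAD and is not reproduced in the abstract. Below is a minimal Python sketch, with entirely hypothetical numbers, of the two quantitative steps described: voxel counting to obtain a volume in ml, and a polynomial trend line of dGM volume versus age.

```python
import numpy as np

def volume_ml(dgm_mask, voxel_dims_mm):
    """Pixel/voxel-counting volume: number of segmented dGM voxels
    times the volume of one voxel, converted from mm^3 to ml."""
    return dgm_mask.sum() * float(np.prod(voxel_dims_mm)) / 1000.0

# Hypothetical per-subject results (ages in years, dGM volumes in ml);
# the study's real values are not published in the abstract.
ages = np.array([0.5, 3, 9, 15, 21, 27, 35, 44, 52, 58, 64, 70, 72.5])
vols = np.array([38, 44, 49, 52, 53, 52.5, 51.5, 50, 48, 46, 43, 40, 38.5])

print(f"mean {vols.mean():.1f} ml, sd {vols.std(ddof=1):.1f} ml")

# Quadratic trend line: rises in childhood, peaks in young adulthood,
# then declines -- the shape the abstract reports.
trend = np.poly1d(np.polyfit(ages, vols, deg=2))
peak_age = trend.deriv().roots[0]
print(f"fitted peak at ~{peak_age:.0f} years, volume {trend(peak_age):.1f} ml")
```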

    Template-Cut: A Pattern-Based Segmentation Paradigm

    We present a scale-invariant, template-based segmentation paradigm that sets up a graph and performs a graph cut to separate an object from the background. Typically, graph-based schemes distribute the nodes of the graph uniformly and equidistantly on the image and use a regularizer to bias the cut towards a particular shape. The strategy of uniform and equidistant nodes does not allow the cut to prefer more complex structures, especially when areas of the object are indistinguishable from the background. We propose a solution by introducing the concept of a "template shape" of the target object, in which the nodes are sampled non-uniformly and non-equidistantly on the image. We evaluate the method on 2D images where the object's texture and background are similar and large areas of the object have the same gray-level appearance as the background. We also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning purposes. Comment: 8 pages, 6 figures, 3 tables, 6 equations, 51 references
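
    The paper's contribution is the non-uniform, template-shaped node sampling, which is not reproduced here. For orientation, the following is a minimal sketch of the conventional uniform-grid s-t graph cut that the abstract contrasts against, using networkx for the min-cut; the seed intensities and the exponential edge weight are illustrative choices, not the paper's.

```python
import networkx as nx
import numpy as np

def graph_cut_segment(image, fg_seed, bg_seed, lam=2.0):
    """Baseline s-t graph-cut segmentation: one node per pixel on a
    uniform grid (the setup Template-Cut improves on), terminal edges
    from intensity data terms, 4-neighbor smoothness edges."""
    h, w = image.shape
    G = nx.DiGraph()
    for y in range(h):
        for x in range(w):
            p = float(image[y, x])
            # Terminal (data) edges: capacity s->p is paid if p ends up
            # background, p->t if it ends up foreground, so each is the
            # dissimilarity to the opposing seed intensity.
            G.add_edge('s', (y, x), capacity=abs(p - bg_seed))
            G.add_edge((y, x), 't', capacity=abs(p - fg_seed))
            # Smoothness edges: cutting between similar neighbors is
            # expensive, so the boundary prefers intensity edges.
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    wgt = lam * float(np.exp(-abs(p - float(image[yy, xx]))))
                    G.add_edge((y, x), (yy, xx), capacity=wgt)
                    G.add_edge((yy, xx), (y, x), capacity=wgt)
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    mask = np.zeros((h, w), dtype=bool)
    for node in source_side - {'s'}:
        mask[node] = True
    return mask

# Tiny synthetic test: a bright square on a dark background.
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
print(graph_cut_segment(img, fg_seed=1.0, bg_seed=0.0).sum())  # ~64 pixels
```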

    Focal Spot, Spring/Summer 1985


    Partial-volume Bayesian classification of material mixtures in MR volume data using voxel histograms

    The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials, and it is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurate, and it classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the samples and then looking at the distribution of values that ρ(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created it. The size of the regions that the authors classify is chosen to match the spacing of the samples, because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.
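
    The paper's full Bayesian fit over histogram basis functions is more involved than an abstract can convey. The sketch below is a drastically simplified Python stand-in, assuming two known pure-material intensities and Gaussian noise, that matches a voxel's observed value histogram against a family of predicted mixtures; the grid search replaces the paper's maximum-likelihood fit, within-voxel blur between materials is ignored, and all names and numbers are hypothetical.

```python
import numpy as np

def gaussian(v, mu, sigma):
    """Gaussian pdf: the assumed noise model for a pure material."""
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def classify_voxel(samples, mu_a, mu_b, sigma, bins):
    """Estimate the fraction of material A inside one voxel by matching
    the histogram of reconstructed values sampled within the voxel's
    region against predicted two-material mixture histograms."""
    observed, _ = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (bins[:-1] + bins[1:])
    best_frac, best_err = 0.0, np.inf
    for frac in np.linspace(0.0, 1.0, 101):
        predicted = (frac * gaussian(centers, mu_a, sigma)
                     + (1.0 - frac) * gaussian(centers, mu_b, sigma))
        err = np.sum((predicted - observed) ** 2)
        if err < best_err:
            best_frac, best_err = frac, err
    return best_frac

# Hypothetical voxel straddling a muscle/fat boundary: ~70% of its
# samples near intensity 100 (material A), ~30% near 180 (material B).
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(100, 8, 700), rng.normal(180, 8, 300)])
print(classify_voxel(samples, mu_a=100, mu_b=180, sigma=8,
                     bins=np.linspace(60, 220, 41)))  # ~0.7
```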