
    Partial-volume Bayesian classification of material mixtures in MR volume data using voxel histograms

    The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurate, and it classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the samples and then looking at the distribution of values that ρ(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of the regions that the authors classify is chosen to match the spacing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.
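
    A minimal sketch of the voxel-histogram idea described above, assuming the pure-material histograms are already known. The helper names, the cubic-spline reconstruction of ρ(x), and the non-negative least-squares fit are illustrative stand-ins for the authors' Bayesian estimator, not their implementation.

    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.optimize import nnls

    def voxel_histogram(volume, center, width=1.0, n=5, bins=32, vrange=(0.0, 1.0)):
        # Sample a reconstructed continuous function rho(x) (here a cubic-spline
        # interpolation of the samples) on an n x n x n grid covering the
        # voxel-sized region around `center`, and histogram the values.
        offs = np.linspace(-width / 2.0, width / 2.0, n)
        zz, yy, xx = np.meshgrid(offs, offs, offs, indexing="ij")
        coords = np.stack([zz.ravel() + center[0],
                           yy.ravel() + center[1],
                           xx.ravel() + center[2]])
        values = map_coordinates(volume, coords, order=3, mode="nearest")
        hist, _ = np.histogram(values, bins=bins, range=vrange, density=True)
        return hist

    def mixture_proportions(hist, material_hists):
        # Find non-negative material weights whose combined histogram best matches
        # the voxel histogram: a least-squares stand-in for "the mixture most
        # likely to have created the histogram".
        weights, _ = nnls(np.column_stack(material_hists), hist)
        return weights / max(weights.sum(), 1e-12)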

    Classification of Material Mixtures in Volume Data for Visualization and Modeling

    Material classification is a key step in creating computer graphics models and images from volume data. We present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). The algorithm assumes that voxels can contain more than one material, e.g., both muscle and fat; we wish to compute the relative proportion of each material in the voxels. Other classification methods have utilized Gaussian probability density functions to model the distribution of values within a dataset. These Gaussian basis functions work well for voxels with unmixed materials, but do not work well where the materials are mixed together. We extend this approach by deriving non-Gaussian "mixture" basis functions. We treat a voxel as a volume, not as a single point. We use the distribution of values within each voxel-sized volume to identify materials within the voxel using a probabilistic approach. The technique reduces the classification artifacts that occur along boundaries between materials. The technique is useful for making higher quality geometric models and renderings from volume data, and has the potential to make more accurate volume measurements. It also classifies noisy, low-resolution data well.
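
    A sketch of how such a non-Gaussian "mixture" basis function can be built under a simple assumption (not necessarily the paper's exact derivation): a boundary voxel holds a fraction t of one material and 1 - t of the other, t is treated as uniform on [0, 1], and measurements carry Gaussian noise. The material means and noise level below are hypothetical.

    import numpy as np

    def gaussian_basis(v, mean, sigma):
        # Histogram basis for a pure material: a single Gaussian.
        return np.exp(-0.5 * ((v - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def mixture_basis(v, mean1, mean2, sigma, n_steps=200):
        # Non-Gaussian basis for a two-material boundary:
        # f(v) = integral over t in [0, 1] of N(v; t*mean1 + (1-t)*mean2, sigma).
        t = np.linspace(0.0, 1.0, n_steps)[:, None]
        blend = t * mean1 + (1.0 - t) * mean2
        return np.trapz(gaussian_basis(v[None, :], blend, sigma), t.ravel(), axis=0)

    v = np.linspace(0.0, 1.0, 256)
    pure_fat = gaussian_basis(v, 0.8, 0.05)              # hypothetical material stats
    pure_muscle = gaussian_basis(v, 0.3, 0.05)
    fat_muscle_boundary = mixture_basis(v, 0.8, 0.3, 0.05)   # broad, plateau-like shape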

    Pure phase-encoded MRI and classification of solids

    Here, the authors combine a pure phase-encoded magnetic resonance imaging (MRI) method with a new tissue-classification technique to make geometric models of a human tooth. They demonstrate the feasibility of three-dimensional imaging of solids using a conventional 11.7-T NMR spectrometer. In solid-state imaging, confounding line-broadening effects are typically eliminated using coherent averaging methods. Instead, the authors circumvent them by detecting the proton signal at a fixed phase-encode time following the radio-frequency excitation. By a judicious choice of the phase-encode time in the MRI protocol, the authors differentiate enamel and dentine sufficiently to successfully apply a new classification algorithm. This tissue-classification algorithm identifies the distribution of different material types, such as enamel and dentine, in volumetric data. In this algorithm, the authors treat a voxel as a volume, not as a single point, and assume that each voxel may contain more than one material. They use the distribution of MR image intensities within each voxel-sized volume to estimate the relative proportion of each material using a probabilistic approach. This combined approach, involving MRI and data classification, is directly applicable to bone imaging and hard-tissue contrast-based modeling of biological solids.
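
    A toy illustration of the "judicious choice of the phase-encode time": if each tissue's signal is idealized as decaying mono-exponentially with its own effective T2*, the encode time can be chosen to maximize the enamel-dentine contrast. The proton densities and T2* values below are made up for illustration and are not taken from the paper.

    import numpy as np

    def signal(t, rho, t2_star):
        # Idealized mono-exponential decay of the detected signal at
        # phase-encode time t (a crude stand-in for the solid-state FID).
        return rho * np.exp(-t / t2_star)

    # Hypothetical tissue parameters: (relative proton density, effective T2* in microseconds).
    enamel = dict(rho=0.3, t2_star=70.0)
    dentine = dict(rho=1.0, t2_star=250.0)

    t = np.linspace(5.0, 500.0, 1000)                       # candidate encode times (us)
    contrast = np.abs(signal(t, **enamel) - signal(t, **dentine))
    best_t = t[np.argmax(contrast)]
    print(f"encode time maximizing enamel/dentine contrast: {best_t:.0f} us")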

    Probabilistic partial volume modelling of biomedical tomographic image data

    EThOS - Electronic Theses Online Service, United Kingdom.

    Facilitating the design of multidimensional and local transfer functions for volume visualization

    The importance of volume visualization is increasing since the sizes of the datasets that need to be inspected grow with every new version of medical scanners (e.g., CT and MR). Direct volume rendering is a 3D visualization technique that has, in many cases, clear benefits over 2D views. It is able to show 3D information, facilitating mental reconstruction of the 3D shape of objects and their spatial relation. The complexity of the settings required to generate a 3D rendering is, however, one of the main reasons this technique is not used more widely in practice. Transfer functions play an important role in the appearance of volume-rendered images by determining the optical properties of each piece of the data. The transfer function determines what will be seen and how. The goal of the project on which this PhD thesis reports was to develop and investigate new approaches that facilitate the setting of transfer functions.

    As shown in the state-of-the-art overview in Chapter 2, there are two main aspects that influence the effectiveness of a TF: the choice of the TF domain and the process of defining the shape of the TF. The choice of a TF domain, i.e., the choice of the data properties used, directly determines which aspects of the volume data can be visualized. In many approaches, special attention is given to TF domains that enable an easier selection and visualization of boundaries between materials. The boundaries are an important aspect of the volume data since they reveal the shapes and sizes of objects. Our research into improving the TF definition focused on introducing new user-interaction methods and automation techniques that shield the user from the complex process of manually defining the shape and color properties of TFs. Our research dealt with both the TF domain and the TF definition since they are closely related. A suitable TF domain can not only greatly improve the manual definition but also, more importantly, increase the possibilities of using automated techniques.

    Chapter 3 presents a new TF domain. We have used the LH space and the associated LH histogram for TFs based on material boundaries. We showed that the LH space reduces the ambiguity when selecting boundaries compared to the commonly used space of the data value and gradient magnitude. Furthermore, boundaries appear as blobs in the LH histogram, which makes them easier to select. Its compactness and the easier selection of boundaries make the LH histogram suitable for the introduction of clustering-based automation. The mirrored extension of the LH space differentiates between the two sides of a boundary. The mirrored LH histogram shows interesting properties of this space, allowing all boundaries belonging to one material to be selected in an easy way. We have also shown that segmentation techniques, such as region-growing methods, can benefit from the properties of the LH space. Standard cost functions based on the data value and/or the gradient magnitude may experience problems at boundaries due to the partial volume effect. A cost function based on the LH space, however, is capable of handling the region growing of boundaries better.

    Chapter 4 presents an interaction framework for the TF definition based on hierarchical clustering of material boundaries. Our framework aims at an easy combination of various similarity measures that reflect requirements of the user. One of the main benefits of the framework is the absence of similarity-weighting coefficients, which are usually hard to define. Further, the framework enables the user to visualize objects that may exist at different levels of the hierarchy. We also introduced two similarity measures that illustrate the functionality of the framework. The main contribution is the first similarity measure, which takes advantage of properties of the LH histogram from Chapter 3. We assumed that the shapes of the peaks in the LH histogram can guide the grouping of clusters. The second similarity measure is based on the spatial relationships of clusters.

    Chapter 5 presents the part of our research that focused on one of the main issues encountered with TFs in general. Standard TFs, as they are applied everywhere in the volume in the same way, become difficult to use when the data properties (measurements) of the same material vary over the volume, for example due to acquisition inaccuracies. We address this problem by introducing the concept and framework of local transfer functions (LTFs). Local transfer functions are based on using locally applicable TFs in cases where it might be difficult or impossible to define a globally applicable TF. We discussed a number of reasons that hamper a global TF and illustrated how LTFs may help to alleviate these problems. We have also discussed how multiple TFs can be combined and automatically adapted. One of our contributions is the use of the similarity of local histograms and their correlation for the combination and adaptation of LTFs.
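
    A rough sketch of how an LH value can be computed for a single voxel, assuming a smoothed volume and simple gradient tracking: follow the gradient downhill to a low value L and uphill to a high value H, so that (L, H) indexes the LH histogram in which boundaries appear as compact blobs. The smoothing, step size, and stopping threshold are illustrative and this is not the thesis implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def lh_value(volume, start, step=0.5, grad_thresh=1e-3, max_steps=100):
        # Return (L, H) for the voxel at `start` by tracking the gradient field
        # downhill and uphill until the gradient magnitude falls below a threshold.
        smooth = gaussian_filter(np.asarray(volume, dtype=float), sigma=1.0)
        gz, gy, gx = np.gradient(smooth)

        def trace(sign):
            pos = np.asarray(start, dtype=float)
            for _ in range(max_steps):
                g = np.array([map_coordinates(c, pos[:, None], order=1)[0]
                              for c in (gz, gy, gx)])
                norm = np.linalg.norm(g)
                if norm < grad_thresh:                  # reached a nearly constant region
                    break
                pos = pos + sign * step * g / norm
                pos = np.clip(pos, 0, np.array(volume.shape) - 1)
            return map_coordinates(smooth, pos[:, None], order=1)[0]

        return trace(-1.0), trace(+1.0)                 # L (downhill), H (uphill)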

    Cortical thickness measurement from magnetic resonance images using partial volume estimation

    Measurement of the cortical thickness from 3D Magnetic Resonance Imaging (MRI) can aid diagnosis and longitudinal studies of a wide range of neurodegenerative diseases. We estimate the cortical thickness using a Laplacian approach whereby equipotentials analogous to layers of tissue are computed. The thickness is then obtained using an Eulerian approach in which partial differential equations (PDEs) are solved, avoiding the explicit tracing of trajectories along the gradient streamlines. This method has the advantage of being relatively fast and ensures a unique correspondence between points on the inner and outer boundaries of the cortex. The original method is challenged when the thickness of the cortex is of the same order of magnitude as the image resolution, since the partial volume (PV) effect is not taken into account at the gray matter (GM) boundaries. We propose a novel way to take PV into account, which substantially improves accuracy and robustness. We model PV by computing a mixture of pure Gaussian probability distributions and use this estimate to initialize the cortical thickness estimation. In experiments on synthetic phantoms, the errors were divided by three, while reproducibility was improved when the same patient was scanned three consecutive times.
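
    A condensed sketch of the Laplacian step only (the Eulerian thickness PDEs and the partial-volume initialization are omitted): the grey-matter potential is made harmonic between a fixed value of 0 on white matter and 1 on CSF. The Jacobi iteration and mask names are illustrative, not the paper's code.

    import numpy as np

    def gm_potential(gm_mask, wm_mask, csf_mask, n_iter=500):
        # Jacobi iteration for Laplace's equation: potential 0 on the WM boundary,
        # 1 on the CSF boundary, harmonic inside the grey-matter mask.
        phi = np.zeros(gm_mask.shape, dtype=float)
        phi[csf_mask] = 1.0
        for _ in range(n_iter):
            avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
                   np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
            phi = np.where(gm_mask, avg, phi)    # update only interior GM voxels
            phi[wm_mask] = 0.0                   # re-impose the boundary conditions
            phi[csf_mask] = 1.0
        return phi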

    Partial Volume Segmentation of Brain MRI Scans of any Resolution and Contrast

    Partial voluming (PV) is arguably the last crucial unsolved problem in Bayesian segmentation of brain MRI with probabilistic atlases. PV occurs when voxels contain multiple tissue classes, giving rise to image intensities that may not be representative of any one of the underlying classes. PV is particularly problematic for segmentation when there is a large resolution gap between the atlas and the test scan, e.g., when segmenting clinical scans with thick slices, or when using a high-resolution atlas. In this work, we present PV-SynthSeg, a convolutional neural network (CNN) that tackles this problem by directly learning a mapping between (possibly multi-modal) low resolution (LR) scans and underlying high resolution (HR) segmentations. PV-SynthSeg simulates LR images from HR label maps with a generative model of PV, and can be trained to segment scans of any desired target contrast and resolution, even for previously unseen modalities where neither images nor segmentations are available at training. PV-SynthSeg does not require any preprocessing, and runs in seconds. We demonstrate the accuracy and flexibility of the method with extensive experiments on three datasets and 2,680 scans. The code is available at https://github.com/BBillot/SynthSeg. (Accepted for MICCAI 2020.)
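
    A toy sketch of the central training-data idea (not the released PV-SynthSeg code): synthesize a high-resolution image from an HR label map with per-label Gaussian intensities, then simulate a thick-slice low-resolution acquisition whose voxels mix tissues; a network can then be trained to map such LR images back to the HR labels. The label statistics and slice factor below are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_lr_scan(hr_labels, means, stds, factor=4, slice_axis=0):
        # hr_labels: integer label map; means/stds: per-label intensity statistics
        # (arrays indexed by label).  Returns an LR image whose voxels carry
        # partial-volume intensities at tissue interfaces.
        hr_image = np.random.normal(means[hr_labels], stds[hr_labels])
        sigma = [0.0, 0.0, 0.0]
        sigma[slice_axis] = factor / 2.355               # FWHM roughly one thick slice
        blurred = gaussian_filter(hr_image, sigma=sigma)
        idx = np.arange(0, hr_labels.shape[slice_axis], factor)
        return np.take(blurred, idx, axis=slice_axis)    # decimate along the slice axis

    labels = np.random.randint(0, 3, size=(32, 64, 64))  # placeholder HR label map
    lr = simulate_lr_scan(labels, means=np.array([0.0, 0.5, 1.0]),
                          stds=np.array([0.03, 0.03, 0.03]))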

    A new anisotropic diffusion method, application to partial volume effect reduction

    Get PDF
    The partial volume effect is a significant limitation in medical imaging that results in blurring when the boundary between two structures of interest falls in the middle of a voxel. A new anisotropic diffusion method allows one to create interpolated 3D images corrected for partial volume, without enhancement of noise. After a zero-order interpolation, we apply a modified version of the anisotropic diffusion approach, wherein the diffusion coefficient becomes negative for high gradient values. As a result, the new scheme restores edges between regions that have been blurred by partial voluming, but it acts as normal anisotropic diffusion in flat regions, where it reduces noise. We add constraints to stabilize the method and to model partial volume: the sum of neighboring voxels must equal the signal in the original low-resolution voxel, and the signal in a voxel is kept within its neighbors' limits. The method performed well on a variety of synthetic images and MRI scans. No noticeable artifact was induced by interpolation with partial volume correction, and noise was much reduced in homogeneous regions. We validated the method using the BrainWeb project database: the partial volume effect was simulated, and the restored brain volumes were compared to the original ones. Errors due to the partial volume effect were reduced by 28% and 35% for the 5% and 0% noise cases, respectively. The method was applied to in vivo "thick" MRI carotid artery images for atherosclerosis detection. There was a marked improvement in the delineation of the lumen of the carotid artery.
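
    A schematic sketch of the two ingredients, with illustrative coefficients that are not the authors': a Perona-Malik-style update whose conduction coefficient becomes negative at strong edges (sharpening partial-volume-blurred boundaries while smoothing flat regions), and a constraint step that pulls each block of interpolated voxels back so its mean matches the original low-resolution value (a convenient equivalent, up to a scale factor, of the sum constraint; the second constraint, clamping each voxel to its neighbors' limits, is omitted here).

    import numpy as np

    def modified_diffusion_step(img, kappa=0.1, dt=0.05, neg=0.5):
        # One explicit diffusion step: the coefficient is ~1 in flat regions
        # (ordinary smoothing) and goes negative where the neighbour difference
        # exceeds kappa, which sharpens edges instead of blurring them further.
        f = np.asarray(img, dtype=float)
        out = f.copy()
        for axis in range(f.ndim):
            for shift in (+1, -1):
                d = np.roll(f, shift, axis=axis) - f
                c = np.clip(1.0 - (d / kappa) ** 2, -neg, 1.0)
                out += dt * c * d
        return out

    def enforce_block_means(img, lr, factor=2):
        # Constraint: each factor^3 block of interpolated voxels keeps the mean
        # signal of the original low-resolution voxel it came from.
        for k in range(lr.shape[0]):
            for j in range(lr.shape[1]):
                for i in range(lr.shape[2]):
                    blk = (slice(k * factor, (k + 1) * factor),
                           slice(j * factor, (j + 1) * factor),
                           slice(i * factor, (i + 1) * factor))
                    img[blk] += lr[k, j, i] - img[blk].mean()
        return img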

    A Physical Model for Microstructural Characterization and Segmentation of 3D Tomography Data

    We present a novel method for characterizing the microstructure of a material from volumetric datasets such as 3D image data from computed tomography (CT). The method is based on a new statistical model for the distribution of voxel intensities and gradient magnitudes, incorporating prior knowledge about the physical nature of the imaging process. It allows for direct quantification of parameters of the imaged sample, such as volume fractions, interface areas, and material density, and of parameters related to the imaging process, such as image resolution and noise levels. Existing methods for characterization from 3D images often require segmentation of the data, a procedure where each voxel is labeled according to the best guess of which material it represents. Through our approach, the segmentation step is circumvented so that errors and computational costs related to this part of the image processing pipeline are avoided. Instead, the material parameters are quantified through their known relation to parameters of our model, which is fitted directly to the raw, unsegmented data. We present an automated model fitting procedure that gives reproducible results without human bias and enables automatic analysis of large sets of tomograms. For more complex structure analysis questions, a segmentation is still beneficial. We show that our model can be used as input to existing probabilistic methods, providing a segmentation that is based on the physics of the imaged sample. Because our model accounts for mixed-material voxels stemming from blurring inherent to the imaging technique, we reduce the errors that other methods can create at interfaces between materials. (Manuscript accepted for publication in Materials Characterization.)
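
    A simplified, one-dimensional sketch of the fitting idea (the paper models intensities and gradient magnitudes jointly; only the intensity histogram is fitted here): the raw histogram is modelled as two pure-phase Gaussians plus a blur-induced interface term, and approximate volume fractions are read off the fitted weights without segmenting any voxel. Starting values and helper names are illustrative, not the paper's.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(v, mu, sigma):
        return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def interface(v, mu1, mu2, sigma, n=200):
        # Histogram contribution of voxels blurred across a two-material interface.
        t = np.linspace(0.0, 1.0, n)[:, None]
        return np.trapz(gauss(v[None, :], t * mu1 + (1 - t) * mu2, sigma), t.ravel(), axis=0)

    def model(v, w1, w2, w12, mu1, mu2, sigma):
        return (w1 * gauss(v, mu1, sigma) + w2 * gauss(v, mu2, sigma)
                + w12 * interface(v, mu1, mu2, sigma))

    def fit_volume_fractions(bin_centers, histogram, p0=(0.4, 0.4, 0.2, 0.2, 0.8, 0.05)):
        # Fit the model directly to the raw histogram; the normalized weights give
        # approximate fractions of the two pure phases and the interface region.
        popt, _ = curve_fit(model, bin_centers, histogram, p0=p0, maxfev=20000)
        w = np.abs(popt[:3])
        return w / w.sum()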