2,999 research outputs found

    Stochastic Algorithms for White Matter Fiber Tracking and the Inference of Brain Connectivity from MR Diffusion Tensor Data

    We consider several stochastic algorithms for fiber tracking and compute the connectivity matrix from data obtained by magnetic resonance diffusion tensor imaging of the living human brain
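As an illustrative sketch only (not the authors' algorithm; all function names and parameters are hypothetical, and the example is 2-D for brevity), stochastic fiber tracking can be caricatured as a random walk that follows the tensor's principal eigenvector with a random perturbation at each step; a connectivity matrix is then built by counting which regions the tracks reach:

```python
import numpy as np

def principal_direction(tensor):
    """Principal eigenvector of a diffusion tensor (largest eigenvalue).
    np.linalg.eigh sorts eigenvalues ascending, so the last column is it."""
    vals, vecs = np.linalg.eigh(tensor)
    return vecs[:, -1]

def stochastic_track(tensors, seed, n_steps=50, step=0.5, jitter=0.2, rng=None):
    """Propagate one streamline through a grid of 2x2 tensors,
    perturbing the principal direction at every step (the stochastic part)."""
    rng = np.random.default_rng(rng)
    pos = np.asarray(seed, dtype=float)
    prev_dir = None
    path = [pos.copy()]
    for _ in range(n_steps):
        # nearest-voxel lookup, clipped to the grid
        i, j = np.clip(pos.astype(int), 0, np.array(tensors.shape[:2]) - 1)
        d = principal_direction(tensors[i, j])
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d  # eigenvectors have no sign; keep a consistent orientation
        d = d + jitter * rng.standard_normal(2)  # stochastic perturbation
        d /= np.linalg.norm(d)
        pos = pos + step * d
        prev_dir = d
        path.append(pos.copy())
    return np.array(path)
```

Repeating this many times per seed voxel and tallying the endpoint regions of the tracks would give a (Monte Carlo) estimate of the connectivity matrix.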

    Spatial Smoothing for Diffusion Tensor Imaging with low Signal to Noise Ratios

    Although low signal to noise ratio (SNR) experiments in DTI give key information about tracking and anisotropy, e.g. through measurements with very small voxel sizes, such experiments have so far seldom been analysed because of the complicated impact of thermal noise. In this paper Monte Carlo simulations are presented which investigate the random fields of noise for different DTI variables in low SNR situations. Based on this study a strategy for spatial smoothing, which demands essentially uniform noise, is derived. To construct a convenient filter, the weights of the nonlinear Aurich chain are adapted to DTI. This edge-preserving three-dimensional filter is then validated in different variants via a quasi-realistic model and is applied to very recent data with isotropic voxels of size 1Ɨ1Ɨ1 mm³, which corresponds to a spatial mean SNR of approximately 3
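A minimal sketch of the kind of Monte Carlo noise study described above (not the paper's actual simulation; the Rician magnitude model and all parameters are illustrative assumptions): MR magnitude data are the modulus of a complex signal with Gaussian noise in both channels, and at low SNR this produces a marked positive bias that any smoothing strategy has to contend with.

```python
import numpy as np

def rician_samples(a, sigma, n, rng=None):
    """Draw n magnitude samples |(a + n_re) + i*n_im|, the Rician model
    commonly assumed for MR magnitude data with true amplitude a."""
    rng = np.random.default_rng(rng)
    re = a + sigma * rng.standard_normal(n)
    im = sigma * rng.standard_normal(n)
    return np.hypot(re, im)

# Monte Carlo estimate of the noise-induced bias of the magnitude signal:
# at low SNR the mean magnitude lies well above the true amplitude.
sigma = 1.0
for snr in (1.0, 3.0, 20.0):
    m = rician_samples(snr * sigma, sigma, 200_000, rng=42)
    print(f"SNR {snr:4.1f}: bias of mean magnitude = {m.mean() - snr * sigma:+.3f}")
```

At SNR near 1 the bias approaches half the noise standard deviation, while at high SNR it nearly vanishes, which is why low-SNR DTI variables need special statistical treatment.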

    A Brief Survey of Recent Edge-Preserving Smoothers

    We introduce recent and very recent smoothing methods and discuss them in the common framework of `energy functions'. The focus is on the preservation of boundaries, spikes and canyons in the presence of noise
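A toy example of the energy-function view (an illustrative sketch, not any specific method from the survey; the penalty and all parameters are assumptions): a smoother minimises a data-fidelity term plus a penalty on neighbouring differences, and choosing a penalty that saturates for large differences makes jumps cheap to keep, i.e. edge-preserving.

```python
import numpy as np

def smooth_energy(y, lam=1.0, k=0.5, n_iter=300, step=0.05):
    """Gradient descent on the 1-D energy
        E(u) = sum_i (u_i - y_i)^2 + lam * sum_i rho(u_{i+1} - u_i)
    with rho(t) = t^2 / (1 + (t/k)^2): quadratic for small differences
    (noise is averaged out), bounded for large ones (edges survive)."""
    y = np.asarray(y, float)
    u = y.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        g = 2 * d / (1 + (d / k) ** 2) ** 2   # rho'(d)
        grad = 2 * (u - y)
        grad[1:] += lam * g                   # d/du_i of rho(u_i - u_{i-1})
        grad[:-1] -= lam * g                  # d/du_i of rho(u_{i+1} - u_i)
        u -= step * grad
    return u
```

With a purely quadratic penalty the same descent would blur the edge; the saturating rho is what distinguishes the edge-preserving smoothers discussed in the survey.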

    Diffusion Tensor Imaging: on the assessment of data quality - a preliminary bootstrap analysis

    In the field of nuclear magnetic resonance imaging, diffusion tensor imaging (DTI) has proven an important method for the characterisation of ultrastructural tissue properties. Yet various technical and biological sources of signal uncertainty may propagate into variables derived from diffusion-weighted images and thus compromise data validity and reliability. To gain an objective quality rating of real raw data, we aimed at implementing the previously described bootstrap methodology (Efron, 1979) and investigating its sensitivity to a selection of extraneous influencing factors. We applied the bootstrap method to real DTI data volumes of six volunteers, which were varied by different acquisition conditions, smoothing and artificial noising. In addition, a clinical sample of 46 Multiple Sclerosis patients and 24 healthy controls was investigated. The response variables (RV) extracted from the histogram of the confidence intervals of fractional anisotropy were mean width, peak position and peak height. The addition of noise showed a significant effect once it exceeded about 130% of the original background noise. The application of an edge-preserving smoothing algorithm resulted in an inverse alteration of the RV. Subject motion was also clearly depicted, whereas its prevention by use of a vacuum device resulted in only a marginal improvement. We also observed a marked gender-specific effect in a sample of 24 healthy control subjects, the causes of which remained unclear. In contrast, the mere effect of a different signal intensity distribution due to illness (MS) did not alter the response variables
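The core of the bootstrap methodology (Efron, 1979) referenced above can be sketched generically (this is not the authors' DTI pipeline; the statistic and data here are placeholders): resample the measurements with replacement, recompute the statistic each time, and read a confidence interval off the percentiles of the resampled values. Wider intervals then flag noisier, less reliable data.

```python
import numpy as np

def bootstrap_ci(samples, stat, n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for statistic `stat`."""
    rng = np.random.default_rng(rng)
    samples = np.asarray(samples, float)
    n = len(samples)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)       # resample with replacement
        reps[b] = stat(samples[idx])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

In the study above, the per-voxel confidence intervals of fractional anisotropy are computed in this spirit, and the histogram of their widths serves as the data-quality index.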

    Intensity Segmentation of the Human Brain with Tissue dependent Homogenization

    High-precision segmentation of the human cerebral cortex based on T1-weighted MRI is still a challenging task. When opting for an intensity-based approach, careful data processing is mandatory to overcome inaccuracies caused by noise, partial volume effects and systematic signal intensity variations imposed by the limited homogeneity of the acquisition hardware. We propose an intensity segmentation which is free from any shape prior. For the first time, it uses alternatively grey matter (GM)- or white matter (WM)-based homogenization. This new tissue dependency was introduced after the analysis of 60 high-resolution MRI datasets revealed appreciable differences in the axial bias field corrections, depending on whether they are based on GM or WM. Homogenization starts with an axial bias correction, a spatially irregular distortion correction follows, and finally a noise reduction is applied. The construction of the axial bias correction is based on partitions of a depth histogram. The irregular bias is modelled by Moody-Darken radial basis functions. Noise is eliminated by nonlinear edge-preserving and homogenizing filters. A critical point is the estimation of the training set for the irregular bias correction in the GM approach. Because of the intensity edges between CSF (the cerebrospinal fluid surrounding the brain and within the ventricles), GM and WM, this estimate shows acceptable stability. This supervised approach gains high flexibility and precision for the segmentation of normal and pathological brains. The precision of the approach is demonstrated using the Montreal brain phantom. Real data applications exemplify the advantage of the GM-based approach over the usual WM homogenization, allowing improved cortex segmentation

    DTI denoising for data with low signal to noise ratios

    Low signal-to-noise ratio (SNR) experiments in diffusion tensor imaging (DTI) give key information about tracking and anisotropy, e.g., by measurements with small voxel sizes or with high b values. However, due to the complicated and dominating impact of thermal noise, such data are still seldom analysed. In this paper Monte Carlo simulations are presented which investigate the distributions of noise for different DTI variables in low SNR situations. Based on this study a strategy for the application of spatial smoothing is derived. Optimal prerequisites for spatial filters are unbiased, bell-shaped distributions with uniform variance, but only a few variables have statistics close to that. To construct a convenient filter, a chain of nonlinear Gaussian filters is adapted to the peculiarities of DTI and a bias correction is introduced. This edge-preserving three-dimensional filter is then validated via a quasi-realistic model. Further, it is shown that for small sample sizes the filter is as effective as a maximum likelihood estimator and produces reliable results down to a local SNR of approximately 1. The filter is finally applied to very recent data with isotropic voxels of size 1Ɨ1Ɨ1 mm³, which corresponds to a spatial mean SNR of 2.5. This application demonstrates the statistical robustness of the filter method. Though the Rician noise model is only approximately realized in the data, the gain of information by spatial smoothing is considerable
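The idea of a chain of nonlinear Gaussian filters can be illustrated with a bilateral-type step (a simplified sketch under stated assumptions, not the filter actually used in the paper: no DTI-specific weight adaptation, no bias correction, and periodic boundaries via np.roll for brevity). Each neighbour is weighted by spatial distance and by intensity difference, so large jumps (edges) contribute almost nothing, and iterating the step forms the chain:

```python
import numpy as np

def nonlinear_gaussian_step(vol, sigma_s=1.0, sigma_r=0.1, radius=1):
    """One nonlinear (bilateral-type) Gaussian step on a 3-D volume.
    Neighbours are down-weighted by spatial distance AND intensity
    difference, so intensity edges are preserved.  np.roll wraps at
    the boundaries -- a simplification for this sketch."""
    out = np.zeros_like(vol, dtype=float)
    wsum = np.zeros_like(vol, dtype=float)
    offs = range(-radius, radius + 1)
    for dx in offs:
        for dy in offs:
            for dz in offs:
                shifted = np.roll(vol, (dx, dy, dz), axis=(0, 1, 2))
                w_s = np.exp(-(dx**2 + dy**2 + dz**2) / (2 * sigma_s**2))
                w_r = np.exp(-((shifted - vol) ** 2) / (2 * sigma_r**2))
                w = w_s * w_r
                out += w * shifted
                wsum += w
    return out / wsum

def filter_chain(vol, n_steps=3, **kw):
    """Iterate the nonlinear step: a simple stand-in for the filter chain."""
    for _ in range(n_steps):
        vol = nonlinear_gaussian_step(vol, **kw)
    return vol
```

The range parameter sigma_r plays the role of the edge threshold: differences well below it are averaged like plain Gaussian smoothing, differences well above it are left untouched.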

    On the Use of Local RBF Networks to Approximate Multivalued Functions and Relations

    A connectionist model made up of a combination of RBF networks is proposed; the model decomposes multivalued dependencies into local single valued functions; theory and applications are presented
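A minimal sketch of the decomposition idea (illustrative only; the class, its parameters, and the training setup are assumptions, not the authors' model): a multivalued relation such as the circle x² + y² = 1 assigns two y values to most x, so no single network can fit it, but splitting it into single-valued branches and fitting one small RBF network per branch works.

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian RBF network, output weights fitted by least squares."""
    def __init__(self, centers, width=0.3):
        self.c = np.asarray(centers, float)
        self.w = width
        self.coef = None

    def _phi(self, x):
        # design matrix: one Gaussian bump per centre
        x = np.asarray(x, float)[:, None]
        return np.exp(-((x - self.c[None, :]) ** 2) / (2 * self.w ** 2))

    def fit(self, x, y):
        self.coef, *_ = np.linalg.lstsq(self._phi(x), y, rcond=None)
        return self

    def predict(self, x):
        return self._phi(x) @ self.coef

# Multivalued relation x^2 + y^2 = 1, decomposed into two
# single-valued branches, each approximated by its own local RBF net.
x = np.linspace(-0.9, 0.9, 50)
upper, lower = np.sqrt(1 - x**2), -np.sqrt(1 - x**2)
centers = np.linspace(-1, 1, 9)
nets = [RBFNet(centers).fit(x, upper), RBFNet(centers).fit(x, lower)]
```

Querying the relation at a point then means evaluating every branch network, which returns the full set of y values rather than a single (meaningless) average.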

    Effect of temperature and time after collection on buck sperm quality

    Background: Different parameters are assessed as part of the semen analysis, but a standard protocol for the evaluation of goat semen is still missing. The aim of this study was to analyse two different factors affecting buck sperm quality in the post-collection period prior to adding the extender. Here we examined the effects of two handling temperatures (20 °C, 37 °C) and various examination time points (3-30 min) after semen collection. Results: Examination time point had a significant influence on raw sperm viability (p < 0.05). Handling temperature had no significant influence on viability (p > 0.05) or motility (p > 0.05), with the exception of fast moving sperm (p = 0.04), or on semen pH (p > 0.05). Conclusion: Examination time point was identified as a factor strongly influencing raw goat buck semen after collection. Raw goat semen can tolerate room temperature for at least 10 min without impact on overall semen quality. In order to obtain comparable results, semen samples should always be examined within 10 min after collection
    • ā€¦
    corecore