    Intensity Segmentation of the Human Brain with Tissue dependent Homogenization

    High-precision segmentation of the human cerebral cortex from T1-weighted MRI remains a challenging task. When opting for an intensity-based approach, careful data processing is mandatory to overcome inaccuracies caused by noise, partial volume effects and systematic signal intensity variations imposed by the limited homogeneity of the acquisition hardware. We propose an intensity segmentation that is free of any shape prior and, for the first time, uses either grey matter (GM) or white matter (WM) based homogenization. This tissue dependency was introduced after analysis of 60 high-resolution MRI datasets revealed appreciable differences in the axial bias field corrections depending on whether they are based on GM or WM. Homogenization starts with axial bias correction, followed by a spatially irregular distortion correction and finally noise reduction. The axial bias correction is constructed from partitions of a depth histogram. The irregular bias is modelled by Moody-Darken radial basis functions. Noise is eliminated by nonlinear edge-preserving and homogenizing filters. A critical point is the estimation of the training set for the irregular bias correction in the GM approach; because of the intensity edges between CSF (cerebrospinal fluid surrounding the brain and within the ventricles), GM and WM, this estimate shows acceptable stability. This supervised approach gains high flexibility and precision for the segmentation of normal and pathological brains. Its precision is demonstrated on the Montreal brain phantom, and real data applications exemplify the advantage of the GM-based approach over the usual WM homogenization, allowing improved cortex segmentation.
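    A Moody-Darken style radial basis function fit, as mentioned for the irregular bias, can be sketched as fixed Gaussian centers with linear weights solved by least squares. This is a minimal illustration, not the authors' implementation; the function names, the 2-D coordinates and the single shared width are assumptions.

```python
import numpy as np

def fit_rbf_bias(coords, intensities, centers, width):
    """Fit a smooth bias surface with Gaussian radial basis functions
    (Moody-Darken style: fixed centers, linear weights via least squares)."""
    d2 = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    phi = np.exp(-d2 / (2.0 * width ** 2))          # design matrix of RBF responses
    weights, *_ = np.linalg.lstsq(phi, intensities, rcond=None)
    return weights

def eval_rbf_bias(coords, centers, width, weights):
    """Evaluate the fitted bias surface at arbitrary coordinates."""
    d2 = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ weights
```

    In practice the training coordinates would be voxels classified as GM (or WM), and the fitted surface would be divided out of the image.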

    Diffusion Tensor Imaging: on the assessment of data quality - a preliminary bootstrap analysis

    In the field of nuclear magnetic resonance imaging, diffusion tensor imaging (DTI) has proven an important method for characterising ultrastructural tissue properties. Yet various technical and biological sources of signal uncertainty may propagate into variables derived from diffusion-weighted images and thus compromise data validity and reliability. To obtain an objective quality rating of real raw data, we implemented the previously described bootstrap methodology (Efron, 1979) and investigated its sensitivity to a selection of extraneous influencing factors. We applied the bootstrap method to real DTI data volumes of six volunteers, varied by different acquisition conditions, smoothing and artificial noising. In addition, a clinical sample of 46 multiple sclerosis (MS) patients and 24 healthy controls was investigated. The response variables (RV) extracted from the histogram of the confidence intervals of fractional anisotropy were mean width, peak position and peak height. Added noise showed a significant effect once it exceeded about 130% of the original background noise. Applying an edge-preserving smoothing algorithm resulted in an inverse alteration of the RV. Subject motion was also clearly depicted, whereas its prevention by a vacuum device yielded only marginal improvement. We also observed a marked gender-specific effect in the sample of 24 healthy control subjects, the causes of which remained unclear. In contrast, the mere effect of a different signal intensity distribution due to illness (MS) did not alter the response variables.
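    The core of the cited Efron (1979) methodology is the percentile bootstrap: resample the data with replacement, recompute the statistic, and read a confidence interval off the empirical quantiles. A minimal generic sketch (the DTI-specific pipeline, which applies this per voxel to fractional anisotropy, is not reproduced here):

```python
import numpy as np

def bootstrap_ci(samples, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap (Efron, 1979): resample with replacement,
    recompute `stat`, return the empirical (alpha/2, 1-alpha/2) quantiles."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    n = len(samples)
    boot = np.array([stat(samples[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return float(np.quantile(boot, alpha / 2)), float(np.quantile(boot, 1 - alpha / 2))
```

    The interval width is exactly the kind of response variable the abstract analyses: a wider bootstrap CI signals noisier raw data.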

    Dynamic models in fMRI

    Most statistical methods for assessing activated voxels in fMRI experiments are based on correlation or regression analysis. In this context the main assumptions are that the baseline can be described by a few known basis functions or variables and that the effect of the stimulus, i.e. the activation, stays constant over time. As these assumptions are in many cases neither necessary nor correct, we present a new dynamic approach that does not depend on them. It allows simultaneous nonparametric estimation of the baseline as well as the time-varying effect of stimulation. This method of estimating the stimulus-related areas of the brain furthermore makes it possible to analyse the temporal and spatial development of the activation within an fMRI experiment.
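    One simple way to relax the constant-effect assumption is a local (windowed) regression in which both the baseline and the stimulus effect are re-estimated at every time point. This is an illustrative stand-in, not the paper's estimator; the window-based scheme and all names are assumptions.

```python
import numpy as np

def time_varying_effect(y, x, half_window):
    """At each time t, regress y on [1, x] within a sliding window,
    yielding a time-varying baseline b0[t] and stimulus effect b1[t]."""
    n = len(y)
    b0 = np.empty(n)
    b1 = np.empty(n)
    for t in range(n):
        lo, hi = max(0, t - half_window), min(n, t + half_window + 1)
        X = np.column_stack([np.ones(hi - lo), x[lo:hi]])
        coef, *_ = np.linalg.lstsq(X, y[lo:hi], rcond=None)
        b0[t], b1[t] = coef
    return b0, b1
```

    With a block stimulus design, b1[t] then tracks a slowly drifting activation strength instead of forcing it to a single constant.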

    Comparative Volumetry of Ultrasound and Magnetic Resonance Imaging Datasets Using a Hybrid Phantom

    In this study the accuracy of ultrasound and MRI images was investigated. For this purpose a phantom was constructed consisting of three compartments: one large compartment simulating the brain mass, and two small compartments representing the space-occupying lesions "haematoma" and "ventricular tumour". For each of these two lesion types, eight models with different, known volumes were produced. Ultrasound datasets were acquired sagittally (all haematoma models and 4 tumour models) and coronally (4 tumour models). MRI datasets (T2-weighted coronal slices) were also acquired. The volume of each lesion was determined eight times for both imaging modalities by manual segmentation by two independent observers. Across both object classes, all objects and both observers, only minimal, albeit statistically significant, deviations (p>0.05) from the true volumes were found: for ultrasound the deviation was 1.99 +/- 1.44% for the object class "haematoma" and 2.21 +/- 1.70% for the object class "tumour"; for MRI it was 2.9 +/- 0.0143% for "haematoma" and 1.48 +/- 0.0135% for "tumour". The errors arose independently of imaging modality, object class, volume and observer. In general, however, a diverging tendency between the two modalities was observed: ultrasound tended to underestimate and MRI to overestimate the volume. This work showed that sonographic volume determination of brain lesions is on a par with MRI. Sonography could thus represent a cost-effective and intraoperatively usable alternative to magnetic resonance imaging for this question.
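    The reported accuracy figures are percentage deviations of measured from true phantom volumes, summarised as mean +/- standard deviation. A minimal sketch of that summary (function names are assumptions):

```python
import statistics

def percent_deviation(measured, true_volume):
    # Absolute deviation from the known phantom volume, in percent.
    return abs(measured - true_volume) / true_volume * 100.0

def deviation_stats(measured_list, true_list):
    # Mean and standard deviation of the per-object percentage deviations.
    devs = [percent_deviation(m, t) for m, t in zip(measured_list, true_list)]
    return statistics.mean(devs), statistics.stdev(devs)
```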

    Is the Brain Cortex a Fractal?

    We analyse whether the human cerebral cortex is self-similar in a statistical sense, a property usually referred to as being a fractal. The presented analysis covers all spatial scales from the brain size down to the image resolution. Results obtained in two healthy volunteers show that self-similarity holds down to a spatial scale of 2.5 mm. The fractal dimensions obtained are D = 2.73 ± 0.05 and D = 2.67 ± 0.05 respectively, in good agreement with previously reported results. The new computational method is volumetric and is based on the fast Fourier transform (FFT) of segmented three-dimensional high-resolution magnetic resonance images. Use of the FFT permits a simple interpretation of the results and achieves the high performance necessary to analyse the entire cortex.
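    The FFT-based idea can be sketched in 2-D: for a self-similar pattern the radially averaged power spectrum falls off as a power law, and the log-log slope relates to the fractal dimension. This is a schematic stand-in, not the paper's 3-D volumetric method; all names are assumptions.

```python
import numpy as np

def radial_spectral_slope(img):
    """Radially average the 2-D power spectrum of a square image and
    return the slope of log(power) vs log(radial frequency)."""
    n = img.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ky, kx = np.indices(img.shape)
    k = np.hypot(kx - n // 2, ky - n // 2).astype(int)   # integer radius per bin
    radial = np.bincount(k.ravel(), power.ravel()) / np.bincount(k.ravel())
    ks = np.arange(1, n // 2)                            # skip the DC bin
    slope, _ = np.polyfit(np.log(ks), np.log(radial[1 : n // 2]), 1)
    return slope
```

    For white noise the spectrum is flat (slope near zero); a fractal surface would show a markedly negative slope from which D can be derived.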

    A generic ensemble based deep convolutional neural network for semi-supervised medical image segmentation

    Deep learning based image segmentation has achieved state-of-the-art performance in many medical applications such as lesion quantification and organ detection. However, most methods rely on supervised learning, which requires a large set of high-quality labeled data, and data annotation is generally an extremely time-consuming process. To address this problem, we propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN). An encoder-decoder based DCNN is initially trained using a few annotated training samples. This initially trained model is then copied into sub-models and improved iteratively using random subsets of unlabeled data with pseudo labels generated from models trained in the previous iteration. The number of sub-models is gradually decreased to one in the final iteration. We evaluate the proposed method on a public grand-challenge dataset for skin lesion segmentation. Our method is able to improve significantly beyond fully supervised model learning by incorporating unlabeled data. Comment: Accepted for publication at IEEE International Symposium on Biomedical Imaging (ISBI) 202
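    The iterative scheme described above can be sketched with the model training and inference steps abstracted into callables. This is a structural sketch only; the `train`/`predict` placeholders, the halving schedule and the half-sized unlabeled subset are assumptions standing in for the paper's DCNN specifics.

```python
import random

def pseudo_label_rounds(labeled, unlabeled, train, predict, n_models=4, seed=0):
    """Train sub-models on the few labeled samples, then repeatedly retrain
    on labeled data plus ensemble pseudo-labels for a random unlabeled
    subset, halving the number of sub-models until one remains."""
    rng = random.Random(seed)
    models = [train(labeled) for _ in range(n_models)]
    while len(models) > 1:
        subset = rng.sample(unlabeled, max(1, len(unlabeled) // 2))
        # Ensemble pseudo-label: average the sub-model predictions.
        pseudo = [(x, sum(predict(m, x) for m in models) / len(models))
                  for x in subset]
        models = [train(labeled + pseudo) for _ in range(len(models) // 2)]
    return models[0]
```

    With real networks, `train` would fine-tune a copy of the encoder-decoder and `predict` would produce a segmentation mask.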

    Segmentation of the Brain Based on MR Data

    A segmentation method is presented that separates cerebrospinal fluid, cortex and white matter in T1-weighted MR images. The method corrects acquisition-related artefacts in several steps and determines the tissue classes by means of two global thresholds. It requires interactive adjustment of parameters at several points and is correspondingly flexible.
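    The final classification step with two global thresholds reduces to a simple mapping of intensities to three labels. A minimal sketch, assuming artefact correction has already been applied (names and label encoding are illustrative):

```python
import numpy as np

def segment_two_thresholds(volume, t_csf_gm, t_gm_wm):
    """Split a corrected T1-weighted volume into CSF (0), grey matter (1)
    and white matter (2) using two interactively chosen global thresholds."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[volume >= t_csf_gm] = 1
    labels[volume >= t_gm_wm] = 2
    return labels
```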

    Graph Theoretic Analysis of Brain Connectomics in Multiple Sclerosis: Reliability and Relationship to Cognition

    Research suggests that disruption of brain networks might explain cognitive deficits in multiple sclerosis (MS). The reliability and effectiveness of graph-theoretic network metrics as measures of cognitive performance were tested in 37 people with MS and 23 controls. Specifically, relationships to cognitive performance (linear regression against the Paced Auditory Serial Addition Test [PASAT-3], Symbol Digit Modalities Test [SDMT] and Attention Network Test [ANT]) and one-month reliability (using the intra-class correlation coefficient [ICC]) of network metrics were measured using both resting-state functional and diffusion MRI data. Cognitive impairment was directly related to measures of brain network segregation and inversely related to network integration (prediction of PASAT-3 by small-worldness, modularity and characteristic path length, R2 = 0.55; prediction of SDMT by small-worldness, global efficiency and characteristic path length, R2 = 0.60). Reliability of the measures over one month in a subset of 9 participants was mostly rated as good (ICC > 0.6) for both controls and MS patients in both functional and diffusion data, but was highly dependent on the chosen parcellation and graph density, with the 0.2-0.5 density range being the most reliable. This suggests that disrupted network organisation predicts cognitive impairment in MS and that its measurement is reliable over a one-month period. These new findings support the hypothesis of network disruption as a major determinant of cognitive deficits in MS and the future possibility of applying the derived metrics as surrogate outcomes in trials of therapies for cognitive impairment.
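    The reliability criterion used above is the intra-class correlation coefficient. As a minimal sketch, the one-way random-effects form ICC(1,1) for an n-subjects-by-k-sessions matrix can be computed from the between- and within-subject mean squares (the study's exact ICC variant is not stated here, so this form is an assumption):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects x k_sessions)
    matrix: (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    ms_between = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

    Values above 0.6, as in the abstract, indicate that between-subject differences dominate session-to-session noise.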

    Coordinate based random effect size meta-analysis of neuroimaging studies

    Low power in neuroimaging studies can make them difficult to interpret, and coordinate-based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses consistently report significant summary results (coordinates). Only the reported coordinates, and possibly t statistics, are analysed, and statistical significance of clusters is determined by coordinate density. Here a method for coordinate-based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both the coordinates and the reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random-effects meta-analysis of the reported effects, performed cluster-wise using standard statistical methods and taking account of the censoring inherent in published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space and the proportion of significant clusters expected under the null. Such control is necessary to avoid propagating, and even amplifying, the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and in syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is freely available to download and use.
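    The cluster-wise pooling rests on standard random-effects meta-analysis. A minimal sketch of the classical DerSimonian-Laird estimator that such methods build on (ClusterZ's censoring-aware model is more involved and is not reproduced here):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling: estimate between-study variance tau^2
    (DerSimonian-Laird), then inverse-variance weight the effects."""
    v = np.asarray(variances, dtype=float)
    e = np.asarray(effects, dtype=float)
    w = 1.0 / v
    fixed = (w * e).sum() / w.sum()                 # fixed-effect summary
    q = (w * (e - fixed) ** 2).sum()                # Cochran's Q heterogeneity
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(e) - 1)) / c)         # between-study variance
    w_re = 1.0 / (v + tau2)
    summary = (w_re * e).sum() / w_re.sum()
    return summary, tau2
```

    Applied cluster-wise to standardised reported effects, a significant pooled summary, rather than mere coordinate density, flags a consistent cluster.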