
    Test-retest reliability of structural brain networks from diffusion MRI

    Structural brain networks constructed from diffusion MRI (dMRI) and tractography have been demonstrated in healthy volunteers and, more recently, in various disorders affecting brain connectivity. However, few studies have addressed the reproducibility of the resulting networks. We measured the test–retest properties of such networks by varying several factors affecting network construction, using ten healthy volunteers who underwent a dMRI protocol at 1.5 T on two separate occasions. Each T1-weighted brain was parcellated into 84 regions-of-interest and network connections were identified using dMRI and two alternative tractography algorithms, two alternative seeding strategies, a white matter waypoint constraint and three alternative network weightings. In each case, four common graph-theoretic measures were obtained. Network properties were assessed both node-wise and per network in terms of the intraclass correlation coefficient (ICC) and by comparing within- and between-subject differences. Our findings suggest that test–retest performance was improved when: 1) seeding from white matter, rather than grey matter; and 2) using probabilistic tractography with a two-fibre model and sufficient streamlines, rather than deterministic tensor tractography. In terms of network weighting, a measure of streamline density produced better test–retest performance than tract-averaged diffusion anisotropy, although it remains unclear which is a more accurate representation of the underlying connectivity. For the best-performing configuration, the global within-subject differences were between 3.2% and 11.9%, with ICCs between 0.62 and 0.76. The mean nodal within-subject differences were between 5.2% and 24.2%, with mean ICCs between 0.46 and 0.62. For 83.3% (70/84) of nodes, the within-subject differences were smaller than the between-subject differences. Overall, these findings suggest that whilst current techniques produce networks capable of characterising genuine between-subject differences in connectivity, future work must be undertaken to improve network reliability.
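    As an illustration of the reliability analysis described above, the sketch below computes a one-way intraclass correlation coefficient and the within-/between-subject differences for a single global network measure obtained in two sessions. The data, variable names, and the choice of ICC form are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC, ICC(1,1), for a (subjects x sessions) array.

    Computed from the one-way ANOVA mean squares:
        ICC = (MS_between - MS_within) / (MS_between + (k - 1) * MS_within)
    where k is the number of sessions per subject.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)

    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Placeholder data: a global graph measure for 10 subjects scanned twice.
rng = np.random.default_rng(0)
subject_effect = rng.normal(0.60, 0.05, size=(10, 1))            # stable per-subject level
measure = subject_effect + rng.normal(0.0, 0.02, size=(10, 2))   # plus session noise

within = np.abs(measure[:, 0] - measure[:, 1]).mean()
pairs = np.triu_indices(10, k=1)
between = np.abs(np.subtract.outer(measure.mean(1), measure.mean(1)))[pairs].mean()
print(f"ICC = {icc_oneway(measure):.2f}, within = {within:.3f}, between = {between:.3f}")
```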

    Thermal error modelling of machine tools based on ANFIS with fuzzy c-means clustering using a thermal imaging camera

    Thermal errors are often quoted as being the largest contributor to CNC machine tool errors, but they can be effectively reduced using error compensation. The performance of a thermal error compensation system depends on the accuracy and robustness of the thermal error model and on the quality of its inputs. The location of temperature measurement must provide a representative measurement of the change in temperature that will affect the machine structure. The number of sensors and their locations are not always intuitive, and the time required to identify the optimal locations is often prohibitive, resulting in compromise and poor results. In this paper, a new intelligent compensation system for reducing thermal errors of machine tools using data obtained from a thermal imaging camera is introduced. Different groups of key temperature points were identified from thermal images using a novel schema based on a Grey model GM(0, N) and the Fuzzy c-means (FCM) clustering method. An Adaptive Neuro-Fuzzy Inference System with Fuzzy c-means clustering (FCM-ANFIS) was employed to design the thermal prediction model. In order to optimise the approach, a parametric study was carried out by varying the number of inputs and the number of membership functions of the FCM-ANFIS model, and comparing the relative robustness of the designs. According to the results, the FCM-ANFIS model with four inputs and six membership functions achieves the best predictive accuracy. The residual value of the model is smaller than ±2 μm, which represents a 95% reduction in the thermally induced error on the machine. Finally, the proposed method is shown to compare favourably against an Artificial Neural Network (ANN) model.
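    The sensor-grouping step can be pictured with a plain-NumPy fuzzy c-means routine, sketched below under stated assumptions: the feature matrix, number of clusters, and fuzzifier are placeholders, and the paper's Grey model GM(0, N) ranking step is not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain-NumPy fuzzy c-means.

    X : (n_samples, n_features) feature matrix
    c : number of clusters
    m : fuzzifier (> 1); m = 2 is a common default
    Returns (centres, U), where U is the (n_samples, c) membership matrix.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample

    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                         # guard against zero distances
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Placeholder features for 20 candidate temperature points (e.g. temperature
# rise and a time constant extracted from the thermal images).
rng = np.random.default_rng(1)
features = rng.normal(size=(20, 2))
centres, membership = fuzzy_c_means(features, c=6)
key_points = membership.argmax(axis=0)                # most representative point per cluster
print(key_points)
```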

    Deep grey matter volumetry as a function of age using a semi-automatic qMRI algorithm

    Quantitative Magnetic Resonance (qMRI) has become increasingly accepted for clinical trials in many fields. The technique not only generates qMRI maps (such as T1/T2/PD) but can also be used for further post-processing, including segmentation of the brain and characterization of different brain tissues. Another main application of qMRI is to measure the volume of brain tissue such as the deep grey matter (dGM). The deep grey matter serves as the brain's "relay station", receiving and sending inputs between the cortical brain regions. An abnormal dGM volume is associated with certain diseases such as Fetal Alcohol Spectrum Disorders (FASD). The goal of this study is to investigate the effect of age on dGM volume using qMRI. Thirteen patients (mean age 26.7 years, age range 0.5 to 72.5 years) underwent imaging on a 1.5 T MR scanner. Axial images of the entire brain were acquired with the mixed Turbo Spin-Echo (mixed-TSE) pulse sequence. The acquired mixed-TSE images were transferred in DICOM format for further analysis using the MathCAD 2001i software (Mathsoft, Cambridge, MA). Quantitative T1- and T2-weighted MR images were generated, and the image data sets were further segmented using dual-space clustering segmentation. The volume of the dGM was then calculated using a pixel-counting algorithm, and the spectrum of the T1/T2/PD distribution was also generated. Afterwards, the dGM volume of each patient was plotted on a scatter plot, and the mean dGM volume, standard deviation, and range were calculated. The results show that the dGM volume is 47.5 ± 5.3 ml (N = 13), which is consistent with former studies. The polynomial tendency line fitted to the scatter plot shows that dGM volume gradually increases with age in early life, reaches its maximum around the age of 20, then decreases gradually through adulthood and declines much faster in old age. This result may help scientists understand more about the aging of the brain, and it can also be compared with results from former studies using different techniques.
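    The two post-processing steps described above, pixel counting of a segmented dGM mask and fitting a polynomial tendency line of volume against age, can be sketched as follows; the mask, voxel dimensions, and age/volume values are placeholders rather than the study's data.

```python
import numpy as np

# --- Volume by pixel counting (placeholder mask and voxel size) ---
mask = np.zeros((256, 256, 30), dtype=bool)           # binary dGM segmentation mask
mask[100:140, 100:150, 10:20] = True                  # placeholder segmented region

voxel_volume_ml = (0.9 * 0.9 * 3.0) / 1000.0          # assumed voxel size in mm^3 -> ml
dgm_volume_ml = mask.sum() * voxel_volume_ml
print(f"dGM volume: {dgm_volume_ml:.1f} ml")

# --- Polynomial tendency line of volume vs. age (placeholder values) ---
ages = np.array([0.5, 3, 8, 15, 21, 27, 35, 45, 55, 63, 70, 72, 72.5])
volumes = np.array([38, 44, 47, 50, 52, 51, 50, 48, 46, 44, 41, 40, 39.5])

trend = np.poly1d(np.polyfit(ages, volumes, deg=3))   # cubic tendency line
grid = np.linspace(ages.min(), ages.max(), 500)
print(f"Fitted peak volume near age {grid[np.argmax(trend(grid))]:.0f} years")
```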

    The Tiling Algorithm for the 6dF Galaxy Survey

    The Six Degree Field Galaxy Survey (6dFGS) is a spectroscopic survey of the southern sky, which aims to provide positions and velocities of galaxies in the nearby Universe. We present here the adaptive tiling algorithm developed to place 6dFGS fields on the sky and to allocate targets to those fields. Optimal solutions to survey field placement are generally extremely difficult to find, especially in this era of large-scale galaxy surveys, as the space of available solutions is vast (2N-dimensional) and false optimal solutions abound. The 6dFGS algorithm utilises the Metropolis (simulated annealing) method to overcome this problem. By design the algorithm gives uniform completeness independent of local density, so as to result in a highly complete and uniform observed sample. The adaptive tiling achieves a sampling rate of approximately 95%, a variation in the sampling uniformity of less than 5%, and an efficiency in terms of used fibres per field of greater than 90%. We have tested whether the tiling algorithm systematically biases the large-scale structure in the survey by studying the two-point correlation function of mock 6dF volumes. Our analysis shows that the constraints on fibre proximity with 6dF lead to underestimating galaxy clustering on small scales (< 1 Mpc) by up to ~20%, but that the tiling introduces no significant sampling bias at larger scales. (11 pages, 7 figures. A full-resolution version of the paper is available from http://www.mso.anu.edu.au/6dFGS/ .)
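    A toy Metropolis (simulated annealing) placement of fields over a 1-D strip of targets is sketched below to illustrate the optimisation idea; the cost function, cooling schedule, perturbation, and field size are illustrative assumptions and not the 6dFGS implementation.

```python
import numpy as np

def anneal_tiles(targets, n_tiles, fov=5.7, steps=20000, t0=1.0, seed=0):
    """Toy Metropolis / simulated-annealing placement of tile centres on a 1-D strip.

    targets : 1-D array of target positions (degrees)
    Cost    : number of targets not covered by any tile of half-width fov / 2.
    """
    rng = np.random.default_rng(seed)
    tiles = rng.uniform(targets.min(), targets.max(), n_tiles)

    def cost(t):
        covered = (np.abs(targets[:, None] - t[None, :]) < fov / 2).any(axis=1)
        return np.count_nonzero(~covered)

    current = cost(tiles)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-6       # simple linear cooling schedule
        trial = tiles.copy()
        trial[rng.integers(n_tiles)] += rng.normal(scale=fov)   # perturb one tile centre
        delta = cost(trial) - current
        # Metropolis rule: accept improvements; accept worse moves with prob exp(-delta/T)
        if delta <= 0 or rng.random() < np.exp(-delta / temp):
            tiles, current = trial, current + delta
    return tiles, current

targets = np.random.default_rng(1).uniform(0.0, 360.0, 2000)  # toy target positions
tiles, missed = anneal_tiles(targets, n_tiles=80)             # fov=5.7 deg, 6dF-like field
print(f"{missed} of {targets.size} targets left uncovered")
```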

    Pragmatic Ontology Evolution: Reconciling User Requirements and Application Performance

    Increasingly, organizations are adopting ontologies to describe their large catalogues of items. These ontologies need to evolve regularly in response to changes in the domain and the emergence of new requirements. An important step of this process is the selection of candidate concepts to include in the new version of the ontology. This operation needs to take into account a variety of factors and, in particular, to reconcile user requirements and application performance. Current ontology evolution methods focus either on ranking concepts according to their relevance or on preserving compatibility with existing applications. However, they do not take into consideration the impact of the ontology evolution process on the performance of computational tasks; in this work, for example, we focus on instance tagging, similarity computation, generation of recommendations, and data clustering. In this paper, we propose the Pragmatic Ontology Evolution (POE) framework, a novel approach for selecting, from a group of candidates, a set of concepts able to produce a new version of a given ontology that i) is consistent with a set of user requirements (e.g., a maximum number of concepts in the ontology), ii) is parametrised with respect to a number of dimensions (e.g., topological considerations), and iii) effectively supports relevant computational tasks. Our approach also supports users in navigating the space of possible solutions by showing how certain choices, such as limiting the number of concepts or privileging trendy concepts over historical ones, would reflect on application performance. An evaluation of POE on the real-world scenario of the evolving Springer Nature taxonomy for editorial classification yielded excellent results, demonstrating a significant improvement over alternative approaches.
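    The selection step can be pictured as a weighted score-and-select over candidate concepts under a user-set cap on ontology size, as in the sketch below; the scoring features, weights, and concept names are illustrative and do not reproduce POE's actual method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    relevance: float    # relevance / trendiness of the concept
    task_gain: float    # estimated benefit for downstream computational tasks
    depth: int          # topological consideration, e.g. depth in the taxonomy

def select_concepts(candidates, max_concepts, w_rel=0.4, w_task=0.5, w_depth=0.1):
    """Rank candidates by a weighted score and keep the top max_concepts.

    The weights expose the trade-off between user requirements (relevance),
    application performance (task_gain) and topology (depth).
    """
    score = lambda c: w_rel * c.relevance + w_task * c.task_gain - w_depth * c.depth
    return sorted(candidates, key=score, reverse=True)[:max_concepts]

# Made-up candidate concepts, for illustration only.
pool = [
    Candidate("deep_learning", 0.9, 0.8, 2),
    Candidate("expert_systems", 0.3, 0.2, 2),
    Candidate("graph_neural_networks", 0.8, 0.9, 3),
]
for concept in select_concepts(pool, max_concepts=2):
    print(concept.name)
```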