
    Comparison of quality control methods for automated diffusion tensor imaging analysis pipelines

    © 2019 Haddad et al. The processing of brain diffusion tensor imaging (DTI) data for large cohort studies requires fully automatic pipelines to perform quality control (QC) and artifact/outlier removal procedures on the raw DTI data prior to calculation of diffusion parameters. In this study, three automatic DTI processing pipelines, each complying with the general ENIGMA framework, were designed by uniquely combining multiple image processing software tools. Different QC procedures based on the RESTORE algorithm, the DTIPrep protocol, and a combination of both methods were compared using simulated ground-truth and artifact-containing DTI datasets modeling eddy current-induced distortions, various levels of motion artifacts, and thermal noise. Variability was also examined in 20 DTI datasets acquired in subjects with vascular cognitive impairment (VCI) from the multi-site Ontario Neurodegenerative Disease Research Initiative (ONDRI). The mean fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were calculated in global brain grey matter (GM) and white matter (WM) regions. For the simulated DTI datasets, pipeline performance was evaluated as the normalized difference between the mean DTI metrics measured in GM and WM regions and the corresponding ground-truth DTI values. The performance of the proposed pipelines was very similar, particularly in FA measurements. However, the pipeline based on the RESTORE algorithm was the most accurate when analyzing the artifact-containing DTI datasets. The pipeline that combined the DTIPrep protocol and the RESTORE algorithm produced the lowest standard deviation in FA measurements in normal-appearing WM across subjects. We concluded that this pipeline was the most robust and is preferred for automated analysis of multisite brain DTI data.
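
    A minimal sketch of the normalized-difference accuracy measure described above, assuming it is computed as |measured − ground truth| / ground truth for each regional mean DTI metric; the FA values and variable names below are illustrative, not taken from the study.

```python
import numpy as np

def normalized_difference(measured, ground_truth):
    """Normalized difference between a measured regional DTI metric (e.g. mean
    FA in WM) and its ground-truth value; lower values indicate a more
    accurate pipeline (assumed formulation)."""
    return np.abs(measured - ground_truth) / np.abs(ground_truth)

# Hypothetical mean WM FA from three pipelines vs. a simulated ground truth of 0.50
pipeline_fa = np.array([0.48, 0.51, 0.47])
print(normalized_difference(pipeline_fa, 0.50))
```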

    Blocking endothelial apoptosis revascularises the retina in a model of ischemic retinopathy

    Aberrant neovascular retinal blood vessel growth is a vision-threatening complication in ischemic retinal diseases. It is driven by retinal hypoxia frequently caused by capillary nonperfusion and endothelial cell (EC) loss. We investigated the role of EC apoptosis in this process using a mouse model of ischemic retinopathy, in which vessel closure and EC apoptosis cause capillary regression and retinal ischemia followed by neovascularization. Protecting ECs from apoptosis in this model did not prevent capillary closure or retinal ischemia. Nonetheless, it prevented the clearance of ECs from closed capillaries, delaying vessel regression and allowing ECs to persist in clusters throughout the ischemic zone. In response to hypoxia, these preserved ECs underwent a vessel sprouting response and rapidly reassembled into a functional vascular network. This alleviated retinal hypoxia, preventing subsequent pathogenic neovascularization. Vessel reassembly was not limited by VEGFA neutralization, suggesting it was not dependent on the excess VEGFA produced by the ischemic retina. Neutralization of ANG2 did not prevent vessel reassembly, but did impair subsequent angiogenic expansion of the reassembled vessels. Blockade of EC apoptosis may promote ischemic tissue revascularization by preserving ECs within ischemic tissue that retain the capacity to reassemble a functional network and rapidly restore blood supply.

    Multisite Comparison of MRI Defacing Software Across Multiple Cohorts

    With improvements to both scan quality and facial recognition software, there is an increased risk of participants being identified by a 3D render of their structural neuroimaging scans, even when all other personal information has been removed. To prevent this, facial features should be removed before data are shared or openly released, but while there are several publicly available software algorithms to do this, there has been no comprehensive review of their accuracy within the general population. To address this, we tested multiple algorithms on 300 scans from three neuroscience research projects, funded in part by the Ontario Brain Institute, to cover a wide range of ages (3–85 years) and multiple patient cohorts. While skull stripping is more thorough at removing identifiable features, we focused mainly on defacing software, as skull stripping also removes potentially useful information that may be required for future analyses. We tested six publicly available algorithms (afni_refacer, deepdefacer, mri_deface, mridefacer, pydeface, quickshear), with one skull stripper (FreeSurfer) included for comparison. Accuracy was measured through a pass/fail system with two criteria: first, that all facial features had been removed, and second, that no brain tissue was removed in the process. A subset of defaced scans was also run through several preprocessing pipelines to ensure that none of the algorithms would alter the resulting outputs. We found that success rates varied strongly between defacers, with afni_refacer (89%) and pydeface (83%) having the highest rates overall. In both cases, the primary source of failure was a single dataset that the defacer appeared to struggle with: the youngest cohort (3–20 years) for afni_refacer and the oldest (44–85 years) for pydeface, demonstrating that defacer performance depends on the data provided and that this dependence varies between algorithms. While there were some very minor differences between the preprocessing results for defaced and original scans, none of these were significant, and all were within the range of variation seen when using different NIfTI converters or raw DICOM files.
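
    A minimal sketch of the two-criterion pass/fail scoring described above; only the rule that a scan must satisfy both criteria to pass comes from the abstract, while the per-scan ratings and the algorithms chosen below are hypothetical.

```python
# Each scan is rated on two criteria: all facial features removed, and no
# brain tissue removed. A scan passes only if both criteria are met.
ratings = {
    "afni_refacer": [(True, True), (True, True), (True, False)],
    "pydeface":     [(True, True), (False, True), (True, True)],
}

for algorithm, scans in ratings.items():
    passes = sum(face_removed and brain_intact
                 for face_removed, brain_intact in scans)
    print(f"{algorithm}: {passes / len(scans):.0%} pass rate")
```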

    Cortical Thickness Estimation in Individuals With Cerebral Small Vessel Disease, Focal Atrophy, and Chronic Stroke Lesions

    Background: Regional changes to cortical thickness in individuals with neurodegenerative and cerebrovascular diseases (CVD) can be estimated using specialized neuroimaging software. However, the presence of cerebral small vessel disease, focal atrophy, and cortico-subcortical stroke lesions poses significant challenges that increase the likelihood of misclassification errors and segmentation failures. Purpose: The main goal of this study was to examine a correction procedure developed for enhancing FreeSurfer’s (FS’s) cortical thickness estimation tool, particularly when applied to the most challenging MRI scans obtained from participants with chronic stroke and CVD, with varying degrees of neurovascular lesions and brain atrophy. Methods: In 155 CVD participants enrolled in the Ontario Neurodegenerative Disease Research Initiative (ONDRI), FS outputs were compared between a fully automated, unmodified procedure and a corrected procedure that accounted for potential sources of error due to atrophy and neurovascular lesions. Quality control (QC) measures were obtained from both procedures. The association between cortical thickness and global cognitive status, as assessed by the Montreal Cognitive Assessment (MoCA) score, was also investigated for both procedures. Results: The corrected procedure increased “Acceptable” QC ratings from 18 to 76% for the cortical ribbon and from 38 to 92% for tissue segmentation. It also reduced “Fail” ratings from 11 to 0% for the cortical ribbon and from 62 to 8% for tissue segmentation. FS-based volumes of T1-weighted white matter hypointensities were significantly greater in the corrected procedure (5.8 mL vs. 15.9 mL, p < 0.001). The unmodified procedure yielded no significant associations with global cognitive status, whereas the corrected procedure yielded positive associations between MoCA total score and clusters of cortical thickness in the left superior parietal (p = 0.018) and left insula (p = 0.04) regions. Further analyses of the corrected cortical thickness results and MoCA subscores showed a positive association between left superior parietal cortical thickness and Attention (p < 0.001). Conclusion: These findings suggest that correction procedures which account for brain atrophy and neurovascular lesions can significantly improve FS’s segmentation results and reduce failure rates, thus maximizing statistical power by preventing the loss of study participants. Future work will examine relationships between cortical thickness, cerebral small vessel disease, and cognitive dysfunction due to neurodegenerative disease in the ONDRI study.

    Improved Segmentation of the Intracranial and Ventricular Volumes in Populations with Cerebrovascular Lesions and Atrophy Using 3D CNNs

    Successful segmentation of the total intracranial vault (ICV) and ventricles is of critical importance when studying neurodegeneration through neuroimaging. We present iCVMapper and VentMapper, robust algorithms that use a convolutional neural network (CNN) to segment the ICV and ventricles from both single and multi-contrast MRI data. Our models were trained on a large dataset from two multi-site studies (N = 528 subjects for ICV, N = 501 for ventricular segmentation) consisting of older adults with varying degrees of cerebrovascular lesions and atrophy, which pose significant challenges for most segmentation approaches. The models were tested on 238 participants, including subjects with vascular cognitive impairment and high white matter hyperintensity burden. Two of the three test sets came from studies not used in the training dataset. We assessed our algorithms relative to five state-of-the-art ICV extraction methods (MONSTR, BET, Deep Extraction, FreeSurfer, DeepMedic), as well as two ventricular segmentation tools (FreeSurfer, DeepMedic). Our multi-contrast models outperformed the other methods across many of the evaluation metrics, with average Dice coefficients of 0.98 and 0.96 for ICV and ventricular segmentation, respectively. Both models were also the most time efficient, segmenting the structures orders of magnitude faster than some of the other available methods. Our networks showed increased accuracy with the use of a conditional random field (CRF) as a post-processing step. We further validated both segmentation models, highlighting their robustness to images with lower resolution and signal-to-noise ratio compared to the tested techniques. The pipeline and models are available at https://icvmapp3r.readthedocs.io and https://ventmapp3r.readthedocs.io to enable further investigation of the roles of the ICV and ventricles in relation to normal aging and neurodegeneration in large multi-site studies.
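
    The Dice coefficient used above to benchmark the segmentations measures voxel overlap between a predicted mask and a reference mask. Below is a minimal sketch with NumPy; the random masks stand in for real segmentations, which would normally be loaded from NIfTI files (e.g. with nibabel).

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity between two binary masks: 2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

# Illustrative stand-in masks; in practice these would be a model's ICV or
# ventricle output and a manually delineated reference volume.
pred_mask = np.random.rand(64, 64, 64) > 0.5
ref_mask = np.random.rand(64, 64, 64) > 0.5
print(f"Dice: {dice_coefficient(pred_mask, ref_mask):.3f}")
```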

    SCAR knockouts in Dictyostelium: WASP assumes SCAR's position and upstream regulators in pseudopods

    Under normal conditions, the Arp2/3 complex activator SCAR/WAVE controls actin polymerization in pseudopods, whereas Wiskott–Aldrich syndrome protein (WASP) assembles actin at clathrin-coated pits. We show that, unexpectedly, Dictyostelium discoideum SCAR knockouts could still spread, migrate, and chemotax using pseudopods driven by the Arp2/3 complex. In the absence of SCAR, some WASP relocated from the coated pits to the leading edge, where it behaved with dynamics similar to those of normal SCAR, forming split pseudopods and traveling waves. Pseudopods colocalized with active Rac, whether driven by WASP or SCAR, though Rac was activated to a higher level in SCAR mutants. Members of the SCAR regulatory complex, in particular PIR121, were not required for WASP regulation. We thus show that WASP is able to respond to all core upstream signals and that regulators coupled through the other members of SCAR’s regulatory complex are not essential for pseudopod formation. We conclude that WASP and SCAR can regulate pseudopod actin using similar mechanisms.

    Evaluating the effectiveness of agricultural adaptation to climate change in preindustrial society

    The effectiveness of agricultural adaptation determines the vulnerability of this sector to climate change, particularly during the preindustrial era. However, this effectiveness has rarely been quantitatively evaluated, especially at a large spatial and long-term scale. The present study addresses this gap for preindustrial society in AD 1500–1800. Given the absence of technological innovations in this time frame, agricultural production was chiefly augmented by cultivating more land (land input) and increasing labor input per land unit (labor input). Accordingly, these two methods are quantitatively examined. Statistical results show that, within the study scale, land input is a more effective approach to mitigating climatic impact than labor input. Nonetheless, these observations collectively refine Boserup's theory from the perspective of a large spatial and long-term scale.