    Th17 cells in systemic lupus erythematosus share functional features with Th17 cells from normal bone marrow and peripheral tissues

    This study was designed to investigate the functional heterogeneity of human Th17 cells and how their plasticity shapes immune cell responses in inflammation and autoimmune diseases such as systemic lupus erythematosus (SLE). We evaluated functional Th17 cell subsets based on the profile of cytokine production in peripheral blood (PB), bone marrow aspirates (BM) and lymph node biopsies (LN) from healthy individuals (n = 35), and in PB from SLE patients (n = 34). Data were analysed by an automated method for merging and calculation of flow cytometric data, allowing us to identify eight Th17 subpopulations. Normal BM presented lower frequencies of Th17 cells (p = 0.006 and p = 0.05) and a lower amount of IL-17 per cell (p = 0.03 and p = 0.02) compared to normal PB and LN biopsies. The latter tissues showed increased proportions of Th17 cells producing TNF-α, TNF-α/IL-2 or IFN-γ/TNF-α/IL-2, whereas in BM the proportion of Th17 cells producing cytokines other than IL-17 was clearly decreased. In SLE patients, the frequency of Th17 cells was higher than in controls, but the levels of IL-17 per cell were significantly reduced (p < 0.05). Among the eight generated subpopulations, despite the great functional heterogeneity of Th17 cells in SLE, a significantly low proportion of Th17 cells producing TNF-α was found in inactive SLE, while active SLE showed a high proportion producing only IL-17. Our findings support the idea that the functional heterogeneity of Th17 cells could depend on the cytokine microenvironment, which is distinct in normal BM as well as in active SLE, probably due to a Th1/Th2 imbalance previously reported by our group.
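The eight functional subpopulations follow combinatorially from IL-17+ cells co-producing (or not) each of the three cytokines the abstract names: with three binary markers there are 2^3 = 8 profiles. A minimal sketch of this enumeration, assuming the subsets are exactly the presence/absence combinations of IFN-γ, TNF-α and IL-2 on an IL-17+ gate (a labeling convention chosen here for illustration, not taken from the paper):

```python
from itertools import product

# Co-produced cytokines considered on the IL-17+ (Th17) gate; the specific
# marker set and naming are illustrative assumptions.
COCYTOKINES = ["IFN-g", "TNF-a", "IL-2"]

def th17_subpopulations():
    """Enumerate the 2^3 = 8 cytokine co-expression profiles of IL-17+ cells."""
    subsets = []
    for flags in product([False, True], repeat=len(COCYTOKINES)):
        produced = [c for c, on in zip(COCYTOKINES, flags) if on]
        subsets.append("IL-17" + ("" if not produced else "/" + "/".join(produced)))
    return subsets

print(th17_subpopulations())
```

The first entry, plain "IL-17", corresponds to the "producing only IL-17" subset that the abstract reports as enriched in active SLE.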

    Fully Automated Myocardial Strain Estimation from Cardiovascular MRI–tagged Images Using a Deep Learning Framework in the UK Biobank

    Purpose: To demonstrate the feasibility and performance of a fully automated deep learning framework to estimate myocardial strain from short-axis cardiac magnetic resonance tagged images. Methods and Materials: In this retrospective cross-sectional study, 4508 cases from the UK Biobank were split randomly into 3244 training, 812 validation, and 452 test cases. Ground-truth myocardial landmarks were defined and tracked by manual initialization and correction of deformable image registration, using previously validated software with five readers. The fully automatic framework consisted of 1) a convolutional neural network (CNN) for localization, and 2) a combination of a recurrent neural network (RNN) and a CNN to detect and track the myocardial landmarks through the image sequence for each slice. Radial and circumferential strain were then calculated from the motion of the landmarks and averaged on a slice basis. Results: Within the test set, myocardial end-systolic circumferential Green strain errors were -0.001 +/- 0.025, -0.001 +/- 0.021, and 0.004 +/- 0.035 in basal, mid, and apical slices, respectively (mean +/- standard deviation of differences between predicted and manual strain). The framework reproduced significant reductions in circumferential strain in diabetics, hypertensives, and participants with previous heart attack. Typical processing time was ~260 frames (~13 slices) per second on an NVIDIA Tesla K40 with 12 GB RAM, compared with 6-8 minutes per slice for the manual analysis. Conclusions: The fully automated RNN/CNN framework for analysis of myocardial strain enabled unbiased strain evaluation in a high-throughput workflow, with similar ability to distinguish impairment due to diabetes, hypertension, and previous heart attack. Comment: accepted in Radiology: Cardiothoracic Imaging.
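The last computational step the abstract describes, strain from landmark motion, reduces to the one-dimensional Green (Lagrangian) strain E = (l² − l0²)/(2 l0²) applied to segments between adjacent tracked landmarks and averaged per slice. A minimal sketch of that step, assuming landmarks are ordered around the circumference and end-diastole is the reference frame (the function names and data layout are illustrative, not the paper's code):

```python
import math

def green_strain(l0, l):
    """1D Green (Lagrangian) strain for a segment of reference length l0
    deformed to length l: E = (l^2 - l0^2) / (2 * l0^2)."""
    return (l * l - l0 * l0) / (2.0 * l0 * l0)

def circumferential_strain(landmarks_ed, landmarks_es):
    """Average Green strain over adjacent landmark pairs along the closed
    myocardial contour, end-diastole (reference) vs. end-systole."""
    def seg_lengths(pts):
        # lengths of segments between consecutive landmarks, wrapping around
        return [math.dist(pts[i], pts[(i + 1) % len(pts)])
                for i in range(len(pts))]
    strains = [green_strain(a, b)
               for a, b in zip(seg_lengths(landmarks_ed),
                               seg_lengths(landmarks_es))]
    return sum(strains) / len(strains)
```

For a contour that uniformly contracts to 90% of its reference size, every segment scales by 0.9 and the circumferential Green strain is (0.81 − 1)/2 = −0.095, a negative value as expected for systolic shortening.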

    The BINGO project: V. Further steps in component separation and bispectrum analysis

    Context. Observing the neutral hydrogen distribution across the Universe via redshifted 21 cm line intensity mapping constitutes a powerful probe for cosmology. However, the redshifted 21 cm signal is obscured by the foreground emission from our Galaxy and other extragalactic foregrounds. This paper addresses the capabilities of the BINGO survey to separate such signals. Aims. We show that the BINGO instrumental, optical, and simulations setup is suitable for component separation, and that we have the appropriate tools to understand and control foreground residuals. Specifically, this paper looks in detail at the different residuals left over by foreground components, shows that a noise-corrected spectrum is unbiased, and shows that we understand the remaining systematic residuals by analyzing nonzero contributions to the three-point function. Methods. We use the generalized needlet internal linear combination, which we apply to sky simulations of the BINGO experiment for each redshift bin of the survey. We use binned estimates of the bispectrum of the maps to assess foreground residuals left over after component separation in the final map. Results. We present our recovery of the redshifted 21 cm signal from sky simulations of the BINGO experiment, including foreground components. We test the recovery of the 21 cm signal through the angular power spectrum at different redshifts, as well as the recovery of its non-Gaussian distribution through a bispectrum analysis. We find that non-Gaussianities from the original foreground maps can be removed down to, at least, the noise limit of the BINGO survey with such techniques. Conclusions. Our component separation methodology allows us to subtract the foreground contamination in the BINGO channels down to levels below the cosmological signal and the noise, and to reconstruct the 21 cm power spectrum for different redshift bins without significant loss at multipoles 20 ≲ ℓ ≲ 500. Our bispectrum analysis yields strong tests of the level of the residual foreground contamination in the recovered 21 cm signal, thereby allowing us to both optimize and validate our component separation analysis.
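The core of any internal-linear-combination (ILC) method, of which the needlet ILC used above is a localized variant, is a variance-minimizing weighted sum of channel maps constrained to preserve a signal with known (here, unit) frequency response. A minimal sketch under the simplifying assumption of a diagonal channel covariance, where the constrained minimization reduces to inverse-variance weights (the real generalized needlet ILC uses full covariances per needlet scale; this toy version only illustrates the weighting principle):

```python
def ilc_weights(cov_diag):
    """ILC weights for a unit-response signal under a diagonal channel
    covariance C: w_i proportional to 1/C_ii, normalised so sum(w) = 1.
    The normalisation preserves the common (21 cm) signal; the inverse-
    variance weighting minimises the residual foreground/noise variance."""
    inv = [1.0 / c for c in cov_diag]
    total = sum(inv)
    return [v / total for v in inv]

def combine(channel_maps, weights):
    """Weighted linear combination of per-channel sky maps (pixel lists)."""
    npix = len(channel_maps[0])
    return [sum(w * m[p] for w, m in zip(weights, channel_maps))
            for p in range(npix)]
```

Because the weights sum to one, a signal common to all channels passes through unchanged, while channel-dependent foregrounds and noise are suppressed in proportion to their variance.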

    Two-Particle-Self-Consistent Approach for the Hubbard Model

    Even at weak to intermediate coupling, the Hubbard model poses a formidable challenge. In two dimensions in particular, standard methods such as the Random Phase Approximation are no longer valid, since they predict a finite-temperature antiferromagnetic phase transition prohibited by the Mermin-Wagner theorem. The Two-Particle-Self-Consistent (TPSC) approach satisfies that theorem as well as particle conservation, the Pauli principle, and the local moment and local charge sum rules. The self-energy formula does not assume a Migdal theorem. There is consistency between one- and two-particle quantities. Internal accuracy checks allow one to test the limits of validity of TPSC. Here I present a pedagogical review of TPSC along with a short summary of existing results and two case studies: a) the opening of a pseudogap in two dimensions when the correlation length is larger than the thermal de Broglie wavelength, and b) the conditions for the appearance of d-wave superconductivity in the two-dimensional Hubbard model. Comment: Chapter in "Theoretical methods for Strongly Correlated Systems", edited by A. Avella and F. Mancini, Springer Verlag (2011), 55 pages. Misprint in Eq. (23) corrected (thanks to D. Bergeron).
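The local-moment sum rule mentioned above is what fixes the TPSC spin vertex U_sp self-consistently: with the spin susceptibility χ_sp(q) = χ₀(q)/(1 − U_sp χ₀(q)/2) and the TPSC ansatz ⟨n↑n↓⟩ = (U_sp/U)⟨n↑⟩⟨n↓⟩, one tunes U_sp until (T/N)Σ_q χ_sp(q) = n − 2⟨n↑n↓⟩. A toy sketch of that self-consistency loop, assuming a handful of hand-picked χ₀(q) values and T = 1 purely for illustration (a real calculation sums χ₀ over a full momentum/Matsubara grid):

```python
def tpsc_usp(chi0_q, n, U, temperature=1.0, tol=1e-10):
    """Toy TPSC self-consistency: find the spin vertex U_sp such that the
    local-moment sum rule
        T * mean_q[ chi0(q) / (1 - U_sp*chi0(q)/2) ] = n - 2*<n_up n_dn>
    holds, with the ansatz <n_up n_dn> = (U_sp/U) * (n/2)**2.
    chi0_q is a toy list of noninteracting susceptibility values on a q-grid."""
    def residual(usp):
        lhs = temperature * sum(c / (1.0 - usp * c / 2.0)
                                for c in chi0_q) / len(chi0_q)
        rhs = n - 2.0 * (usp / U) * (n / 2.0) ** 2
        return lhs - rhs
    # U_sp must stay below the pole at 2/max(chi0); bisect on that interval.
    lo, hi = 0.0, 2.0 / max(chi0_q) * (1.0 - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because χ_sp grows with U_sp while n − 2⟨n↑n↓⟩ shrinks, the residual crosses zero exactly once below the pole, so simple bisection suffices; this built-in constraint U_sp < 2/max χ₀ is how TPSC avoids the spurious finite-temperature transition of RPA.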