17 research outputs found

    Mixed Th1 and Th2 Mycobacterium tuberculosis-specific CD4 T cell responses in patients with active pulmonary tuberculosis from Tanzania.

    Mycobacterium tuberculosis (Mtb) and helminth infections elicit antagonistic immune effector functions and are co-endemic in several regions of the world. We therefore hypothesized that helminth infection may influence Mtb-specific T-cell immune responses. We evaluated the cytokine profile of Mtb-specific T cells in 72 individuals with pulmonary TB disease recruited from two sub-Saharan regions with high and moderate helminth burdens, respectively: 55 from Tanzania (TZ) and 17 from South Africa (SA). We showed that the Mtb-specific CD4 T-cell functional profiles of TB patients from Tanzania were primarily composed of polyfunctional Th1 and Th2 cells, associated with increased expression of Gata-3 and reduced expression of T-bet in memory CD4 T cells. In contrast, the cytokine profile of Mtb-specific CD4 T cells of TB patients from SA was dominated by single IFN-γ-producing and dual IFN-γ/TNF-α-producing cells and was associated with TB-induced systemic inflammation and elevated serum levels of type I IFNs. Of note, the proportion of patients with Mtb-specific CD8 T cells was significantly reduced in Mtb/helminth co-infected patients from TZ. Underlying helminth infection, and possibly genetic and other unknown environmental factors, likely drove the induction of mixed Th1/Th2 Mtb-specific CD4 T-cell responses in patients from TZ. Taken together, these results indicate that the generation of Mtb-specific CD4 and CD8 T-cell responses may be substantially influenced by environmental factors in vivo. These observations may have a major impact on the identification of immune biomarkers of disease status and correlates of protection.

    Interactive learning of statistics: the SMEL course


    Quantifying Inter-Subject Agreement in Brain-Imaging Analyses


    A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification.

    The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings are not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of the categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from the confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response to identify the spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural-language categories. The spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes.
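    The pipeline just described (single-trial classification, a confusion-derived RDM, and an MDS embedding of the representational space) can be sketched compactly. The snippet below is a minimal illustration, not the authors' code: the data are synthetic, and the choice of linear discriminant analysis as the linear classifier, the ten-fold cross-validation, and the symmetrization of the confusion matrix are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import MDS
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for single-trial EEG feature vectors (values invented)
rng = np.random.default_rng(0)
n_classes, trials_per_class, n_features = 6, 60, 40
X = rng.normal(size=(n_classes * trials_per_class, n_features))
y = np.repeat(np.arange(n_classes), trials_per_class)
X += y[:, None] * 0.1  # weak class structure, so classes stay confusable

# Cross-validated single-trial predictions -> multi-class confusion matrix
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
cm = confusion_matrix(y, pred).astype(float)
cm /= cm.sum(axis=1, keepdims=True)  # row-normalize to proportions

# Symmetrize confusions and convert similarity to dissimilarity (the RDM)
rdm = 1.0 - 0.5 * (cm + cm.T)
np.fill_diagonal(rdm, 0.0)  # MDS expects zero self-dissimilarity

# Two-dimensional embedding of the representational space
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(rdm)
print(coords)  # one (x, y) point per category
```

    Row-normalizing makes each row a distribution over predicted labels, so the symmetrized off-diagonal entries can be read as pairwise similarities; subtracting them from one yields the dissimilarities that MDS takes as input.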

    Category-level classification results.

    All electrodes and time samples of the brain response were used together in the six-class category-level classification. An equal number of observations from each category were used. Left: Confusion matrix showing proportions of classifier output. Rows represent actual labels and columns represent predicted labels. Values along the diagonal indicate proportions of correct classifications. Mean accuracy for this classification was 40.68%, compared to a chance-level accuracy of 16.67% (Fig 2). Middle: Multidimensional scaling (MDS) plot derived from the confusion matrix, visualizing the non-hierarchical structure of the representational space. MDS dimensions are sorted in descending order of variance explained. Right: Dendrogram visualizing the hierarchical structure of the representation. The Human Face category is the most separate from the other categories, while the two Inanimate categories form the tightest category cluster.
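    The hierarchical view in the right panel can be produced by hierarchical clustering of the confusion-derived dissimilarities. The sketch below assumes an invented, row-normalized 6 × 6 confusion matrix with the qualitative structure the caption reports (Human Face most distinct, the two Inanimate categories closest); the average-linkage method is also an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

labels = ["Human Body", "Human Face", "Animal Body",
          "Animal Face", "Fruit Vegetable", "Inanimate Object"]

# Invented row-normalized confusion proportions (rows: actual, cols: predicted)
cm = np.array([
    [.42, .05, .18, .08, .13, .14],
    [.05, .61, .06, .16, .06, .06],
    [.17, .05, .40, .12, .12, .14],
    [.08, .15, .13, .43, .10, .11],
    [.12, .05, .11, .09, .38, .25],
    [.13, .05, .12, .09, .24, .37],
])

rdm = 1.0 - 0.5 * (cm + cm.T)  # symmetrize, similarity -> dissimilarity
np.fill_diagonal(rdm, 0.0)     # squareform expects a zero diagonal

# Condensed distance vector -> average-linkage hierarchical clustering
Z = linkage(squareform(rdm, checks=False), method="average")
dendrogram(Z, labels=labels)
plt.ylabel("dissimilarity (1 - confusion proportion)")
plt.tight_layout()
plt.show()
```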

    Multidimensional scaling plots for exemplar-level classification.

    MDS coordinates were derived from the 72-class confusion matrix (Fig 7). (A) The first four MDS dimensions, scatterplotted in pairs of dimensions. Boxplots show the distribution of image-exemplar coordinates along each dimension, grouped by the category labels used previously (as in Fig 3). (B) Statistical significance of category separability along each of the four principal MDS dimensions plotted in (A). Nonparametric tests were performed on exemplar coordinates for MDS Dimensions 1–4 to assess category separability; all category pairs except the two Inanimate categories are separable at the α = 0.01 level along at least one of the four principal MDS dimensions.
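    The separability test in panel (B) could be implemented roughly as below: for each pair of categories, compare exemplar coordinates along each of the first four MDS dimensions with a nonparametric test, and flag pairs that separate on at least one dimension. The rank-sum (Mann-Whitney U) test, the function name, and the synthetic demo data are all assumptions; the paper's exact nonparametric test may differ.

```python
import itertools
import numpy as np
from scipy.stats import mannwhitneyu

def separable_pairs(coords, cats, alpha=0.01, n_dims=4):
    """Category pairs separable along at least one of the first n_dims
    MDS dimensions, by a two-sided rank-sum test at level alpha."""
    hits = []
    for a, b in itertools.combinations(np.unique(cats), 2):
        pvals = [mannwhitneyu(coords[cats == a, d],
                              coords[cats == b, d]).pvalue
                 for d in range(n_dims)]
        if min(pvals) < alpha:  # separable on at least one dimension
            hits.append((a, b))
    return hits

# Demo with synthetic exemplar coordinates (values invented)
rng = np.random.default_rng(1)
cats = np.repeat(np.array(["HumanFace", "FruitVegetable",
                           "InanimateObject"]), 12)
coords = rng.normal(size=(36, 4))
coords[cats == "HumanFace", 0] += 3.0  # faces separate along dimension 1
print(separable_pairs(coords, cats))   # only the face-vs-inanimate pairs
```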

    Exemplar-level classification results.

    The classifier attempted to predict image-exemplar labels from brain responses in a 72-class classification. Mean accuracy for the classification was 14.46%. (A) Line plot of the proportion of correct classifications for each of the 72 image exemplars. (B) Confusion matrix from the classification. The matrix diagonal, visualized in (A), has been set to zero for better display of the off-diagonal elements.
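    Zeroing the diagonal before display, as in panel (B), is a one-line masking step: correct classifications dominate the matrix, so removing them keeps the much smaller off-diagonal confusions visible. In the sketch below, cm72 is an invented random stand-in for the 72 × 72 confusion matrix.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented stand-in for the 72 x 72 row-normalized confusion matrix
rng = np.random.default_rng(0)
cm72 = rng.dirichlet(np.ones(72), size=72)  # each row sums to 1

off_diag = cm72.copy()
np.fill_diagonal(off_diag, 0.0)  # diagonal (accuracies) plotted separately
plt.imshow(off_diag, cmap="viridis")
plt.xlabel("predicted exemplar")
plt.ylabel("actual exemplar")
plt.colorbar(label="confusion proportion")
plt.show()
```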

    Summary table of classification results.

    Classifier accuracies, along with p-value (p), effect size (d), and sample standard deviation (s) across the ten participants, for classifications incorporating data from all electrodes into the feature vector. Classifications using all time points together are shown in the “0–496 ms” column; temporally resolved classifications are shown in the subsequent columns. Chance level was 1/6 = 16.67% for six-class (category-level) classifications, 1/72 = 1.39% for 72-class (exemplar-level), 1/12 = 8.33% for twelve-class (within-category), and 1/2 = 50.00% for two-class (between-category). Statistical significance and effect size were calculated against a binomial null distribution whose number of trials was the number of observations in one test fold. Some classifications could not be performed on certain participants' data because the SVD failed to converge during the computation of principal components. Results are from all ten participants unless otherwise indicated: † indicates nine participants; ‡ indicates eight participants; ⊕ indicates seven participants. A ⋆ indicates missing data from one participant in our results file.
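    The significance computation described here amounts to a one-sided binomial test of fold accuracy against chance, with an effect size computed across participants. The sketch below is one plausible reading under stated assumptions; the helper names and all example numbers are invented.

```python
import numpy as np
from scipy.stats import binom

def accuracy_p_value(n_correct, n_obs, chance):
    """One-sided p-value: P(X >= n_correct) under Binomial(n_obs, chance)."""
    return binom.sf(n_correct - 1, n_obs, chance)

def cohens_d(accuracies, chance):
    """Effect size across participants: (mean - chance) / sample std."""
    a = np.asarray(accuracies, dtype=float)
    return (a.mean() - chance) / a.std(ddof=1)

# Invented example: a six-class test fold (chance 1/6) with 72 observations,
# 29 of them classified correctly
print(accuracy_p_value(29, 72, 1 / 6))  # p far below 0.01

# Invented per-participant accuracies for ten participants
accs = [.41, .38, .44, .35, .46, .39, .42, .37, .43, .40]
print(cohens_d(accs, 1 / 6))
```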

    Stimulus set used in the experiment.

    The 72 images used in this study comprise twelve images from each of six categories: Human Body, Human Face, Animal Body, Animal Face, Fruit Vegetable, and Inanimate Object. The stimuli can be divided most broadly into Animate and Inanimate categories. Within the Animate category, images are either Human or Animal and either Body or Face. Inanimate images are either Natural or Man-made. Colored borders are added for visualization purposes only and were not shown during experimental sessions.