
    Extraverted children swim faster compared to introverted counterparts regardless of light and sound noise levels

    Individual differences in personality are thought to influence motor performance. In terms of cortical arousal levels, because extraverts are under-aroused and introverts are over-aroused, environmental stimuli might enhance the impact of the extraversion trait on task performance. This study investigated the effect of light and sound noise on the swimming performance of extraverted and introverted children. Nineteen extraverts (12 boys, 7 girls) and 22 introverts (12 boys, 10 girls), aged 8.2 ± 0.9 years, all adapted to water and swimming at an intermediate level, took part. Participants performed two trials of the task (swimming 15 meters as fast as possible in crawl style) under two environmental conditions: bright light/loud noise (A) and dim light/slight noise (B). Movements were filmed to allow calculation of the time to complete the task and of the stroke cycle. There was a significant effect of the group factor, with extraverts swimming faster than introverts. No effect was detected for the environment factor or for the group × environment interaction. Regarding stroke cycle, no differences were found for group, environment, or their interaction. Although extraversion did not affect mechanical aspects of the crawl style, extraverts swam faster than introverts, indicating a more effective process of reacting and executing movements in time-constrained tasks.
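
For illustration only, here is a minimal Python sketch of the 2 (group) × 2 (environment) analysis implied by this abstract, run on synthetic swim times; the column names and the plain two-way ANOVA (which ignores the repeated-measures structure of the two trials) are assumptions, not the authors' analysis.

```python
# Hypothetical sketch of the group x environment comparison described in the
# abstract (extraversion group, light/noise condition, 15 m crawl time).
# Data are synthetic; this is not the study's dataset or code.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One swim time (s) per child per environment condition (A = bright/loud, B = dim/quiet).
data = pd.DataFrame({
    "group":       ["extravert"] * 4 + ["introvert"] * 4,
    "environment": ["A", "A", "B", "B"] * 2,
    "time_s":      [16.2, 15.8, 16.5, 16.0, 18.1, 17.6, 18.4, 17.9],
})

# Two-way ANOVA: main effects of group and environment plus their interaction.
model = ols("time_s ~ C(group) * C(environment)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```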

    Altered cardiac structure and function is related to seizure frequency in a rat model of chronic acquired temporal lobe epilepsy

    Objective: This study aimed to prospectively examine cardiac structure and function in the kainic acid-induced post-status epilepticus (post-KA SE) model of chronic acquired temporal lobe epilepsy (TLE), specifically to examine for changes between the pre-epileptic, early epileptogenesis, and chronic epilepsy stages. We also aimed to examine whether any changes were related to the seizure frequency of individual animals. Methods: Four hours of SE was induced in 9 male Wistar rats at 10 weeks of age, with 8 saline-treated matched control rats. Echocardiography was performed prior to the induction of SE and at 2 and 10 weeks post-SE. Two weeks of continuous video-EEG and simultaneous ECG recordings were acquired from 11 weeks post-KA SE. The video-EEG recordings were analyzed blindly to quantify the number and severity of spontaneous seizures, and the ECG recordings were analyzed for measures of heart rate variability (HRV). Picrosirius red histology was performed to assess cardiac fibrosis, and intracellular Ca2+ levels and cell contractility were measured by microfluorimetry. Results: All 9 post-KA SE rats had spontaneous recurrent seizures on the two-week video-EEG recording acquired from 11 weeks post-SE (seizure frequency ranging from 0.3 to 10.6 seizures/day, with seizure durations from 11 to 62 s), whereas none of the 8 control rats did. The left ventricular wall was thinner, the left ventricular internal dimension was shorter, and the ejection fraction was significantly decreased in chronically epileptic rats, and these changes were negatively correlated with seizure frequency in individual rats. Diastolic dysfunction was evident in chronically epileptic rats from a decrease in mitral valve deceleration time and an increase in the E/E′ ratio. Measures of HRV were reduced in the chronically epileptic rats, indicating abnormalities of cardiac autonomic function. Cardiac fibrosis was significantly increased in epileptic rats, positively correlated with seizure frequency, and negatively correlated with ejection fraction. The cardiac fibrosis was not a consequence of a direct effect of KA toxicity, as it was not seen in the 6/10 rats from a separate cohort that received similar doses of KA but did not go into SE. Cardiomyocyte length, width, volume, and rates of cell lengthening and shortening were significantly reduced in epileptic rats. Significance: The results from this study demonstrate that chronic epilepsy in the post-KA SE rat model of TLE is associated with a progressive deterioration in cardiac structure and function, with a restrictive cardiomyopathy associated with myocardial fibrosis. Positive correlations between seizure frequency and the severity of the cardiac changes were identified. These results provide new insights into the pathophysiology of cardiac disease in chronic epilepsy and may have relevance for the heterogeneous mechanisms that place people with epilepsy at risk of sudden unexplained death.
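
A hedged Python sketch of two analyses named in this abstract: time-domain HRV measures computed from RR intervals, and a correlation between per-animal seizure frequency and an echocardiographic measure. All values are synthetic and the function and variable names are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only; numbers are made up and do not reproduce the study's data.
import numpy as np
from scipy.stats import pearsonr

def time_domain_hrv(rr_ms: np.ndarray) -> dict:
    """SDNN and RMSSD, two standard time-domain HRV measures, from RR intervals in ms."""
    diffs = np.diff(rr_ms)
    return {
        "SDNN_ms": float(np.std(rr_ms, ddof=1)),
        "RMSSD_ms": float(np.sqrt(np.mean(diffs ** 2))),
    }

rr = np.array([182, 185, 179, 190, 184, 188, 181, 186], dtype=float)  # synthetic rat RR intervals (ms)
print(time_domain_hrv(rr))

# Synthetic per-animal values: seizures/day vs echocardiographic ejection fraction (%).
seizure_freq = np.array([0.3, 1.2, 2.5, 4.0, 6.8, 10.6])
ejection_fraction = np.array([78, 74, 71, 69, 64, 60], dtype=float)
r, p = pearsonr(seizure_freq, ejection_fraction)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r would mirror the reported direction
```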

    Plasma Lead Concentration and Risk of Late Kidney Allograft Failure:Findings From the TransplantLines Biobank and Cohort Studies

    Rationale &amp; Objective: Heavy metals are known to induce kidney damage, and recent studies have linked minor exposures to cadmium and arsenic with increased risk of kidney allograft failure, yet the potential association of lead with late graft failure in kidney transplant recipients (KTRs) remains unknown. Study Design: Prospective cohort study in The Netherlands. Setting &amp; Participants: We studied outpatient KTRs (n = 670) with a functioning graft for ≥1 year recruited at a university setting (2008-2011) and followed for a median of 4.9 (interquartile range, 3.4-5.5) years. Additionally, patients with chronic kidney disease (n = 46) enrolled in the ongoing TransplantLines Cohort and Biobank Study (2016-2017, ClinicalTrials.gov identifier NCT03272841) were studied at admission for transplant and at 3, 6, 12, and 24 months after transplant. Exposure: Plasma lead concentration was log2-transformed to estimate the association with outcomes per doubling of plasma lead concentration and also considered categorically as tertiles of lead distribution. Outcome: Kidney graft failure (restart of dialysis or repeat transplant) with the competing event of death with a functioning graft. Analytical Approach: Multivariable-adjusted cause-specific hazards models in which follow-up of KTRs who died with a functioning graft was censored. Results: Median baseline plasma lead concentration was 0.31 (interquartile range, 0.22-0.45) μg/L among all KTRs. During follow-up, 78 (12%) KTRs experienced graft failure. Higher plasma lead concentration was associated with increased risk of graft failure (hazard ratio, 1.59 [95% CI, 1.14-2.21] per doubling; P = 0.006) independent of age, sex, transplant characteristics, estimated glomerular filtration rate, proteinuria, smoking status, alcohol intake, and plasma concentrations of cadmium and arsenic. These findings remained materially unchanged after additional adjustment for dietary intake and were consistent with those of analyses examining lead categorically. In serial measurements, plasma lead concentration was significantly higher at admission for transplant than at 3 months after transplant (P = 0.001), after which it remained stable over 2 years of follow-up (P = 0.2). Limitations: Observational study design. Conclusions: Pretransplant plasma lead concentrations, which decrease after transplant, are associated with increased risk of late kidney allograft failure. These findings warrant further studies to evaluate whether preventive or therapeutic interventions to decrease plasma lead concentration may represent novel risk-management strategies to decrease the rate of kidney allograft failure.</p

    The effect of previous SARS-CoV-2 infection on systemic immune responses in individuals with tuberculosis

    Background: The impact of previous SARS-CoV-2 infection on the systemic immune response during tuberculosis (TB) disease has not been explored. Methods: An observational, cross-sectional cohort was established to evaluate the systemic immune response in persons with pulmonary tuberculosis with or without previous SARS-CoV-2 infection. Participants were recruited in an outpatient referral clinic in Rio de Janeiro, Brazil. TB was defined as a positive Xpert-MTB/RIF Ultra and/or a positive culture of Mycobacterium tuberculosis from sputum. Stored plasma was used to perform specific serology to identify previous SARS-CoV-2 infection (TB/Prex-SCoV-2 group) and to confirm non-infection in the tuberculosis-only group (TB group). Plasma cytokine/chemokine/growth factor profiling was performed using Luminex technology. Tuberculosis severity was assessed by clinical and laboratory parameters. Only 4.55% of the TB group and none (0.00%) of the TB/Prex-SCoV-2 group had received complete COVID-19 vaccination. Results: Among 35 participants with pulmonary TB, 22 were classified as TB/Prex-SCoV-2. The parameters associated with TB severity, together with hematologic and biochemical data, were similar between the TB and TB/Prex-SCoV-2 groups. Among signs and symptoms, fever and dyspnea were significantly more frequent in the TB group than in the TB/Prex-SCoV-2 group (p < 0.05). A signature based on lower plasma levels of EGF, G-CSF, GM-CSF, IFN-α2, IL-12(p70), IL-13, IL-15, IL-17, IL-1β, IL-5, IL-7, and TNF-β was observed in the TB/Prex-SCoV-2 group. In contrast, MIP-1β was significantly higher in the TB/Prex-SCoV-2 group than in the TB group. Conclusion: TB patients previously infected with SARS-CoV-2 showed an immunomodulation associated with lower plasma concentrations of soluble factors linked to systemic inflammation. This signature was associated with a lower frequency of symptoms such as fever and dyspnea but did not translate into significant differences in the TB severity parameters observed at baseline.
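
A hedged Python sketch of the kind of group comparison behind the reported cytokine signature; the abstract does not name the statistical test, so the Mann-Whitney U test and Benjamini-Hochberg correction used here, the synthetic concentrations, and the analyte subset are assumptions for illustration only (group sizes 13 and 22 follow the abstract).

```python
# Illustrative multi-analyte group comparison on synthetic Luminex-style data.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
cytokines = ["EGF", "G-CSF", "IL-1b", "TNF-b", "MIP-1b"]
tb_only = {c: rng.lognormal(2.0, 0.4, 13) for c in cytokines}  # TB group (n = 13)
tb_prex = {c: rng.lognormal(1.7, 0.4, 22) for c in cytokines}  # TB/Prex-SCoV-2 group (n = 22)

# Per-cytokine two-group test, then false discovery rate correction across analytes.
pvals = [mannwhitneyu(tb_only[c], tb_prex[c]).pvalue for c in cytokines]
reject, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for c, p, sig in zip(cytokines, p_adj, reject):
    print(f"{c}: adjusted p = {p:.3f}{' *' if sig else ''}")
```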

    Efficient Capture of Infected Neutrophils by Dendritic Cells in the Skin Inhibits the Early Anti-Leishmania Response

    Neutrophils and dendritic cells (DCs) converge at localized sites of acute inflammation in the skin following pathogen deposition by the bites of arthropod vectors or by needle injection. Prior studies in mice have shown that neutrophils are the predominant recruited and infected cells during the earliest stage of Leishmania major infection in the skin, and that neutrophil depletion promotes host resistance to sand fly-transmitted infection. How the massive influx of neutrophils aimed at wound repair and sterilization might modulate the function of DCs in the skin has not been previously addressed. Infected neutrophils recovered from the skin expressed elevated apoptotic markers compared with uninfected neutrophils and were preferentially captured by dermal DCs when injected back into the mouse ear dermis. Following direct challenge with L. major, the majority of infected DCs recovered from the skin at 24 hr stained positive for neutrophil markers, indicating that they had acquired their parasites via uptake of infected neutrophils. When infected dermal DCs were recovered from neutrophil-depleted mice, their expression of activation markers was markedly enhanced, as was their capacity to present Leishmania antigens ex vivo. Neutrophil depletion also enhanced the priming of L. major-specific CD4+ T cells in vivo. The findings suggest that following their rapid uptake by neutrophils in the skin, L. major exploits the immunosuppressive effects associated with the apoptotic cell clearance function of DCs to inhibit the development of acquired resistance until the acute neutrophilic response is resolved.

    Multitrophic Interaction in the Rhizosphere of Maize: Root Feeding of Western Corn Rootworm Larvae Alters the Microbial Community Composition

    BACKGROUND: Larvae of the Western Corn Rootworm (WCR) feeding on maize roots cause heavy economic losses in the US and in Europe. New or adapted pest management strategies urgently require a better understanding of the multitrophic interactions in the rhizosphere. This study aimed to investigate the effect of WCR root feeding on the microbial communities colonizing the maize rhizosphere. METHODOLOGY/PRINCIPAL FINDINGS: In a greenhouse experiment, maize lines KWS13, KWS14, KWS15 and MON88017 were grown in three different soil types in the presence and in the absence of WCR larvae. Bacterial and fungal community structures were analyzed by denaturing gradient gel electrophoresis (DGGE) of 16S rRNA gene and ITS fragments PCR-amplified from the total rhizosphere community DNA. DGGE bands with increased intensity were excised from the gel, cloned and sequenced in order to identify specific bacteria responding to WCR larval feeding. DGGE fingerprints showed that the soil type and the maize line influenced the fungal and bacterial communities inhabiting the maize rhizosphere. WCR larval feeding affected the rhizosphere microbial populations in a soil type- and maize line-dependent manner. DGGE band sequencing revealed an increased abundance of Acinetobacter calcoaceticus in the rhizosphere of several maize lines in all soil types upon WCR larval feeding. CONCLUSION/SIGNIFICANCE: The effects of both the rhizosphere (soil type and maize line) and WCR larval feeding appeared to be stronger on bacterial communities than on fungi. Bacterial and fungal community shifts in response to larval feeding were most likely due to changes in root exudation patterns. The increased abundance of A. calcoaceticus suggests that phenolic compounds were released upon WCR wounding.