
    Exploiting the Richness of Environmental Waterborne Bacterial Species to Find Natural Legionella pneumophila Competitors

    Legionella pneumophila is one of the most closely tracked waterborne pathogens and remains an important threat to human health. Despite the use of biocides, L. pneumophila is able to persist in engineered water systems with the help of multispecies biofilms and phagocytic protists. In recent years, high-throughput sequencing methods have enabled a better understanding of microbial communities in freshwater environments. These unexplored and complex communities compete for nutrients using antagonistic molecules as weapons. To date, few of these molecules have been characterized with regard to L. pneumophila sensitivity. In this context, we established a vast collection of culturable bacteria from five freshwater environments and investigated their ability to inhibit the growth of L. pneumophila. Based on 16S rRNA coding sequences, all bacterial isolates were classified within four phyla, namely Proteobacteria (179/273), Bacteroidetes (48/273), Firmicutes (43/273), and Actinobacteria (3/273). Aeromonas, Bacillus, Flavobacterium, and Pseudomonas were the most abundant genera (154/273). Among the 273 isolates, 178 (65.2%) were shown to be active against L. pneumophila, including 137 isolates of the four main genera cited above. Additionally, less represented genera such as Acinetobacter, Kluyvera, Rahnella, and Sphingobacterium also displayed anti-Legionella activity. Furthermore, inhibition diameters varied widely among active isolates, ranging from 0.4 to 9 cm. Such variability suggests the presence of numerous and diverse natural compounds in the microenvironment of L. pneumophila. These molecules include both diffusible secreted compounds and volatile organic compounds, the latter being mainly produced by Pseudomonas strains. Altogether, this work sheds light on unexplored freshwater bacterial communities that could be relevant for the biological control of L. pneumophila in man-made water systems.

    Feasibility of a multiple-choice Mini Mental State Examination for chronically critically ill patients

    Objectives: Following treatment in an ICU, up to 70% of chronically critically ill patients present neurocognitive impairment that can have negative effects on their quality of life, daily activities, and return to work. The Mini Mental State Examination is a simple, widely used tool for neurocognitive assessment. Although of interest when evaluating ICU patients, the current version is restricted to patients who are able to speak. This study aimed to evaluate the feasibility of a visual, multiple-choice Mini Mental State Examination for ICU patients who are unable to speak. Design: The multiple-choice Mini Mental State Examination and the standard Mini Mental State Examination were compared across three different speaking populations. The interrater and intrarater reliabilities of the multiple-choice Mini Mental State Examination were tested on both intubated and tracheostomized ICU patients. Setting: Mixed 36-bed ICU and neuropsychology department in a university hospital. Subjects: Twenty-six healthy volunteers, 20 neurological patients, 46 ICU patients able to speak, and 30 intubated or tracheostomized ICU patients. Interventions: None. Measurements and Main Results: Multiple-choice Mini Mental State Examination results correlated satisfactorily with standard Mini Mental State Examination results in all three speaking groups: healthy volunteers: intraclass correlation coefficient = 0.43 (95% CI, –0.18 to 0.62); neurology patients: 0.90 (95% CI, 0.82–0.95); and ICU patients able to speak: 0.86 (95% CI, 0.70–0.92). The interrater and intrarater reliabilities were good (0.95 [0.87–0.98] and 0.94 [0.31–0.99], respectively). In all populations, a Bland-Altman analysis showed systematically higher scores using the multiple-choice Mini Mental State Examination. Conclusions: Administration of the multiple-choice Mini Mental State Examination to ICU patients was straightforward and produced exploitable results comparable to those of the standard Mini Mental State Examination. It should be of interest for the assessment and monitoring of the neurocognitive performance of chronically critically ill patients during and after their ICU stay. The multiple-choice Mini Mental State Examination tool's role in neurorehabilitation and its utility in monitoring neurocognitive functions in the ICU should be assessed in future studies.
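
    As context for the agreement statistics above, the following is a minimal sketch of how an intraclass correlation coefficient and a Bland-Altman analysis can be computed for paired scores from two test versions. The data, the ICC variant (one-way random effects, ICC(1,1)), and all variable names are illustrative assumptions, not the authors' actual analysis.

    # Minimal sketch (not the study's code): agreement between two test versions.
    # Hypothetical paired scores: standard MMSE and multiple-choice MMSE per subject.
    import numpy as np

    std_mmse = np.array([28, 25, 30, 22, 27, 29, 24, 26], dtype=float)  # illustrative
    mc_mmse  = np.array([29, 26, 30, 24, 28, 30, 25, 27], dtype=float)  # illustrative

    # One-way random-effects ICC(1,1): each subject scored once by each "method".
    ratings = np.stack([std_mmse, mc_mmse], axis=1)  # shape (n_subjects, k=2)
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Bland-Altman: bias and 95% limits of agreement between the two versions.
    diff = mc_mmse - std_mmse
    bias = diff.mean()  # positive bias = higher multiple-choice scores, as reported
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

    print(f"ICC(1,1) = {icc1:.2f}, bias = {bias:.2f}, LoA = [{loa[0]:.2f}, {loa[1]:.2f}]")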

    Not All Missed Doses Are the Same: Sustained NNRTI Treatment Interruptions Predict HIV Rebound at Low-to-Moderate Adherence Levels

    Background: While the relationship between average adherence to potent antiretroviral therapy and virologic outcomes is well defined, the relationship between patterns of adherence within adherence strata has not been investigated. We examined medication event monitoring system (MEMS)-defined adherence patterns and their relation to subsequent virologic rebound. Methods and Results: We selected subjects with at least 3 months of previous virologic suppression on a non-nucleoside reverse transcriptase inhibitor (NNRTI)-based regimen from two prospective cohorts in France and North America. We assessed the risk of virologic rebound, defined as HIV RNA of >400 copies/mL, according to several MEMS adherence measurements. Seventy-two subjects were studied, five of whom experienced virologic rebound. Subjects with and without virologic rebound had similar baseline characteristics, including treatment duration, regimen (efavirenz vs. nevirapine), and dosing schedule. Each 10% increase in average adherence decreased the risk of virologic rebound (OR = 0.56; 95% confidence interval (CI) [0.37, 0.81], P<0.002). Each additional consecutive day off therapy for the longest treatment interruption (OR = 1.34; 95% CI [1.15, 1.68], P<0.0001) and each additional treatment interruption of more than 2 days (OR = 1.38; 95% CI [1.13, 1.77], P<0.002) increased the risk of virologic rebound. In those with low-to-moderate adherence (i.e. <80%), treatment interruption duration (16.2 days versus 6.1 days in the control group, P<0.02), but not average adherence (53.1% vs. 55.9%, P = 0.65), was significantly associated with virologic rebound. Conclusions: At low-to-moderate adherence, sustained treatment interruptions may pose a greater risk of virologic rebound on NNRTI therapy than the same number of interspersed missed doses.
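
    Per-unit odds ratios like those reported above (per 10% of average adherence, per extra day of interruption) typically come from logistic regression on subject-level summaries. Below is a minimal sketch of that modelling step with simulated data; the sample size, predictor distributions, coefficients, and variable names are all assumptions for illustration, not the study's dataset or code.

    # Minimal sketch (illustrative, not the study's analysis): odds ratios for
    # virologic rebound from MEMS-derived adherence summaries via logistic regression.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300                                          # inflated for a stable fit
    adherence10 = rng.uniform(3, 10, n)              # average adherence, 10% units
    longest_gap = rng.poisson(5, n).astype(float)    # longest interruption, days
    # Hypothetical outcome: rebound more likely with gaps, less with adherence.
    logit = -1.0 - 0.4 * adherence10 + 0.25 * longest_gap
    rebound = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([adherence10, longest_gap]))
    fit = sm.Logit(rebound, X).fit(disp=0)
    odds_ratios = np.exp(fit.params)   # e.g., OR per 10% increase in adherence
    ci = np.exp(fit.conf_int())        # 95% CIs on the odds-ratio scale
    print(odds_ratios, ci)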

    The impact of iron supplementation in female blood donors with a decreased ferritin level and no anaemia. Rationale and design of a randomised controlled trial: a study protocol

    BACKGROUND: There is no recommendation to screen ferritin levels in blood donors, even though several studies have noted the high prevalence of iron deficiency after blood donation, particularly among menstruating females. Furthermore, some clinical trials have shown that non-anaemic women with unexplained fatigue may benefit from iron supplementation. Our objective is to determine the clinical effect of iron supplementation on fatigue in female blood donors without anaemia but with a serum ferritin level ≤ 30 ng/ml. METHODS/DESIGN: In a double-blind randomised controlled trial, we will measure the blood count and ferritin level of women under 50 years of age who donate blood to the University Hospital of Lausanne Blood Transfusion Department, at the time of donation and after 1 week. One hundred and forty donors with a ferritin level ≤ 30 ng/ml and a haemoglobin level ≥ 120 g/l (non-anaemic) a week after the donation will be included in the study and randomised to a one-month course of oral ferrous sulphate (80 mg/day of elemental iron) vs. placebo. Self-reported fatigue will be measured using a visual analogue scale. Secondary outcomes are: score of fatigue (Fatigue Severity Scale), maximal aerobic power (Chester Step Test), quality of life (SF-12), and mood disorders (Prime-MD). Haemoglobin and ferritin concentrations will be monitored before and after the intervention. DISCUSSION: Iron deficiency is a potential problem for all blood donors, especially menstruating women. To our knowledge, no other intervention study has yet evaluated the impact of iron supplementation on subjective symptoms after a blood donation. TRIAL REGISTRATION: NCT00689793.

    The prognosis of allocentric and egocentric neglect: evidence from clinical scans

    We contrasted the neuroanatomical substrates of sub-acute and chronic visuospatial deficits associated with different aspects of unilateral neglect using computed tomography scans acquired as part of routine clinical diagnosis. Voxel-wise statistical analyses were conducted on a group of 160 stroke patients scanned at a sub-acute stage. Lesion-deficit relationships were assessed across the whole brain, separately for grey and white matter. We assessed lesions that were associated with behavioural performance (i) at a sub-acute stage (within 3 months of the stroke) and (ii) at a chronic stage (more than 9 months post-stroke). Allocentric and egocentric neglect symptoms at the sub-acute stage were associated with lesions to dissociated regions within the frontal lobe, amongst other regions. However, the frontal lesions were not associated with neglect at the chronic stage. On the other hand, lesions in the angular gyrus were associated with persistent allocentric neglect. In contrast, lesions within the superior temporal gyrus extending into the supramarginal gyrus, as well as lesions within the basal ganglia and insula, were associated with persistent egocentric neglect. Damage within the temporo-parietal junction was associated with both types of neglect at the sub-acute stage and 9 months later. Furthermore, white matter disconnections resulting from damage along the superior longitudinal fasciculus were associated with both types of neglect and critically related to both sub-acute and chronic deficits. Finally, there was a significant difference in lesion volume between patients who recovered from neglect and patients with chronic deficits. The findings presented provide evidence that (i) lesion location and lesion size can be used to successfully predict the outcome of neglect based on clinical CT scans, (ii) lesion location alone can serve as a critical predictor for persistent neglect symptoms, (iii) widespread lesions are associated with neglect symptoms at the sub-acute stage but only some of these are critical for predicting whether neglect will become a chronic disorder, and (iv) the severity of behavioural symptoms can be a useful predictor of recovery in the absence of neuroimaging findings on clinical scans. We discuss the implications for understanding the symptoms of the neglect syndrome, the recovery of function, and the use of clinical scans to predict outcome.
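
    The voxel-wise lesion-deficit analysis mentioned above is, in its simplest mass-univariate form, a per-voxel comparison of behavioural scores between patients with and without damage at that voxel. The sketch below illustrates that generic approach on simulated binary lesion maps; the data, the minimum-overlap threshold, the Bonferroni-style correction, and all names are assumptions for illustration and do not reproduce the paper's actual pipeline.

    # Minimal voxel-wise lesion-symptom mapping sketch (illustrative only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_patients, n_voxels = 160, 5000                 # flattened binary lesion maps
    lesions = rng.binomial(1, 0.1, size=(n_patients, n_voxels))
    neglect_score = rng.normal(0, 1, n_patients)     # hypothetical behavioural score

    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        mask = lesions[:, v] == 1
        if mask.sum() >= 5 and (~mask).sum() >= 5:   # require minimum lesion overlap
            t_map[v] = stats.ttest_ind(neglect_score[mask],
                                       neglect_score[~mask]).statistic

    # Simplistic Bonferroni threshold over the voxels actually tested.
    tested = ~np.isnan(t_map)
    alpha = 0.05 / tested.sum()
    crit = stats.t.ppf(1 - alpha / 2, df=n_patients - 2)
    significant = tested & (np.abs(t_map) > crit)
    print(significant.sum(), "voxels exceed the corrected threshold")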

    Methodological standards in non-inferiority AIDS trials: moving from adherence to compliance

    BACKGROUND: The interpretation of the results of active-control trials regarding the efficacy and safety of a new drug is important for drug registration and subsequent clinical use. It has been suggested that non-inferiority and equivalence studies are not reported with the same quantitative rigour as superiority studies. METHODS: Standard methodological criteria for non-inferiority and equivalence trials, covering design, analysis, and interpretation issues, were applied to 18 recently conducted large randomized trials of AIDS antiretroviral therapy (15 non-inferiority and 3 equivalence trials). We used the continuity-corrected non-inferiority chi-square test to compare the 95% confidence interval of the treatment difference against the predefined non-inferiority margin. RESULTS: The pre-specified non-inferiority margin ranged from 10% to 15%. Only 4 studies provided justification for their choice. 39% of the studies (7/18) reported only an intent-to-treat (ITT) analysis for the primary endpoint. When both on-treatment (OT) and ITT statistical analyses were provided, ITT was favoured over OT for the interpretation of results in all but one study, which is inappropriate in this statistical context. All but two of the studies concluded that the experimental treatment had "similar" efficacy. However, 9/18 had inconclusive results for non-inferiority. CONCLUSION: Conclusions about non-inferiority should be drawn on the basis of the confidence interval analysis of an appropriate primary endpoint, using the predefined criteria for non-inferiority, in both OT and ITT analyses, in compliance with the non-inferiority and equivalence CONSORT statement. We suggest that the use of the non-inferiority chi-square test may provide additional useful information.
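
    The core decision rule described above, comparing the confidence interval of the treatment difference against a predefined margin, can be sketched as follows. This uses a continuity-corrected Wald interval for a difference in success proportions; it is not the exact chi-square variant the authors applied, and the counts, margin, and function name are illustrative assumptions.

    # Minimal sketch of the CI-versus-margin non-inferiority decision rule
    # (continuity-corrected Wald interval; not the paper's exact test).
    from math import sqrt
    from scipy.stats import norm

    def noninferior(x_new, n_new, x_ref, n_ref, margin=0.12, alpha=0.05):
        """Success counts/sizes for the new and reference arms; margin as a proportion."""
        p_new, p_ref = x_new / n_new, x_ref / n_ref
        diff = p_ref - p_new                     # efficacy lost by the new drug
        se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
        cc = 0.5 * (1 / n_new + 1 / n_ref)       # continuity correction
        z = norm.ppf(1 - alpha / 2)
        upper = diff + z * se + cc               # upper CI bound on lost efficacy
        return upper < margin, upper

    # Illustrative numbers: 78% vs. 80% virologic success against a 12% margin.
    ok, upper = noninferior(312, 400, 320, 400)
    print(f"non-inferior: {ok}, upper CI bound on the difference: {upper:.3f}")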

    Clinical evaluation of iron treatment efficiency among non-anemic but iron-deficient female blood donors: a randomized controlled trial

    BACKGROUND: Iron deficiency without anemia (IDWA) is related to adverse symptoms that can be relieved by supplementation. Since a blood donation can induce such an iron deficiency, we investigated the clinical impact of iron treatment after a blood donation. METHODS: One week after donation, we randomly assigned 154 female donors with IDWA aged <50 years to a 4-week oral treatment of ferrous sulfate vs. placebo. The main outcome was the change in the level of fatigue before and after the intervention. Aerobic capacity, mood disorder, quality of life, compliance, and adverse events were also evaluated. Biological markers were hemoglobin and ferritin. RESULTS: Treatment effects from baseline to 4 weeks for hemoglobin and ferritin were 5.2 g/L (p < 0.01) and 14.8 ng/mL (p < 0.01), respectively. No significant clinical effect was observed for fatigue (-0.15 points, 95% confidence interval -0.9 to 0.6, p = 0.697) or for the other outcomes. Compliance and interruption for side effects were similar in both groups. Additionally, blood donation did not induce overt symptoms of fatigue, in spite of the significant biological changes it produces. CONCLUSIONS: These data enable us to conclude that donors with IDWA after a blood donation would not clinically benefit from iron supplementation. Trial registration: NCT00689793.
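
    A between-group effect with a 95% confidence interval, like the -0.15-point fatigue result above, is typically estimated from the change scores of the two arms. The sketch below shows that calculation on simulated data; the arm sizes (a 77/77 split of the 154 donors), means, and spreads are assumptions for illustration, not the trial's data.

    # Minimal sketch (hypothetical data): between-group treatment effect on the
    # change in fatigue score, with a pooled-variance 95% confidence interval.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    change_iron = rng.normal(-0.6, 2.0, 77)      # fatigue change, iron arm (assumed)
    change_placebo = rng.normal(-0.45, 2.0, 77)  # fatigue change, placebo arm (assumed)

    effect = change_iron.mean() - change_placebo.mean()
    n1, n2 = len(change_iron), len(change_placebo)
    sp2 = ((n1 - 1) * change_iron.var(ddof=1)
           + (n2 - 1) * change_placebo.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    tcrit = stats.t.ppf(0.975, df=n1 + n2 - 2)
    pval = stats.ttest_ind(change_iron, change_placebo).pvalue
    print(f"effect = {effect:.2f} points, "
          f"95% CI [{effect - tcrit * se:.2f}, {effect + tcrit * se:.2f}], p = {pval:.3f}")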

    Training in infectious diseases across Europe in 2021 - a survey on training delivery, content and assessment

    Objectives: To define the status of infectious diseases (ID) as an approved specialty in Europe; to enumerate the number of specialists (in general and in relation to the overall population) and specialist trainees; and to describe the content, delivery, and evaluation of postgraduate training in ID in different countries. Methods: Structured web-based questionnaire surveys in March 2021 of responsible national authorities, specialist societies, and individual country representatives to the Section of Infectious Diseases of the European Union of Medical Specialties. Descriptive analysis of quantitative and qualitative responses. Results: In responses received from 33/35 (94.3%) countries, ID is recognized as a specialty in 24 and as a subspecialty of general internal medicine (GIM) in eight, but it is not recognized in Spain. The number of ID specialists per country varies from <5 to 78 per million inhabitants. Median length of training is 5 years (interquartile range 4.0–6.0 years) with variable amounts of preceding and/or concurrent GIM. Only 21.2% of countries (7/33) provide the minimum recommended training of 6 months in microbiology, and 30% cover competencies such as palliative care, team working and leadership, audit, and quality control. Training is monitored by personal logbook or e-portfolio in 75.8% (25/33) and assessed by final examinations in 69.7% (23/33) of countries, but yearly reviews with trainees only occur in 54.5% (18/33) of countries. Conclusions: There are substantial gaps in the modernization of ID training in many countries to match current European training requirements. Joint training with clinical microbiology (CM) and in multidisciplinary team working should be extended. The training and monitoring of trainers should receive greater focus, together with regular feedback to trainees within many national training programmes.