
    Impact of foot-and-mouth disease on mastitis and culling on a large-scale dairy farm in Kenya

    Foot-and-mouth disease (FMD) is a highly transmissible viral infection of cloven-hooved animals associated with severe economic losses when introduced into FMD-free countries. Information on the impact of the disease in FMDV-endemic countries is poorly characterised, yet it is essential for the prioritisation of scarce resources for disease control programmes. An FMD (virus serotype SAT2) outbreak on a large-scale dairy farm in Nakuru County, Kenya provided an opportunity to evaluate the impact of FMD on clinical mastitis and culling rate. A cohort approach followed animals over a 12-month period after the commencement of the outbreak. For culling, all animals were included; for mastitis, only those over 18 months of age. FMD was recorded in 400/644 cattle over a 29-day period. During the follow-up period 76 animals were culled or died, while 63 animals in the over-18-month cohort developed clinical mastitis. Hazard ratios (HR) were generated using Cox regression, accounting for non-proportional hazards by inclusion of time-varying effects. Univariable analysis showed that FMD cases were culled sooner but that there was no effect on clinical mastitis. After adjusting for possible confounders and including time-varying effects, there was weak evidence to support an effect of FMD on culling (HR = 1.7, 95% confidence interval [CI] 0.88-3.1, P = 0.12). For mastitis, there was stronger evidence of an increased rate in the first month after the onset of the outbreak (HR = 2.9, 95% CI 0.97-8.9, P = 0.057).
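As a quick consistency check on figures like these, a Wald-type 95% confidence interval implies a standard error on the log-hazard scale, from which the z statistic and two-sided P-value can be recovered. The helper below is an illustrative sketch, not the study's analysis code; the small gap from the reported P = 0.12 is consistent with rounding of the CI endpoints.

```python
import math

def wald_p_from_ci(hr, lo, hi, z_crit=1.96):
    """Recover the Wald z statistic and two-sided p-value implied by a
    hazard ratio and its 95% CI (endpoints as reported, i.e. rounded)."""
    log_hr = math.log(hr)
    # the CI width on the log scale gives the standard error
    se = (math.log(hi) - math.log(lo)) / (2 * z_crit)
    z = log_hr / se
    # two-sided p from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = wald_p_from_ci(1.7, 0.88, 3.1)
print(round(z, 2), round(p, 2))  # roughly z ≈ 1.65, p ≈ 0.10
```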

    The NRG1 exon 11 missense variant is not associated with autism in the Central Valley of Costa Rica

    Background: We are conducting a genetic study of autism in the isolated population of the Central Valley of Costa Rica (CVCR). A novel Neuregulin 1 (NRG1) missense variant (exon 11 G>T) was recently associated with psychosis and schizophrenia (SCZ) in the same population isolate. Methods: We genotyped the NRG1 exon 11 missense variant in 146 cases with autism, or autism spectrum disorder, with CVCR ancestry, and both parents when available (N = 267 parents), from 143 independent families. Additional microsatellites were genotyped to examine haplotypes bearing the exon 11 variant. Results: The NRG1 exon 11 G>T variant was found in 4/146 cases, including one de novo occurrence. The frequency of the variant was 0.014 in case chromosomes and 0.045 in the parental non-transmitted chromosomes. At least 6 haplotypes extending 0.229 Mb were associated with the T allele. Three independent individuals, with no personal or family history of psychiatric disorder, shared at least a 1 megabase haplotype 5' to the T allele. Conclusion: The NRG1 exon 11 missense variant is not associated with autism in the CVCR.
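The reported allele frequencies can be recovered by simple arithmetic. The sketch below is a hypothetical recomputation: the chromosome counts are inferred from the abstract, assuming the four carriers are heterozygous.

```python
# Sanity-check the reported allele frequencies (counts inferred from the
# abstract, not taken from the paper's tables).
n_cases = 146
carriers = 4                       # cases carrying the T allele (assumed heterozygous)
case_chromosomes = 2 * n_cases     # two chromosomes per case
case_freq = carriers / case_chromosomes
print(round(case_freq, 3))         # 0.014, the reported case-chromosome frequency

# 267 parents each contribute one non-transmitted chromosome, so a
# frequency of 0.045 corresponds to roughly 12 T alleles among them.
print(round(0.045 * 267))          # ≈ 12
```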

    Roles for Treg expansion and HMGB1 signaling through the TLR1-2-6 axis in determining the magnitude of the antigen-specific immune response to MVA85A

    © 2013 Matsumiya et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. A better understanding of the relationships between vaccine, immunogenicity and protection from disease would greatly facilitate vaccine development. Modified vaccinia virus Ankara expressing antigen 85A (MVA85A) is a novel tuberculosis vaccine candidate designed to enhance responses induced by BCG. Antigen-specific interferon-γ (IFN-γ) production is greatly enhanced by MVA85A; however, the variability between healthy individuals is extensive. In this study we sought to characterize the early changes in gene expression in humans following vaccination with MVA85A and relate these to long-term immunogenicity. Two days post-vaccination, MVA85A induces a strong interferon and inflammatory response. Separating volunteers into high and low responders on the basis of T cell responses to 85A peptides measured during the trial, an expansion of circulating CD4+ CD25+ Foxp3+ cells is seen in low but not high responders. Additionally, high levels of Toll-like Receptor (TLR) 1 on the day of vaccination are associated with an increased response to antigen 85A. In a classification model, combined expression levels of TLR1, TICAM2 and CD14 on the day of vaccination and CTLA4 and IL2Rα two days post-vaccination can classify high and low responders with over 80% accuracy. Furthermore, administering MVA85A in mice with anti-TLR2 antibodies may abrogate high responses, and neutralising antibodies to TLRs 1, 2 or 6 or HMGB1 decrease CXCL2 production during in vitro stimulation with MVA85A. HMGB1 is released into the supernatant following stimulation with MVA85A, and we propose this signal may be the trigger activating the TLR pathway.
This study suggests an important role for an endogenous ligand in innate sensing of MVA and demonstrates the importance of pattern recognition receptors and regulatory T cell responses in determining the magnitude of the antigen-specific immune response to vaccination with MVA85A in humans. This work was funded by the Wellcome Trust. MM has a Wellcome Trust PhD studentship and HM is a Wellcome Trust Senior Fellow.

    Ultrasound-guided diagnostic breast biopsy methodology: retrospective comparison of the 8-gauge vacuum-assisted biopsy approach versus the spring-loaded 14-gauge core biopsy approach

    Background: Ultrasound-guided diagnostic breast biopsy technology represents the current standard of care for the evaluation of indeterminate and suspicious lesions seen on diagnostic breast ultrasound. Yet, there remains much debate as to which particular method of ultrasound-guided diagnostic breast biopsy provides the most accurate and optimal diagnostic information. The aim of the current study was to compare and contrast the 8-gauge vacuum-assisted biopsy approach and the spring-loaded 14-gauge core biopsy approach. Methods: A retrospective analysis was done of all ultrasound-guided diagnostic breast biopsy procedures performed by either the 8-gauge vacuum-assisted biopsy approach or the spring-loaded 14-gauge core biopsy approach by a single surgeon from July 2001 through June 2009. Results: Among 1443 ultrasound-guided diagnostic breast biopsy procedures performed, 724 (50.2%) were by the 8-gauge vacuum-assisted biopsy technique and 719 (49.8%) were by the spring-loaded 14-gauge core biopsy technique. The total number of false negative cases (i.e., benign findings instead of invasive breast carcinoma) was significantly greater (P = 0.008) in the spring-loaded 14-gauge core biopsy group (8/681, 1.2%) than in the 8-gauge vacuum-assisted biopsy group (0/652, 0%), with an overall false negative rate of 2.1% (8/386) for the spring-loaded 14-gauge core biopsy group as compared to 0% (0/148) for the 8-gauge vacuum-assisted biopsy group. Significantly more (P < 0.001) patients in the spring-loaded 14-gauge core biopsy group (81/719, 11.3%) than in the 8-gauge vacuum-assisted biopsy group (18/724, 2.5%) were recommended for immediate further diagnostic surgical removal of additional tissue from the same anatomical site of the affected breast for indeterminate/inconclusive findings seen on the original ultrasound-guided diagnostic breast biopsy procedure.
Significantly more (P < 0.001) patients in the spring-loaded 14-gauge core biopsy group (54/719, 7.5%) than in the 8-gauge vacuum-assisted biopsy group (9/724, 1.2%) personally requested immediate further diagnostic surgical removal of additional tissue from the same anatomical site of the affected breast for a benign finding seen on the original ultrasound-guided diagnostic breast biopsy procedure. Conclusions: In appropriately selected cases, the 8-gauge vacuum-assisted biopsy approach appears advantageous over the spring-loaded 14-gauge core biopsy approach in providing the most accurate and optimal diagnostic information.
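The false-negative comparison (8/681 vs 0/652) can be checked against the reported P = 0.008 with a two-sided Fisher's exact test. The choice of test is an assumption on our part, since the abstract does not name the statistic; a self-contained sketch:

```python
from fractions import Fraction
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one.  Fractions keep the huge
    binomial coefficients exact."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)
    def prob(k):  # hypergeometric P(k of the col1 "successes" fall in row 1)
        return Fraction(comb(col1, k) * comb(n - col1, row1 - k), denom)
    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return float(sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs))

# False negatives: 8/681 (14-gauge core) vs 0/652 (8-gauge vacuum-assisted)
p = fisher_exact_two_sided(8, 673, 0, 652)
print(round(p, 3))  # ≈ 0.008, in line with the reported P = 0.008
```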

    Detection of Pneumocystis DNA in samples from patients suspected of bacterial pneumonia- a case-control study

    BACKGROUND: Pneumocystis jiroveci (formerly known as P. carinii f.sp. hominis) is an opportunistic fungus that causes Pneumocystis pneumonia (PCP) in immunocompromised individuals. Pneumocystis jiroveci can be detected by polymerase chain reaction (PCR). To investigate the clinical importance of a positive Pneumocystis-PCR among HIV-uninfected patients suspected of bacterial pneumonia, a retrospective matched case-control study was conducted. METHODS: Respiratory samples from 367 patients suspected of bacterial pneumonia were analysed by PCR amplification of Pneumocystis jiroveci. To compare clinical factors associated with carriage of P. jiroveci, a case-control study was done. For each PCR-positive case, four PCR-negative controls, randomly chosen from the PCR-negative patients, were matched on sex and date of birth. RESULTS: Pneumocystis-DNA was detected in 16 (4.4%) of the patients. The median age of PCR-positive patients was higher than that of PCR-negative patients (74 vs. 62 years, p = 0.011). PCR-positive cases had a higher rate of chronic or severe concomitant illness (15, 94%) than controls (32, 50%) (p = 0.004). Twelve (75%) of the 16 PCR-positive patients had received corticosteroids, compared to 8 (13%) of the 64 PCR-negative controls (p < 0.001). Detection of Pneumocystis-DNA was associated with a worse prognosis: seven (44%) of the patients with positive PCR died within one month, compared to nine (14%) of the controls (p = 0.01). None of the nine PCR-positive patients who survived had developed PCP at one year of follow-up. CONCLUSIONS: Our data suggest that carriage of Pneumocystis jiroveci is associated with old age, concurrent disease and steroid treatment. PCR detection of P. jiroveci has low specificity for diagnosing PCP among patients without established immunodeficiency. Whether overt infection is involved in the poorer prognosis or merely reflects sub-clinical carriage is not clear. Further studies of P. jiroveci in patients receiving systemic treatment with corticosteroids are warranted.
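For the corticosteroid exposure figures (12/16 cases vs 8/64 controls), a crude odds ratio with a Woolf-type log-scale confidence interval gives a sense of the effect size. This is a back-of-envelope calculation on the marginal counts, not the matched analysis the 1:4 design would strictly call for.

```python
import math

# Corticosteroid exposure, from the abstract's counts.
a, b = 12, 16 - 12    # exposed / unexposed cases
c, d = 8, 64 - 8      # exposed / unexposed controls

or_ = (a * d) / (b * c)                       # crude odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # Woolf SE on the log scale
half = 1.96 * se
lo = math.exp(math.log(or_) - half)
hi = math.exp(math.log(or_) + half)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # OR = 21.0, CI roughly 5.4 to 81
```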

    Attitudes towards terminal sedation: an empirical survey among experts in the field of medical ethics

    BACKGROUND: "Terminal sedation", understood here as the use of sedation in (pre-)terminal patients with treatment-refractory symptoms, is a subject of controversy, and not only within palliative medicine. While supporters consider terminal sedation an indispensable palliative medical treatment option, opponents disapprove of it as "slow euthanasia". Against this background, we surveyed medical ethics experts by questionnaire on the term and the moral acceptability of terminal sedation, in order to find out how they think about this topic. We were especially interested in whether experts with a professional medical or nursing background think differently about the topic than experts without such a background. METHODS: The survey was carried out by questionnaire; besides the provided answer options, free-text comments were possible. The respondents were the 477 members of the German Academy for Ethics in Medicine, an interdisciplinary society for medical ethics. RESULTS: 281 completed questionnaires were returned (response rate = 59%). The majority of persons without a medical background regarded "terminal sedation" as an intentional elimination of consciousness until the patient's death occurs; persons with a medical background generally had a broader understanding of the term, including light or intermittent forms of sedation. 98% of the respondents regarded terminal sedation in dying patients with treatment-refractory physical symptoms as acceptable. Situations in which the dying process has not yet started, in which untreatable mental symptoms are the indication for terminal sedation, or in which life-sustaining measures are withdrawn during sedation were evaluated as morally difficult. CONCLUSION: The survey reveals a great need for research and discussion on the medical indication as well as on the moral evaluation of terminal sedation. A prerequisite for this is a more precise terminology which describes the circumstances of the sedation.

    Evidence for waning of latency in a cohort study of tuberculosis

    Background: To investigate how the risk of active tuberculosis disease is influenced by time since original infection, and to determine whether the risk of reactivation of tuberculosis increases or decreases with age. Methods: Cohort analysis of data for the separate ten-year birth cohorts of 1876-1885 to 1959-1968, obtained from Statistics Norway and the National Tuberculosis Registry. These data were used to calculate the rates, and the changes in the rates, of bacillary (or active) tuberculosis. Data on bacillary tuberculosis for adult (20+) age groups were obtained from the National Tuberculosis Registry and Statistics Norway from 1946 to 1974. Most cases during this period arose from reactivation of remote infection. Participants in this part of the analysis were all reported active tuberculosis cases in Norway from 1946 to 1974, as recorded in the National Tuberculosis Registry. Results: Tuberculosis decreased at a relatively steady rate when following individual birth cohorts, but with a tendency toward slower decline as time passed since infection. A mean estimate of this rate of decline was 57% per 10-year period. Conclusions: The risk of reactivation of latent tuberculosis decreases with age. This decline may reflect the rate at which latent tuberculosis is eliminated from a population with minimal transmission of tubercle bacilli. A model for the risk of developing active tuberculosis as a function of time since infection shows that the rate at which tuberculosis can be eliminated from a society can be quite substantial if new infections are effectively prevented. The findings clearly indicate that preventative measures against transmission of tuberculosis will be the most effective. These results also suggest that the total population harbouring live tubercle bacilli, and consequently the future projection for increased incidence of tuberculosis in the world, is probably overestimated.
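Under a simple exponential-decay reading of the 57%-per-decade decline (an illustrative assumption on our part, not the paper's fitted model), the equivalent annual decline and half-life follow directly:

```python
import math

decade_decline = 0.57                         # 57% decline per 10-year period
k = -math.log(1 - decade_decline) / 10        # per-year decay constant
annual_decline = 1 - math.exp(-k)             # equivalent annual decline
half_life = math.log(2) / k                   # years for the rate to halve

print(round(annual_decline * 100, 1))  # ≈ 8.1 (% per year)
print(round(half_life, 1))             # ≈ 8.2 (years)
```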

    How to handle mortality when investigating length of hospital stay and time to clinical stability

    Background: Hospital length of stay (LOS) and time for a patient to reach clinical stability (TCS) have increasingly become important outcomes when investigating ways in which to combat Community Acquired Pneumonia (CAP). Difficulties arise when deciding how to handle in-hospital mortality. Ad-hoc approaches that are commonly used to handle time-to-event outcomes with mortality can give disparate results and provide conflicting conclusions based on the same data. To ensure compatibility among studies investigating these outcomes, this type of data should be handled in a consistent and appropriate fashion. Methods: Using both simulated data and data from the international Community Acquired Pneumonia Organization (CAPO) database, we evaluate two ad-hoc approaches for handling mortality when estimating the probability of hospital discharge and clinical stability: 1) restricting analysis to those patients who lived, and 2) assigning individuals who die the "worst" outcome (right-censoring them at the longest recorded LOS or TCS). Estimated probability distributions based on these approaches are compared with right-censoring the individuals who died at time of death (the complement of the Kaplan-Meier (KM) estimator), and with treating death as a competing risk (the cumulative incidence estimator). Tests for differences in probability distributions based on the four methods are also contrasted. Results: The two ad-hoc approaches give different estimates of the probability of discharge and clinical stability. Analysis restricted to patients who survived is conceptually problematic, as estimation is conditioned on events that happen at a future time.
Estimation based on assigning those patients who died the worst outcome (longest LOS and TCS) coincides with the complement of the KM estimator based on the subdistribution hazard, which has been previously shown to be equivalent to the cumulative incidence estimator. However, in either case the time to in-hospital mortality is ignored, preventing simultaneous assessment of patient mortality in addition to LOS and/or TCS. The power to detect differences in underlying hazards of discharge between patient populations differs for test statistics based on the four approaches, and depends on the underlying hazard ratio of mortality between the patient groups. Conclusions: Treating death as a competing risk gives estimators which address the clinical questions of interest, and allows for simultaneous modelling of both in-hospital mortality and TCS/LOS. This article advocates treating mortality as a competing risk when investigating other time-related outcomes.
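The contrast between the cumulative incidence estimator and the naive complement of the KM estimator with deaths censored can be illustrated on a few synthetic observations (toy data invented for this sketch, not CAPO data):

```python
# Compare the cumulative incidence of discharge, treating death as a
# competing risk, against the naive 1 - KM obtained by censoring deaths.

def cif_vs_km(times, events):
    """times: event/censoring times; events: 1 = discharged, 2 = died,
    0 = censored.  Returns (CIF of discharge, naive 1 - KM) at the last
    observed time."""
    data = sorted(zip(times, events))
    surv_all = 1.0   # KM survival from *any* event (feeds the CIF)
    km_disch = 1.0   # KM treating deaths as censored (naive approach)
    cif = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d1 = sum(1 for tt, e in data if tt == t and e == 1)  # discharges at t
        d2 = sum(1 for tt, e in data if tt == t and e == 2)  # deaths at t
        at_risk = sum(1 for tt, _ in data if tt >= t)
        cif += surv_all * d1 / at_risk       # Aalen-Johansen increment
        surv_all *= 1 - (d1 + d2) / at_risk
        km_disch *= 1 - d1 / at_risk
        i += sum(1 for tt, _ in data if tt == t)
    return cif, 1 - km_disch

times  = [2, 3, 3, 5, 6, 7, 8, 10]
events = [1, 1, 2, 1, 2, 1, 0, 1]
cif, naive = cif_vs_km(times, events)
print(cif, naive)  # the naive 1 - KM overstates the probability of discharge
```

Here the naive estimate reaches 1.0 because the deaths are treated as if those patients could still be discharged later, while the CIF correctly caps the discharge probability below 1.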

    Endemicity response timelines for Plasmodium falciparum elimination

    Background: The scaling up of malaria control and renewed calls for malaria eradication have raised interest in defining timelines for changes in malaria endemicity. Methods: The epidemiological theory for the decline in the Plasmodium falciparum parasite rate (PfPR, the prevalence of infection) following intervention was critically reviewed and, where necessary, extended to consider superinfection, heterogeneous biting, and aging infections. Timelines for malaria control and elimination under different levels of intervention were then established using a wide range of candidate mathematical models. Analysis focused on the timelines from baseline to 1% and from 1% through the final stages of elimination. Results: The Ross-Macdonald model, which ignores superinfection, was used for planning during the Global Malaria Eradication Programme (GMEP). In models that consider superinfection, PfPR takes two to three years longer to reach 1% starting from a hyperendemic baseline, consistent with one of the few large-scale malaria control trials conducted in an African population with hyperendemic malaria. The time to elimination depends fundamentally upon the extent to which malaria transmission is interrupted and the size of the human population modelled. When the PfPR drops below 1%, almost all models predict similar and proportional declines in PfPR in consecutive years from 1% through to elimination, so that the waiting times to reduce PfPR from 10% to 1% and from 1% to 0.1% are approximately equal, although the decay rate can increase over time if infections senesce. Conclusion: The theory described herein provides simple "rules of thumb" and likely time horizons for the impact of interventions for control and elimination.
Starting from a hyperendemic baseline, the GMEP planning timelines, which were based on the Ross-Macdonald model with completely interrupted transmission, were inappropriate for setting endemicity timelines, and they represent the most optimistic scenario for places with lower endemicity. Basic timelines from a PfPR of 1% through elimination depend on population size and low-level transmission. These models provide a theoretical basis that can be further tailored to specific control and elimination scenarios.
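The "proportional declines in consecutive years" result implies equal waiting times per tenfold drop in PfPR. With an assumed annual proportional decline (the 50% used below is purely illustrative, not a figure from the study), the timeline follows directly:

```python
import math

# If PfPR falls by a fixed proportion each year once below ~1% (the regime
# where the models agree), each tenfold reduction takes the same time.
annual_decline = 0.5                    # assumption: PfPR halves each year
k = -math.log(1 - annual_decline)       # per-year decay constant
t_tenfold = math.log(10) / k            # years for 10% -> 1%, or 1% -> 0.1%
print(round(t_tenfold, 2))              # ≈ 3.32 years per tenfold reduction
```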