Exploring the fate of cattle herds with inconclusive reactors to the tuberculin skin test
Bovine tuberculosis (TB) is an important animal health issue in many parts of the world. In England and Wales, the primary test to detect infected animals is the single intradermal comparative cervical tuberculin test, which compares immunological responses to bovine and avian tuberculins. Inconclusive test reactors (IRs) are animals whose positive reaction to the bovine tuberculin is only marginally greater than the avian reaction, so they are not classified as reactors and are not immediately removed. In the absence of reactors in the herd, IRs are isolated, placed under movement restrictions and re-tested after 60 days. Other animals in these herds at the time of the IR result are not usually subject to movement restrictions. This could undermine efforts to control TB if undetected infected cattle move out of those herds before the next TB test. To improve our understanding of the importance of IRs, this study aimed to assess whether the median survival time and the hazard of a subsequent TB incident differ between herds with only IRs detected and negative-testing herds. Survival analysis and extended Cox regression were used, with herds entering the study on the date of the first whole-herd test in 2012. An additional analysis, presented in the Supplementary Material, used an alternative entry date to try to remove the impact of IR re-testing. Survival analysis showed that the median survival time among IR-only herds was half that observed for clear herds (2.1 years and 4.2 years, respectively; p < 0.001). Extended Cox regression showed that IR-only herds had 2.7 times the hazard of a subsequent incident compared with negative-testing herds in year one (hazard ratio: 2.69; 95% CI: 2.54, 2.84; p < 0.001), and that this difference in hazard reduced by 63% per year; after 2.7 years the difference had disappeared.
The supplementary analysis supported these findings, showing that IR-only herds still had a greater hazard of a subsequent incident after the IR re-test, but that the effect was reduced. This emphasizes the importance of careful decision-making around the management of IR animals and indicates that re-testing alone may not be sufficient to reduce the risk posed by IR-only herds in England and Wales.
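The reported figures can be checked for internal consistency. One interpretation that reproduces both the year-one hazard ratio and the roughly 2.7-year crossover (an assumption for illustration, not the authors' stated model form) is a hazard ratio that decays exponentially in time, HR(t) = 2.69 * exp(-0.37 * t):

```python
import math

# From the abstract: hazard ratio 2.69 in year one; difference gone by ~2.7 years.
# Assumed functional form (illustrative only): log hazard ratio declines
# linearly in time, so HR(t) = hr_year1 * exp(-decay * t).
hr_year1 = 2.69
decay = 0.37

def hazard_ratio(t):
    """Hazard ratio of IR-only vs negative-testing herds at time t (years)."""
    return hr_year1 * math.exp(-decay * t)

# Time at which the excess hazard disappears, i.e. HR(t) == 1
t_cross = math.log(hr_year1) / decay
```

Under this assumed form, `t_cross` comes out close to the 2.7 years stated in the abstract.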
Attitudes and Beliefs of Pig Farmers and Wild Boar Hunters Towards Reporting of African Swine Fever in Bulgaria, Germany and the Western Part of the Russian Federation
This study investigated the attitudes and beliefs of pig farmers and hunters in Germany, Bulgaria and the western part of the Russian Federation towards reporting suspected cases of African swine fever (ASF). Data were collected using a web-based questionnaire survey targeting pig farmers and hunters in these three study areas. Separate multivariable logistic regression models identified key variables associated with each of three binary outcome variables: whether or not farmers would immediately report suspected cases of ASF, whether or not hunters would submit samples from hunted wild boar for diagnostic testing, and whether or not hunters would report wild boar carcasses. The results showed that farmers who would not immediately report suspected cases of ASF are more likely to believe that their reputation in the local community would be adversely affected if they were to report, that they can control the outbreak themselves without the involvement of veterinary services, and that laboratory confirmation would take too long. The modelling also indicated that hunters who did not usually submit samples of their harvested wild boar for ASF diagnosis, and hunters who did not report wild boar carcasses, are more likely to justify their behaviour through a lack of awareness of the possibility of reporting. These findings emphasize the need to develop more effective communication strategies targeted at pig farmers and hunters about the disease, its epidemiology, consequences and control methods, to increase the likelihood of early reporting, especially in the Russian Federation where the virus circulates
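The kind of model described above, a multivariable logistic regression on a binary reporting outcome, can be sketched on synthetic data. Variable names and effect sizes below are illustrative, not from the study, and the fit uses a plain Newton-Raphson (IRLS) loop rather than a statistics package:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictor: 1 = farmer believes reporting harms their reputation
reputation_concern = rng.integers(0, 2, n)
true_logit = -1.0 + 2.0 * reputation_concern          # assumed effect size
p = 1 / (1 + np.exp(-true_logit))
would_not_report = (rng.random(n) < p).astype(float)  # binary outcome

X = np.column_stack([np.ones(n), reputation_concern])

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-X @ beta))       # fitted probabilities
        W = mu * (1 - mu)                      # IRLS weights
        H = X.T @ (W[:, None] * X)             # observed information
        beta += np.linalg.solve(H, X.T @ (y - mu))
    return beta

beta = fit_logistic(X, would_not_report)
odds_ratio = np.exp(beta[1])   # odds of non-reporting given reputation concern
```

With the simulated effect of +2.0 on the logit scale, the estimated odds ratio should land near exp(2) ≈ 7.4.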
Using geographically weighted regression to explore the spatially heterogeneous spread of bovine tuberculosis in England and Wales
An understanding of the factors that affect the spread of endemic bovine tuberculosis (bTB) is critical for the development of measures to stop and reverse this spread. Analyses of spatial data need to account for the inherent spatial heterogeneity within the data, or else spatial autocorrelation can lead to an overestimate of the significance of variables. This study used three methods of analysis—least-squares linear regression with a spatial autocorrelation term, geographically weighted regression (GWR) and boosted regression tree (BRT) analysis—to identify the factors that influence the spread of endemic bTB at a local level in England and Wales. The linear regression and GWR methods demonstrated the importance of accounting for spatial differences in risk factors for bTB, and showed some consistency in the identification of certain factors related to flooding, disease history and the presence of multiple genotypes of bTB. This is the first attempt to explore the factors associated with the spread of endemic bTB in England and Wales using GWR. This technique improves on least-squares linear regression approaches by identifying regional differences in the factors associated with bTB spread. However, interpretation of these complex regional differences is difficult and the approach does not lend itself to predictive models which are likely to be of more value to policy makers. Methods such as BRT may be more suited to such a task. Here we have demonstrated that GWR and BRT can produce comparable outputs
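GWR fits a separate weighted regression at every location, with observation weights taken from a distance kernel, so the estimated coefficients are allowed to vary across space. A minimal one-dimensional sketch on synthetic data (the Gaussian kernel, bandwidth, and covariate are illustrative assumptions, not the study's specification):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
coords = rng.uniform(0, 10, size=n)        # 1-D stand-in for spatial location
x = rng.normal(size=n)                     # a local risk-factor covariate
slope = 1.0 + 0.3 * coords                 # true effect varies smoothly in space
y = slope * x + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x])

def local_fit(target, bandwidth=1.5):
    """Weighted least squares centred on `target` with a Gaussian kernel."""
    w = np.exp(-0.5 * ((coords - target) / bandwidth) ** 2)
    A = X.T @ (w[:, None] * X)             # kernel-weighted normal equations
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)

b_near = local_fit(0.0)[1]    # local slope estimate at one end of the region
b_far = local_fit(10.0)[1]    # local slope estimate at the other end
```

Because the true slope grows with the spatial coordinate, the two local estimates differ substantially, which is exactly the regional variation a single global least-squares fit would average away.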
Analysis of among-site variation in substitution patterns
Substitution patterns among nucleotides are often assumed to be constant in phylogenetic analyses. Although variation in the average rate of substitution among sites is commonly accounted for, variation in the relative rates of specific types of substitution is not. Here, we review details of methodologies used for detecting and analyzing differences in substitution processes among predefined groups of sites. We describe how such analyses can be performed using existing phylogenetic tools, and discuss how new phylogenetic analysis tools we have recently developed can be used to provide more detailed and sensitive analyses, including study of the evolution of mutation and substitution processes. As an example we consider the mitochondrial genome, for which two types of transition deaminations (C⇒T and A⇒G) are strongly affected by single-strandedness during replication, resulting in a strand asymmetric mutation process. Since time spent single-stranded varies along the mitochondrial genome, their differential mutational response results in very different substitution patterns in different regions of the genome
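Comparing substitution processes among predefined groups of sites reduces, in its simplest form, to tallying mismatch types between aligned ancestral and derived states separately per group. A toy sketch (the sequences are hypothetical; real analyses would use ancestral states inferred on a phylogeny):

```python
from collections import Counter

def substitution_counts(ancestral, derived):
    """Tally substitution types between two aligned sequences."""
    return Counter((a, d) for a, d in zip(ancestral, derived) if a != d)

# Hypothetical site groups differing in time spent single-stranded:
# group 1 shows the strand-asymmetric C->T and A->G transitions, group 2 does not.
group1_anc, group1_der = "CCCCAAAA", "TTCCGGAA"
group2_anc, group2_der = "CCCCAAAA", "TCCCAAAA"

c1 = substitution_counts(group1_anc, group1_der)
c2 = substitution_counts(group2_anc, group2_der)
```

Contrasting the per-group tallies (e.g. the C→T and A→G counts in `c1` versus `c2`) is the raw material for the likelihood-based tests of among-site process variation the review describes.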
Comparative Performance of Comorbidity Indices in Predicting Health Care-Related Behaviors and Outcomes among Medicaid Enrollees with Type 2 Diabetes
No single gold standard of comorbidity measure has been identified, and the performance of comorbidity indices varies according to the outcome of interest. The authors compared the Charlson Comorbidity Index, Elixhauser Index (EI), Chronic Disease Score (CDS), and Health-related Quality of Life Comorbidity Index (HRQL-CI) in predicting health care-related behaviors (physicians' concordance with diabetes care standards and patients' oral antidiabetic drug [OAD] adherence) and outcomes (health care utilization and expenditures) among Medicaid enrollees with type 2 diabetes. A total of 9832 diabetes patients who used OADs were identified using data from the MarketScan Medicaid database from 2003 to 2007. Predictive performance of each comorbidity index was assessed using multiple regression models controlling for patient demographics, diabetes severity, and baseline health care characteristics. Among the 4 indices, the CDS was best at predicting physicians' concordance with care standards. The CDS and HRQL-CI mental index performed better than the other indices as predictors of medication adherence. The EI was best at predicting health care utilization and expenditures. These results suggest that, for these low-income diabetes patients, the CDS and HRQL-CI mental index were relatively better risk-adjustment tools for health care-related behavior data evaluation, and the EI was the first choice for health care utilization and expenditures data. (Population Health Management 2012;15:220–229)
Real-time imaging of cotranscriptional splicing reveals a kinetic model that reduces noise: implications for alternative splicing regulation
A combination of several rate-limiting steps allows for efficient control of alternative splicing
APOE ε4 moderates abnormal CSF-abeta-42 levels, while neurocognitive impairment is associated with abnormal CSF tau levels in HIV+ individuals – a cross-sectional observational study
Background: Cerebrospinal fluid (CSF) biomarkers Aβ1-42, t-tau and p-tau have a characteristic pattern in Alzheimer’s Disease (AD). Their roles in HIV-associated neurocognitive disorder (HAND) remain unclear.
Methods: Adults with chronic treated HIV disease were recruited (n = 43, aged 56.7 ± 7.9; 32% aged 60+; median HIV duration 20 years, >95% plasma and CSF HIV RNA <50 cp/mL, on cART for a median 24 months). All underwent standard neuropsychological testing (61% had HAND), APOE genotyping (30.9% carried APOE ε4 and 7.1% were ε4 homozygotes) and a lumbar puncture. Concentrations of Aβ1-42, t-tau and p-tau were assessed in the CSF using commercial ELISAs. Current neurocognitive status was defined using the continuous Global Deficit Score, which grades impairment in clinically relevant categories. History of HAND was recorded. Univariate correlations informed multivariate models, which were corrected for nadir CD4-T cell counts and HIV duration.
Results: Carriage of APOE ε4 predicted markedly lower levels of CSF Aβ1-42 in univariate (r = -.50; p = .001) and multivariate analyses (R2 = .25; p < .0003). Greater levels of neurocognitive impairment were associated with higher CSF levels of p-tau in univariate analyses (r = .32; p = .03) and multivariate analyses (R2 = .10; p = .03). AD risk prediction cut-offs incorporating all three CSF biomarkers suggested that 12.5% of participants had a high risk for AD. Having a CSF-AD like profile was more frequent in those with current (p = .05) and past HIV-associated dementia (p = .03).
Conclusions: Similarly to larger studies, APOE ε4 genotype was not directly associated with HAND, but moderated CSF levels of Aβ1-42 in a minority of participants. In the majority of participants, increased CSF p-tau levels were associated with current neurocognitive impairment. Combined CSF biomarker risk for AD in the current HIV+ sample is more than 10 times greater than in the Australian population of the same age. Larger prospective studies are warranted
Distinguishing Asthma Phenotypes Using Machine Learning Approaches.
Asthma is not a single disease, but an umbrella term for a number of distinct diseases, each of which is caused by a distinct underlying pathophysiological mechanism. These discrete disease entities are often labelled asthma endotypes. The discovery of different asthma subtypes has moved from subjective approaches, in which putative phenotypes are assigned by experts, to data-driven ones that incorporate machine learning. This review focuses on the methodological developments of one such machine learning technique, latent class analysis, and how it has contributed to distinguishing asthma and wheezing subtypes in childhood. It also gives a clinical perspective, presenting the findings of studies from the past 5 years that used this approach. The identification of true asthma endotypes may be a crucial step towards understanding their distinct pathophysiological mechanisms, which could ultimately lead to more precise prevention strategies, identification of novel therapeutic targets and the development of effective personalized therapies
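Latent class analysis assigns individuals to unobserved subgroups by fitting a mixture model to observed indicators, typically via expectation-maximization. A minimal sketch for binary symptom indicators on synthetic data (the two-class Bernoulli-mixture form, class sizes, and item probabilities are illustrative assumptions; published LCAs use dedicated packages plus model-selection criteria to choose the number of classes):

```python
import numpy as np

# Simulate two latent classes with very different symptom probabilities
rng = np.random.default_rng(1)
n, k_items = 400, 4
z = rng.random(n) < 0.5                       # true (hidden) class membership
p_true = np.where(z[:, None], 0.9, 0.1)       # per-item symptom probability
X = (rng.random((n, k_items)) < p_true).astype(float)

def fit_lca(X, n_classes=2, n_iter=200, seed=2):
    """EM for a mixture of independent Bernoulli items (basic LCA)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    pi = np.full(n_classes, 1 / n_classes)            # class proportions
    theta = rng.uniform(0.3, 0.7, size=(n_classes, k))  # item probabilities
    for _ in range(n_iter):
        # E-step: posterior class responsibilities (log space for stability)
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions and item probabilities
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta

pi_hat, theta_hat = fit_lca(X)
```

With well-separated classes, the fitted item probabilities recover the 0.9/0.1 structure (up to label switching, which is why real analyses compare classes by profile rather than index).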
Environmental factors shaping the distribution of common wintering waterbirds in a lake ecosystem with developed shoreline
In this study, we tested whether the spatial distribution of waterbirds is influenced by shoreline urbanization or other habitat characteristics. We conducted monthly censuses along shoreline sections of a continental lake (Lake Balaton, Hungary) to assess the abundance of 11 common species that use this lake as a feeding and staging area during migration and winter. We estimated the degree of urbanization of the same shoreline sections and also measured other habitat characteristics (water depth, extent of reed cover, biomass of zebra mussels, distances to waste dumps and to other wetlands). We applied linear models and model averaging to identify habitat variables with high relative importance for predicting bird distributions. Bird abundance and urbanization were strongly related only in one species. Other habitat variables exhibited stronger relationships with bird distribution: (1) diving ducks and coots preferred shoreline sections with high zebra mussel biomass, (2) gulls preferred sites close to waste dumps, and (3) the abundances of several species were higher on shoreline sections close to other wetlands. Our findings suggest that the distribution of waterbirds on Lake Balaton is largely independent of shoreline urbanization and influenced by food availability and connectivity between wetlands
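Model averaging of the kind used here typically ranks candidate models by Akaike weights, w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2), where Delta_i is each model's AIC difference from the best model. A minimal sketch (the model names and AIC values are hypothetical, chosen only to mirror the finding that food availability outranked urbanization):

```python
import math

# Hypothetical AIC scores for single-variable habitat models
aic = {
    "mussel_biomass": 100.2,
    "distance_to_wetland": 101.0,
    "urbanization": 110.5,
}

best = min(aic.values())
delta = {m: a - best for m, a in aic.items()}          # AIC differences
raw = {m: math.exp(-d / 2) for m, d in delta.items()}  # relative likelihoods
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}       # Akaike weights
```

The weights sum to one and can be read as the relative support for each model; a variable appearing only in models with tiny weights (here, urbanization) has low relative importance.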