
    Comparison of respiratory disease prevalence among voluntary monitoring systems for pig health and welfare in the UK

    Surveillance of animal diseases provides information essential for the protection of animal health and ultimately public health. The voluntary pig health schemes, implemented in the United Kingdom, are integrated systems which capture information on different macroscopic disease conditions detected in slaughtered pigs. Many of these conditions have been associated with a reduction in performance traits and consequent increases in production costs. The schemes are Wholesome Pigs Scotland in Scotland, the BPEX Pig Health Scheme in England and Wales, and the Pig Regen Ltd. health and welfare checks done in Northern Ireland. This report set out to compare the prevalence of four respiratory conditions (enzootic pneumonia-like lesions, pleurisy, pleuropneumonia lesions and abscesses in the lung) assessed by these three pig health schemes. The seasonal variations and year trends associated with the conditions in each scheme are presented. The paper also highlights the differences in prevalence for each condition across these schemes and areas where further research is needed. A general increase in the prevalence of enzootic pneumonia-like lesions was observed in Scotland, England and Wales since 2009, while a general decrease was observed in Northern Ireland over the years of the scheme. Pleurisy prevalence has increased since 2010 in all three schemes, whilst pleuropneumonia has been decreasing. Prevalence of abscesses in the lung has decreased in England, Wales and Northern Ireland but has increased in Scotland. This analysis highlights the value of surveillance schemes based on abattoir pathology monitoring of four respiratory lesions. The outputs at scheme level have significant value as indicators of endemic and emerging disease, and for producers and herd veterinarians in planning and evaluating herd health control programs when comparing individual farm results with national averages.

    Terrorism in Australia: factors associated with perceived threat and incident-critical behaviours

    Background: To help improve incident preparedness this study assessed socio-demographic and socio-economic predictors of perceived risk of terrorism within Australia and willingness to comply with public safety directives during such incidents. Methods: The terrorism perception question module was incorporated into the New South Wales Population Health Survey and was completed by a representative sample of 2,081 respondents in early 2007. Responses were weighted against the New South Wales population. Results: Multivariate analyses indicated that those with no formal educational qualifications were significantly more likely (OR = 2.10, 95%CI:1.32–3.35, p < 0.001) to think that a terrorist attack is very or extremely likely to occur in Australia and also more likely (OR = 3.62, 95%CI:2.25–5.83, p < 0.001) to be very or extremely concerned that they or a family member would be directly affected, compared to those with a university-level qualification. Speaking a language other than English at home predicted high concern (very/extremely) that self or family would be directly affected (OR = 3.02, 95%CI:2.02–4.53, p < 0.001) and was the strongest predictor of having made associated changes in living (OR = 3.27, 95%CI:2.17–4.93, p < 0.001). Being female predicted willingness to evacuate from public facilities. Speaking a language other than English at home predicted low willingness to evacuate. Conclusion: Low education level is a risk factor for high terrorism risk perception and concerns regarding potential impacts. The pattern of concern and response among those of migrant background may reflect secondary social impacts associated with heightened community threat, rather than the direct threat of terrorism itself. These findings highlight the need for terrorism risk communication and related strategies to address the specific concerns of these sub-groups as a critical underpinning of population-level preparedness.
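    The odds ratios above come from a multivariate analysis of weighted survey responses. As a rough illustration only (the data file, column names, reference categories and use of frequency weights below are assumptions, not the study's dataset or code), a weighted logistic regression of this kind could be fitted as follows:

        # Hedged sketch: weighted logistic regression reporting odds ratios with 95% CIs.
        # File name, column names and the weighting scheme are hypothetical placeholders;
        # a full design-based survey analysis would use dedicated survey-estimation tools.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("nsw_survey_2007.csv")  # hypothetical dataset

        model = smf.glm(
            "attack_very_likely ~ C(education, Treatment('university'))"
            " + C(home_language, Treatment('english')) + C(sex)",
            data=df,
            family=sm.families.Binomial(),
            freq_weights=np.asarray(df["population_weight"]),  # population weights (simplified)
        ).fit()

        odds_ratios = np.exp(model.params)   # OR per predictor level
        conf_int = np.exp(model.conf_int())  # 95% confidence intervals
        print(pd.concat([odds_ratios, conf_int], axis=1))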

    Association of MC1R Variants and host phenotypes with melanoma risk in CDKN2A mutation carriers: a GenoMEL study

    Background: Carrying the cyclin-dependent kinase inhibitor 2A (CDKN2A) germline mutations is associated with a high risk for melanoma. Penetrance of CDKN2A mutations is modified by pigmentation characteristics, nevus phenotypes, and some variants of the melanocortin-1 receptor gene (MC1R), which is known to have a role in the pigmentation process. However, investigation of the associations of both MC1R variants and host phenotypes with melanoma risk has been limited. Methods: We included 815 CDKN2A mutation carriers (473 affected, and 342 unaffected, with melanoma) from 186 families from 15 centers in Europe, North America, and Australia who participated in the Melanoma Genetics Consortium. In this family-based study, we assessed the associations of the four most frequent MC1R variants (V60L, V92M, R151C, and R160W) and the number of variants (1, ≥2 variants), alone or jointly with the host phenotypes (hair color, propensity to sunburn, and number of nevi), with melanoma risk in CDKN2A mutation carriers. These associations were estimated and tested using generalized estimating equations. All statistical tests were two-sided. Results: Carrying any one of the four most frequent MC1R variants (V60L, V92M, R151C, R160W) in CDKN2A mutation carriers was associated with a statistically significantly increased risk for melanoma across all continents (1.24 × 10^−6 ≤ P ≤ .0007). A consistent pattern of increase in melanoma risk was also associated with increase in number of MC1R variants. The risk of melanoma associated with at least two MC1R variants was 2.6-fold higher than the risk associated with only one variant (odds ratio = 5.83 [95% confidence interval = 3.60 to 9.46] vs 2.25 [95% confidence interval = 1.44 to 3.52]; Ptrend = 1.86 × 10^−8). The joint analysis of MC1R variants and host phenotypes showed statistically significant associations of melanoma risk, together with MC1R variants (.0001 ≤ P ≤ .04), hair color (.006 ≤ P ≤ .06), and number of nevi (6.9 × 10^−6 ≤ P ≤ .02). Conclusion: Results show that MC1R variants, hair color, and number of nevi were jointly associated with melanoma risk in CDKN2A mutation carriers. This joint association may have important consequences for risk assessments in familial settings.
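    The associations above were estimated with generalized estimating equations (GEE), which account for the clustering of mutation carriers within families. A minimal sketch of such a model (the data file, variable names and exchangeable working correlation below are assumptions, not the consortium's analysis code):

        # Hedged sketch: GEE logistic regression with family-level clustering.
        # Dataset and variable names are hypothetical placeholders.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("cdkn2a_carriers.csv")  # hypothetical dataset

        model = smf.gee(
            "melanoma ~ n_mc1r_variants + C(hair_color) + n_nevi",
            groups="family_id",                       # cluster on family
            data=df,
            family=sm.families.Binomial(),
            cov_struct=sm.cov_struct.Exchangeable(),  # working correlation within families
        ).fit()

        print(np.exp(model.params))      # odds ratios
        print(np.exp(model.conf_int()))  # 95% confidence intervals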

    A study of soft tissue sarcomas after childhood cancer in Britain

    Among 16 541 3-year survivors of childhood cancer in Britain, 39 soft tissue sarcomas (STSs) occurred and 1.1 sarcomas were expected, yielding a standardised incidence ratio (SIR) of 16.1. When retinoblastomas were excluded from the cohort, the SIR for STSs was 15.9, and the cumulative risk of developing a soft tissue tumour after childhood cancer within 20 years of 3-year survival was 0.23%. In the case–control study, there was a significant excess of STSs in those patients exposed to both radiotherapy (RT) and chemotherapy, which was five times that observed among those not exposed (P=0.02). On the basis of individual radiation dosimetry, there was evidence of a strong dose–response effect with a significant increase in the risk of STS with increasing dose of RT (P<0.001). This effect remained significant in a multivariate model. The adjusted risk in patients exposed to RT doses of over 3000 cGy was over 50 times the risk in the unexposed. There was evidence of a dose–response effect with exposure to alkylating agents, the risk increasing substantially with increasing cumulative dose (P=0.05). This effect remained after adjusting for the effect of radiation exposure.
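    The standardised incidence ratio quoted above is the ratio of observed to expected second cancers in the cohort. A minimal sketch of that calculation, with an exact Poisson confidence interval (the counts below are placeholders, not the cohort's figures):

        # Hedged sketch: standardised incidence ratio (SIR) = observed / expected,
        # with an exact Poisson 95% confidence interval. Counts are placeholders.
        from scipy.stats import chi2

        def sir_with_ci(observed: int, expected: float, alpha: float = 0.05):
            sir = observed / expected
            lower = 0.5 * chi2.ppf(alpha / 2, 2 * observed) / expected
            upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / expected
            return sir, (lower, upper)

        print(sir_with_ci(observed=12, expected=3.0))  # placeholder counts -> SIR 4.0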

    The challenges of transferring chronic illness patients to adult care: reflections from pediatric and adult rheumatology at a US academic center

    Background: Little is known about the transfer of care process from pediatric to adult rheumatology for patients with chronic rheumatic disease. The purpose of this study is to examine changes in disease status, treatment and health care utilization among adolescents transferring to adult care at the University of California San Francisco (UCSF). Methods: We identified 31 eligible subjects who transferred from pediatric to adult rheumatology care at UCSF between 1995 and 2005. Subject demographics, disease characteristics, disease activity and health care utilization were compared between the year prior to and the year following transfer of care. Results: The mean age at the last pediatric rheumatology visit was 19.5 years (17.4–22.0). Subject diagnoses included systemic lupus erythematosus (52%), mixed connective tissue disease (16%), juvenile idiopathic arthritis (16%), antiphospholipid antibody syndrome (13%) and vasculitis (3%). Nearly 30% of subjects were hospitalized for disease treatment or management of flares in the year prior to transfer, and 58% had active disease at the time of transfer. In the post-transfer period, almost 30% of subjects had an increase in disease activity. One patient died in the post-transfer period. The median transfer time between the last pediatric and first adult rheumatology visit was 7.1 months (range 0.7–33.6 months). Missed appointments were common in both the pre- and post-transfer periods. Conclusion: A significant percentage of patients who transfer from pediatric to adult rheumatology care at our center are likely to have active disease at the time of transfer, and disease flares are common during the transfer period. These findings highlight the importance of a seamless transfer of care between rheumatology providers.

    Sequential application of hyperspectral indices for delineation of stripe rust infection and nitrogen deficiency in wheat

    Nitrogen (N) fertilization is crucial for the growth and development of wheat crops, yet increased use of N can also result in increased stripe rust severity. Stripe rust infection and N deficiency both cause changes in foliar physiological activity and a reduction in plant pigments that result in chlorosis. Furthermore, stripe rust produces pustules on the leaf surface which, like chlorotic regions, have a yellow color. Quantifying the severity of each factor is critical for adopting appropriate management practices. Eleven widely used vegetation indices, based on mathematical combinations of narrow-band optical reflectance measurements in the visible/near-infrared wavelength range, were evaluated for their ability to discriminate and quantify stripe rust severity and N deficiency in a rust-susceptible wheat variety (H45) under varying conditions of nitrogen status. The physiological reflectance index (PhRI) and the leaf and canopy chlorophyll index (LCCI) provided the strongest correlations with levels of rust infection and N deficiency, respectively. When PhRI and LCCI were used in sequence, both N deficiency and rust infection levels were correctly classified in 82.5% and 55% of the plots at Zadoks growth stages 47 and 75, respectively. In misclassified plots, an overestimation of N deficiency was accompanied by an underestimation of the rust infection level, or vice versa. In 18% of the plots, there was a tendency to underestimate the severity of stripe rust infection even though the N-deficiency level was correctly predicted. The contrasting responses of the PhRI and LCCI to stripe rust infection and N deficiency, respectively, and the relative insensitivity of these indices to the other parameter make their use in combination suitable for quantifying levels of stripe rust infection and N deficiency in wheat crops under field conditions.
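    As an illustration of the sequential-index idea described above (the band centres, the normalised-difference form and the thresholds below are placeholder assumptions, not the paper's actual PhRI/LCCI definitions or calibrated cut-offs), a two-stage classification of a plot might look like:

        # Hedged sketch: compute two narrow-band indices from reflectance, then
        # classify N status first and rust infection second. Bands and thresholds
        # are placeholders for illustration only.
        import numpy as np

        def norm_diff(reflectance, band_a, band_b):
            """Generic normalised-difference index (R_a - R_b) / (R_a + R_b)."""
            ra, rb = reflectance[band_a], reflectance[band_b]
            return (ra - rb) / (ra + rb)

        def classify_plot(reflectance):
            lcci = norm_diff(reflectance, band_a=750, band_b=710)  # placeholder bands
            phri = norm_diff(reflectance, band_a=531, band_b=570)  # placeholder bands
            n_status = "N-deficient" if lcci < 0.3 else "N-sufficient"  # placeholder threshold
            rust = "rust-infected" if phri < -0.05 else "healthy"       # placeholder threshold
            return n_status, rust

        # reflectance supplied as {wavelength_nm: value}
        print(classify_plot({750: 0.55, 710: 0.35, 531: 0.08, 570: 0.09}))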

    A descriptive model of patient readiness, motivators, and hepatitis C treatment uptake among Australian prisoners

    Background: Hepatitis C virus infection (HCV) has a significant global health burden, with an estimated 2%–3% of the world's population infected and more than 350,000 people dying annually from HCV-related conditions including liver failure and liver cancer. Prisons potentially offer a relatively stable environment in which to commence treatment, as they usually provide good access to health care providers and are organised around routine and structure. Uptake of treatment for HCV, however, remains low in the community and in prisons. In this study, we explored factors affecting treatment uptake inside prisons and hypothesised that prisoners have unique issues influencing HCV treatment uptake as a consequence of their incarceration which are not experienced in other populations. Methods and Findings: We undertook a qualitative study exploring prisoners' accounts of why they refused, deferred, delayed or discontinued HCV treatment in prison. Between 2010 and 2013, 116 Australian inmates were interviewed from prisons in New South Wales, Queensland, and Western Australia. Prisoners experienced many factors similar to those which influence treatment uptake among people living with HCV infection in the community. Incarceration, however, changes how these factors are experienced, and these circumstances need to be better understood if the number of prisoners receiving treatment is to be increased. We developed a descriptive model of patient readiness and motivators for HCV treatment inside prisons and discuss how treatment uptake among prisoners can be improved. Conclusion: This study identified a broad and unique range of challenges to treatment of HCV in prison. Some of these are likely to be diminished by improved treatment options and improved models of health care delivery. Other barriers relate to inmates' understanding of their illness and stigmatisation by other inmates and custodial staff, and generally appear less amenable to change, although there is potential for peer-based education to address lack of knowledge and stigma.

    X-ray Absorption and Reflection in Active Galactic Nuclei

    X-ray spectroscopy offers an opportunity to study the complex mixture of emitting and absorbing components in the circumnuclear regions of active galactic nuclei, and to learn about the accretion process that fuels AGN and the feedback of material to their host galaxies. We describe the spectral signatures that may be studied and review the X-ray spectra and spectral variability of active galaxies, concentrating on progress from recent Chandra, XMM-Newton and Suzaku data for local type 1 AGN. We describe the evidence for absorption covering a wide range of column densities, ionization and dynamics, and discuss the growing evidence for partial-covering absorption from data at energies > 10 keV. Such absorption can also explain the observed X-ray spectral curvature and variability in AGN at lower energies and is likely an important factor in shaping the observed properties of this class of source. Consideration of self-consistent models for local AGN indicates that X-ray spectra likely comprise a combination of absorption and reflection effects from material originating within a few light days of the black hole as well as on larger scales. It is likely that AGN X-ray spectra may be strongly affected by the presence of disk-wind outflows that are expected in systems with high accretion rates, and we describe models that attempt to predict the effects of radiative transfer through such winds, and discuss the prospects for new data to test and address these ideas.
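    Partial-covering absorption of the kind discussed above has a simple functional form: a fraction of the intrinsic continuum passes through an absorbing column while the remainder reaches the observer unabsorbed. A toy sketch under rough assumptions (a power-law continuum and an approximate E^-3 photoelectric cross-section, rather than the tabulated cross-sections used in real spectral-fitting packages such as XSPEC):

        # Hedged sketch: observed flux = intrinsic * [f * exp(-N_H * sigma(E)) + (1 - f)].
        # The cross-section normalisation and E^-3 scaling are crude approximations
        # chosen only to illustrate the spectral curvature a partial coverer produces.
        import numpy as np

        def partial_covering(energy_keV, gamma=1.9, n_h=1e23, covering_fraction=0.5):
            intrinsic = energy_keV ** (-gamma)              # power-law continuum
            sigma = 2.0e-22 * energy_keV ** (-3)            # cm^2 per H atom, rough approximation
            transmitted = np.exp(-n_h * sigma)
            return intrinsic * (covering_fraction * transmitted + (1.0 - covering_fraction))

        energies = np.logspace(-0.5, 1.7, 100)  # ~0.3-50 keV
        flux = partial_covering(energies)       # curved spectrum: suppressed soft X-rays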