    Modelling Eurasian Beaver Foraging Habitat and Dam Suitability, for Predicting the Location and Number of Dams Throughout Catchments in Great Britain

    Eurasian beaver (Castor fiber) populations are expanding across Europe. Depending on location, beaver dams bring multiple benefits and/or require management. Using nationally available data, we developed a Beaver Forage Index (BFI), identifying beaver foraging habitat, and a Beaver Dam Capacity (BDC) model, classifying the suitability of river reaches for dam construction, to estimate the location and number of dams at catchment scales. Models were executed across three catchments in Great Britain (GB) that contain beavers. An area of 6747 km² was analysed for BFI and 16,739 km of stream for BDC. Field surveys identified 258 km of channel containing beaver activity and 89 dams, providing data to test predictions. Models were evaluated using a categorical binomial Bayesian framework to calculate the probability of foraging and dam construction. The BFI and BDC models successfully categorised the use of reaches for foraging and damming, with higher-scoring reaches being preferred. The highest-scoring categories were ca. 31 and 79 times more likely to be used than the lowest for foraging and damming, respectively. Zero-inflated negative binomial regression showed that modelled dam capacity was significantly related (p = 0.01) to observed damming and was used to predict the number of dams that may occur. Estimated densities of dams, averaged across each catchment, ranged from 0.4 to 1.6 dams/km, though local densities may reach 30 dams/km. These models provide fundamental information describing the distribution of beaver foraging habitat, where dams may be constructed, and how many may occur. This supports the development of policy and management concerning the reintroduction and recolonisation of beavers.
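
    As a minimal sketch of the final modelling step described above, the snippet below fits a zero-inflated negative binomial (ZINB) regression relating modelled dam capacity to observed dam counts per reach. The data are synthetic and the column names ("dam_capacity", observed counts) are assumptions for illustration, not the authors' variables.

```python
# Hedged sketch of a ZINB regression of observed dams on modelled dam
# capacity, using synthetic per-reach data (most reaches hold zero dams).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(42)

n = 500
capacity = rng.gamma(shape=2.0, scale=1.5, size=n)    # modelled dams/km (assumed)
lam = np.exp(-1.0 + 0.4 * capacity)                   # count mean rises with capacity
counts = rng.negative_binomial(n=2, p=2 / (2 + lam))  # NB draws with mean lam
counts[rng.random(n) < 0.5] = 0                       # inject excess zeros

X = sm.add_constant(pd.DataFrame({"dam_capacity": capacity}))
model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, p=2)
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())

# Predicted dam numbers per reach can then be summed to catchment totals.
predicted = result.predict(X, exog_infl=X)
print("predicted total dams:", float(predicted.sum()))
```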

    An evidence-based assessment of the past distribution of Golden and White-tailed Eagles across Wales

    Two species of eagles (Golden and White-tailed) bred in Wales during prehistoric and historic times and became regionally extinct as breeding species in the mid-1800s. They are iconic and charismatic, and discussions about reintroducing them into the Welsh landscape have been ongoing for years. Reintroductions, however, can be risky, costly and/or contentious. To address these concerns, and to judge whether it is appropriate to reintroduce a regionally extinct species, the International Union for Conservation of Nature (IUCN) has produced criteria by which a proposed reintroduction can be assessed. A key criterion is that the potential reintroduction location lies within the former range of the species. In this study, we addressed this criterion by assessing the past distributions of Golden and White-tailed Eagles within Wales. Using historic observational data, fossil/archaeological records, and evidence from place-names in the Welsh language, we found strong evidence for the presence of both eagle species in Wales in prehistoric and historic times. We used kernel density functions to model the likely core distributions of each species within Wales. The resulting core distributions encompassed much of central and north-west Wales for both species, with the White-tailed Eagle exhibiting a wider core distribution extending into south Wales. Our results fill knowledge gaps regarding the historic ranges of both species in Britain, and support the future restoration of either or both species to Wales.
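
    A minimal sketch of the kernel-density step follows: estimating a species' likely core historic distribution from point records (observations, fossil/archaeological finds, place-names). The coordinates, bandwidth, and the 50% core threshold are invented for illustration, not the study's values.

```python
# Hedged sketch: Gaussian kernel density over hypothetical record locations,
# thresholded to delineate a "core" distribution.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Hypothetical easting/northing (km) of historic eagle records in Wales.
records = rng.normal(loc=[260.0, 290.0], scale=[40.0, 55.0], size=(60, 2))

# Bandwidth (km) controls how far each record "spreads" its influence.
kde = KernelDensity(kernel="gaussian", bandwidth=25.0).fit(records)

# Evaluate density on a grid and keep the densest half as the core range.
xx, yy = np.meshgrid(np.linspace(150, 400, 120), np.linspace(150, 450, 120))
grid = np.column_stack([xx.ravel(), yy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(xx.shape)
core = density >= np.quantile(density, 0.5)
print(f"core cells: {core.sum()} of {core.size}")
```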

    Intended Consequences Statement in Conservation Science and Practice

    As the biodiversity crisis accelerates, the stakes are higher for threatened plants and animals. Rebuilding the health of our planet will require addressing underlying threats at many scales, including habitat loss and climate change. Conservation interventions such as habitat protection, management, restoration, predator control, translocation, genetic rescue, and biological control have the potential to help threatened or endangered species avert extinction. These existing, well-tested methods can be complemented and augmented by more frequent and faster adoption of new technologies, such as powerful new genetic tools. In addition, synthetic biology might offer solutions to currently intractable conservation problems. We believe that conservation needs to be bold and clear-eyed in this moment of great urgency.

    Clinical Utility of Random Anti–Tumor Necrosis Factor Drug–Level Testing and Measurement of Antidrug Antibodies on the Long-Term Treatment Response in Rheumatoid Arthritis

    Objective: To investigate whether antidrug antibodies and/or drug non-trough levels predict the long-term treatment response in a large cohort of patients with rheumatoid arthritis (RA) treated with adalimumab or etanercept, and to identify factors influencing antidrug antibody and drug levels to optimize future treatment decisions.  Methods: A total of 331 patients from an observational prospective cohort were selected (160 treated with adalimumab and 171 with etanercept). Antidrug antibody levels were measured by radioimmunoassay, and drug levels by enzyme-linked immunosorbent assay, in 835 serial serum samples obtained 3, 6, and 12 months after initiation of therapy. The association between antidrug antibodies and drug non-trough levels and the treatment response (change in the Disease Activity Score in 28 joints [DAS28]) was evaluated.  Results: Among patients who completed 12 months of follow-up, antidrug antibodies were detected in 24.8% of those receiving adalimumab (31 of 125) and in none of those receiving etanercept. At 3 months, antidrug antibody formation and low adalimumab levels were significant predictors of no response according to the European League Against Rheumatism (EULAR) criteria at 12 months (area under the receiver operating characteristic curve 0.71 [95% confidence interval (95% CI) 0.57, 0.85]). Antidrug antibody-positive patients received lower median dosages of methotrexate than antidrug antibody-negative patients (15 mg/week versus 20 mg/week; P = 0.01) and had a longer disease duration (14.0 versus 7.7 years; P = 0.03). The adalimumab level was the best predictor of change in the DAS28 at 12 months, after adjustment for confounders (regression coefficient 0.060 [95% CI 0.015, 0.10], P = 0.009). Etanercept levels were associated with the EULAR response at 12 months (regression coefficient 0.088 [95% CI 0.019, 0.16], P = 0.012); however, this association was not significant after adjustment. A body mass index of ≥30 kg/m² and poor adherence were associated with lower drug levels.  Conclusion: Pharmacologic testing in anti-tumor necrosis factor-treated patients is clinically useful even in the absence of trough levels. At 3 months, antidrug antibodies and low adalimumab levels are significant predictors of no response according to the EULAR criteria at 12 months.
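
    The snippet below is a hedged sketch of the kind of 3-month prediction reported here: a logistic regression of 12-month EULAR non-response on antidrug-antibody status and drug level, summarised by ROC AUC. The data, effect sizes, and variable names are invented for illustration.

```python
# Hedged sketch: predicting non-response from drug level and antibody status,
# evaluated with an ROC area under the curve (the paper reports 0.71).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300

ada_level = rng.gamma(3.0, 3.0, n)               # drug level at 3 months (assumed units)
anti_drug = (rng.random(n) < 0.25).astype(int)   # antidrug antibodies detected
# Lower drug level and antibody positivity raise the odds of non-response.
logit = 1.0 - 0.25 * ada_level + 1.2 * anti_drug
non_response = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([ada_level, anti_drug])
clf = LogisticRegression().fit(X, non_response)
auc = roc_auc_score(non_response, clf.predict_proba(X)[:, 1])
print(f"in-sample AUC: {auc:.2f}")
```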

    BHPR research: qualitative 1. Complex reasoning determines patients' perception of outcome following foot surgery in rheumatoid arthritis

    Background: Foot surgery is common in patients with RA, but research into surgical outcomes is limited and conceptually flawed, as current outcome measures lack face validity: to date no one has asked patients what is important to them. This study aimed to determine which factors are important to patients when evaluating the success of foot surgery in RA. Methods: Semi-structured interviews of RA patients who had undergone foot surgery were conducted and transcribed verbatim. Thematic analysis of the interviews was conducted to explore issues that were important to patients. Results: 11 RA patients (9 male; mean age 59 years; mean disease duration 22 years; mean 3 years post-surgery) with mixed experiences of foot surgery were interviewed. Patients interpreted outcome with respect to a multitude of factors; frequently, positive change in one aspect contrasted with negative opinions about another. Overall, four major themes emerged. Function: Functional ability and participation in valued activities were very important to patients. Walking ability was a key concern, but patients interpreted levels of activity in light of other aspects of their disease, reflecting on change in functional ability more than overall level. Positive feelings of improved mobility were often moderated by negative self-perception ("I mean, I still walk like a waddling duck"). Appearance: Appearance was important to almost all patients but was perhaps the most complex theme of all. Physical appearance, foot shape, and footwear were closely interlinked, yet patients saw these as distinct concepts. Patients' need to legitimize these feelings was clear, and they frequently entered into a defensive repertoire ("it's not cosmetic surgery; it's something that's more important than that, you know?"). Clinician opinion: Surgeons' post-operative evaluation of the procedure was very influential. The impact of this appraisal continued to affect patients' lasting impression, irrespective of how the outcome compared to their initial goals ("when he'd done it ... he said that hasn't worked as good as he'd wanted to ... but the pain has gone"). Pain: Whilst pain was important to almost all patients, it appeared to be less important than the other themes. Pain was predominantly raised when it influenced other themes, such as function; many patients still felt the need to legitimize their foot pain in order for health professionals to take it seriously ("in the end I went to my GP because it had happened a few times, and I went to an orthopaedic surgeon who was quite dismissive of it; it was like, what are you complaining about"). Conclusions: Patients interpret the outcome of foot surgery using a multitude of interrelated factors, particularly functional ability, appearance, and surgeons' appraisal of the procedure. While pain was often noted, it appeared less important than other factors in the overall outcome of surgery. Future research into foot surgery should incorporate the complexity of how patients determine their outcome. Disclosure statement: All authors have declared no conflicts of interest.

    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder characterised by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom. Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded. Results: The overall prevalence of delirium was 16.3% (483 patients). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) for CFS 4 vs CFS 1–3; OR 12.4 (6.2–24.5) for CFS 8 vs CFS 1–3]. Higher CFS was associated with reduced recognition of delirium (OR 0.7 (0.3–1.9) for CFS 4 compared with 0.2 (0.1–0.7) for CFS 8). Both associations were independent of age and dementia. Conclusion: We have demonstrated an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the most frail patients are the least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
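
    As a minimal sketch of the odds-ratio comparison reported here (delirium risk in a frailty category versus the CFS 1–3 reference group), the snippet below computes an OR and its 95% confidence interval from a 2x2 table. The counts are illustrative assumptions, not the study's data.

```python
# Hedged sketch: odds ratio and Wald 95% CI from an invented 2x2 table.
import numpy as np
from scipy.stats import norm

# rows: delirium / no delirium; cols: CFS 8 vs CFS 1-3 (reference)
a, b = 40, 160   # CFS 8:   delirium, no delirium (made-up counts)
c, d = 20, 760   # CFS 1-3: delirium, no delirium (made-up counts)

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1, 1]) * norm.ppf(0.975) * se_log_or)
print(f"OR {odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```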
