    Estimating offsets for avian displacement effects of anthropogenic impacts

    Biodiversity offsetting, or compensatory mitigation, is increasingly being used in temperate grassland ecosystems to compensate for unavoidable environmental damage from anthropogenic developments such as transportation infrastructure, urbanization, and energy development. Pursuit of energy independence in the United States will expand domestic energy production. Concurrent with this growth is increased disruption to wildlife habitats, including avian displacement from suitable breeding habitat. Recent studies at energy-extraction and energy-generation facilities have provided evidence for behavioral avoidance and thus reduced use of habitat by breeding waterfowl and grassland birds in the vicinity of energy infrastructure. To quantify and compensate for this loss in value of avian breeding habitat, it is necessary to determine a biologically based currency with which the sufficiency of offsets, in terms of biologically equivalent value, can be assessed. We describe a method for quantifying the amount of habitat needed to provide equivalent biological value for avifauna displaced by energy and transportation infrastructure, based on the ability to define five metrics: impact distance, impact area, pre-impact density, percent displacement, and offset density. We calculate percent displacement values for breeding waterfowl and grassland birds and demonstrate the applicability of our avian-impact offset method using examples for wind and oil infrastructure. We also apply our method to an example in which the biological value of the offset habitat is similar to the impacted habitat, based on similarity in habitat type (e.g., native prairie), geographical location, land use, and landscape composition, as well as to an example in which the biological value of the offset habitat is dissimilar to the impacted habitat.
We provide a worksheet that shows potential users how to apply our method to their specific developments, and a framework for developing decision-support tools aimed at achieving landscape-level conservation goals.
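The five metrics lend themselves to a simple back-of-envelope calculation. The sketch below is a hypothetical illustration of how such an offset might be computed, not the authors' published worksheet; the function name, the input values, and the way the metrics are combined are all assumptions.

```python
# Hypothetical sketch of an avian-impact offset calculation (not the
# authors' worksheet): assumes displaced breeding pairs are estimated
# from impact area, pre-impact density, and percent displacement, and
# that the offset habitat must absorb those pairs at the offset density.

def offset_area_needed(impact_area_ha, pre_impact_density,
                       percent_displacement, offset_density):
    """Hectares of offset habitat needed to absorb displaced pairs.

    Densities are in breeding pairs per hectare; percent_displacement
    is a fraction between 0 and 1.
    """
    displaced_pairs = impact_area_ha * pre_impact_density * percent_displacement
    return displaced_pairs / offset_density

# Illustrative values only: a 100 ha impact area, 0.5 pairs/ha before
# development, 40% displacement, offset habitat supporting 0.25 pairs/ha.
print(offset_area_needed(100, 0.5, 0.40, 0.25))  # -> 80.0 ha
```

In the dissimilar-habitat case described in the abstract, the offset density would typically be lower than the pre-impact density, and the required offset area correspondingly larger.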

    An observational study of Donor Ex Vivo Lung Perfusion in UK lung transplantation: DEVELOP-UK

    Background: Many patients awaiting lung transplantation die before a donor organ becomes available. Ex vivo lung perfusion (EVLP) allows initially unusable donor lungs to be assessed and reconditioned for clinical use. Objective: The objective of the Donor Ex Vivo Lung Perfusion in UK lung transplantation study was to evaluate the clinical effectiveness and cost-effectiveness of EVLP in increasing UK lung transplant activity. Design: A multicentre, unblinded, non-randomised, non-inferiority observational study to compare transplant outcomes between EVLP-assessed and standard donor lungs. Setting: Multicentre study involving all five UK officially designated NHS adult lung transplant centres. Participants: Patients aged ≄ 18 years with advanced lung disease accepted onto the lung transplant waiting list. Intervention: The study intervention was EVLP assessment of donor lungs before determining suitability for transplantation. Main outcome measures: The primary outcome measure was survival during the first 12 months following lung transplantation. Secondary outcome measures were patient-centred outcomes that are influenced by the effectiveness of lung transplantation and that contribute to the health-care costs. Results: Lungs from 53 donors unsuitable for standard transplant were assessed with EVLP, of which 18 (34%) were subsequently transplanted. A total of 184 participants received standard donor lungs. Owing to the early closure of the study, a non-inferiority analysis was not conducted. The Kaplan–Meier estimate of survival at 12 months was 0.67 [95% confidence interval (CI) 0.40 to 0.83] for the EVLP arm and 0.80 (95% CI 0.74 to 0.85) for the standard arm. The hazard ratio for overall 12-month survival in the EVLP arm relative to the standard arm was 1.96 (95% CI 0.83 to 4.67).
Patients in the EVLP arm required ventilation for a longer period and stayed longer in an intensive therapy unit (ITU) than patients in the standard arm, but duration of overall hospital stay was similar in both groups. There was a higher rate of very early grade 3 primary graft dysfunction (PGD) in the EVLP arm, but rates of PGD did not differ between groups after 72 hours. The requirement for extracorporeal membrane oxygenation (ECMO) support was higher in the EVLP arm (7/18, 38.9%) than in the standard arm (6/184, 3.2%). There were no major differences in rates of chest radiograph abnormalities, infection, lung function or rejection by 12 months. The cost of EVLP transplants is approximately £35,000 higher than the cost of standard transplants, as a result of the cost of the EVLP procedure and the increased ECMO use and ITU stay. Predictors of cost were quality of life on joining the waiting list, type of transplant and number of lungs transplanted. An exploratory model comparing an NHS lung transplant service that includes EVLP and standard lung transplants with one including only standard lung transplants resulted in an incremental cost-effectiveness ratio of £73,000. Interviews showed that patients had a good understanding of the need for, and the processes of, EVLP. If EVLP can increase the number of usable donor lungs and reduce waiting, it is likely to be acceptable to those waiting for lung transplantation. Study limitations include the small number of participants in the EVLP arm, which restricted analysis to descriptive statistics, and the change in EVLP protocol during the study. Conclusions: Overall, one-third of donor lungs subjected to EVLP were deemed suitable for transplant. Estimated survival over 12 months was lower than in the standard group, but the data were also consistent with no difference in survival between groups. Patients receiving these additional transplants experience a higher rate of early graft injury and need for unplanned ECMO support, at increased cost.
The small number of participants in the EVLP arm, a consequence of early study termination, limits the robustness of these conclusions. The reasons for the increased PGD rate, the high ECMO requirement and possible differences in lung injury between EVLP protocols need evaluation.
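The exploratory cost-effectiveness model compares an NHS service with EVLP against one without it using an incremental cost-effectiveness ratio (ICER), a standard health-economics metric: additional cost divided by additional health effect. The sketch below shows the general formula with purely illustrative numbers; the study's actual model inputs are not given in the abstract.

```python
def icer(cost_new, cost_standard, effect_new, effect_standard):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of health effect (e.g. per QALY gained) for the
    new service relative to the standard one."""
    return (cost_new - cost_standard) / (effect_new - effect_standard)

# Illustrative numbers only (the abstract reports an ICER of about
# £73,000 but does not give the underlying costs and effects):
print(icer(135_000, 100_000, 1.0, 0.5))  # -> 70000.0 per extra QALY
```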

    Reflections on the ethics of recruiting foreign-trained human resources for health

    Background: Developed countries' gains in health human resources (HHR) from developing countries with significantly lower ratios of health workers have raised questions about the ethics or fairness of recruitment from such countries. By attracting and/or facilitating migration for foreign-trained HHR, notably those from poorer, less well-resourced nations, recruitment practices and policies may be compromising the ability of developing countries to meet the health care needs of their own populations. Little is known, however, about actual recruitment practices. In this study we focus on Canada (a country with a long reliance on internationally trained HHR) and recruiters working for Canadian health authorities. Methods: We conducted interviews with health human resources recruiters employed by Canadian health authorities to describe their recruitment practices and perspectives and to determine whether and how they reflect ethical considerations. Results and discussion: We describe the methods that recruiters used to recruit foreign-trained health professionals and the systemic challenges and policies that form the working context for recruiters and recruits. HHR recruiters' reflections on the global flow of health workers from poorer to richer countries mirror much of the content of global-level discourse with regard to HHR recruitment. A predominant market discourse related to shortages of HHR outweighed discussions of human rights and ethical approaches to recruitment policy and action that consider global health impacts. Conclusions: We suggest that the concept of corporate social responsibility may provide a useful approach at the local organizational level for developing policies on ethical recruitment.
Such local policies and subsequent practices may inform public debate on the health equity implications of the HHR flows from poorer to richer countries inherent in the global health worker labour market, which in turn could influence political choices at all government and health system levels.

    The Effect of Agronomic Factors on the Yield of Winter Wheat in Crop Rotation with Livestock Production

    The aim of the study was to evaluate the influence not only of the year, but also of three agronomic factors, namely pre-crops, soil tillage, and application of fungicides, on the subsequent grain yield of winter wheat. The field trial was carried out at the Field Trial Station in Ćœabčice (South Moravia, Czech Republic) between 2014 and 2016, as part of a long-term field experiment focused on management of soil with livestock production. Winter wheat was grown after two pre-crops, namely alfalfa and silage maize. The soil was treated using three technologies, namely conventional tillage (CT) – ploughing to a depth of 0.24 m, minimum tillage (MT) – shallow loosening to a depth of 0.15 m, and no-tillage (NT) – direct sowing. In terms of fungicide treatment, two treatments were used and compared to a non-treatment variant. The results suggest that the influence of the pre-crop was not statistically significant. In contrast, the influence not only of the year but also of the soil tillage technology and fungicide treatment was confirmed. Yields were higher by 0.59 t/ha after shallow loosening and direct sowing than after traditional ploughing, and also higher after application of fungicides. In addition, the interactions between pre-crop and soil tillage, and between soil tillage and fungicide treatment, were found to be inconclusive.

    An evidence-based approach to the use of telehealth in long-term health conditions: development of an intervention and evaluation through pragmatic randomised controlled trials in patients with depression or raised cardiovascular risk

    Background: Health services internationally are exploring the potential of telehealth to support the management of the growing number of people with long-term conditions (LTCs). Aim: To develop, implement and evaluate new care programmes for patients with LTCs, focusing on two common LTCs as exemplars: depression or high cardiovascular disease (CVD) risk. Methods: Development: We synthesised quantitative and qualitative evidence on the effectiveness of telehealth for LTCs, conducted a qualitative study based on interviews with patients and staff and undertook a postal survey to explore which patients are interested in different forms of telehealth. Based on these studies we developed a conceptual model [TElehealth in CHronic disease (TECH) model] as a framework for the development and evaluation of the Healthlines Service for patients with LTCs. Implementation: The Healthlines Service consisted of regular telephone calls to participants from health information advisors, supporting them to make behaviour change and to use tailored online resources. Advisors sought to optimise participants' medication and to improve adherence. Evaluation: The Healthlines Service was evaluated with linked pragmatic randomised controlled trials comparing the Healthlines Service plus usual care with usual care alone, with nested process and economic evaluations. Participants were adults with depression or raised CVD risk recruited from 43 general practices in three areas of England. The primary outcome was response to treatment and the secondary outcomes included anxiety (depression trial), individual risk factors (CVD risk trial), self-management skills, medication adherence, perceptions of support, access to health care and satisfaction with treatment. Trial results: Depression trial: In total, 609 participants were randomised and the retention rate was 86%.
Response to treatment [Patient Health Questionnaire 9-items (PHQ-9) reduction of ≄ 5 points and score of < 10 after 4 months] was higher in the intervention group (27%, 68/255) than in the control group (19%, 50/270) [odds ratio 1.7, 95% confidence interval (CI) 1.1 to 2.5; p = 0.02]. Anxiety also improved. Intervention participants reported better access to health support, greater satisfaction with treatment and small improvements in self-management, but not improved medication adherence. CVD risk trial: In total, 641 participants were randomised and the retention rate was 91%. Response to treatment (maintenance of/reduction in QRISK®2 score after 12 months) was higher in the intervention group (50%, 148/295) than in the control group (43%, 124/291), which does not exclude a null effect (odds ratio 1.3, 95% CI 1.0 to 1.9; p = 0.08). The intervention was associated with small improvements in blood pressure and weight, but not smoking or cholesterol. Intervention participants were more likely to adhere to medication, reported better access to health support and greater satisfaction with treatment, but few improvements in self-management. The Healthlines Service was likely to be cost-effective for CVD risk, particularly if the benefits are sustained, but not for depression. The intervention was implemented largely as planned, although initial delays and later disruption to delivery because of the closure of NHS Direct may have adversely affected participant engagement. Conclusion: The Healthlines Service, designed using an evidence-based conceptual model, provided modest health benefits and participants valued the better access to care and extra support provided. This service was cost-effective for CVD risk but not depression. These findings of small benefits at extra cost are consistent with previous pragmatic research on the implementation of comprehensive telehealth programmes for LTCs.
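The depression trial's primary-outcome criterion is mechanical enough to express directly. The sketch below encodes the criterion as stated in the trial description (PHQ-9 reduction of at least 5 points and a 4-month score below 10); the function name and example scores are assumptions for illustration.

```python
def phq9_response(baseline_score, month4_score):
    """Response to treatment as defined in the depression trial:
    a PHQ-9 reduction of >= 5 points AND a 4-month score below 10."""
    return (baseline_score - month4_score) >= 5 and month4_score < 10

print(phq9_response(18, 9))   # -> True (9-point drop, score below 10)
print(phq9_response(12, 10))  # -> False (2-point drop, score not below 10)
```

Both conditions must hold: a large reduction that still leaves the score at 10 or above, or a sub-10 score reached by a drop of fewer than 5 points, does not count as response.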

    Evidence for models of diagnostic service provision in the community: literature mapping exercise and focused rapid reviews

    Background: Current NHS policy favours the expansion of diagnostic testing services in community and primary care settings. Objectives: Our objectives were to identify current models of community diagnostic services in the UK and internationally and to assess the evidence for quality, safety and clinical effectiveness of such services. We were also interested in whether or not there is any evidence to support a broader range of diagnostic tests being provided in the community. Review methods: We performed an initial broad literature mapping exercise to assess the quantity and nature of the published research evidence. The results were used to inform selection of three areas for investigation in more detail. We chose to perform focused reviews on logistics of diagnostic modalities in primary care (because the relevant issues differ widely between different types of test); diagnostic ultrasound (a key diagnostic technology affected by developments in equipment); and a diagnostic pathway (assessment of breathlessness) typically delivered wholly or partly in primary care/community settings. Databases and other sources searched, and search dates, were decided individually for each review. Quantitative and qualitative systematic reviews and primary studies of any design were eligible for inclusion. Results: We identified seven main models of service that are delivered in primary care/community settings, in most cases with the possible involvement of community/primary care staff. Not all of these models are relevant to all types of diagnostic test. Overall, the evidence base for community- and primary care-based diagnostic services was limited, with very few controlled studies comparing different models of service. We found evidence from different settings that these services can reduce referrals to secondary care and allow more patients to be managed in primary care, but the quality of the research was generally poor.
Evidence on the quality (including diagnostic accuracy and appropriateness of test ordering) and safety of such services was mixed. Conclusions: In the absence of clear evidence of superior clinical effectiveness and cost-effectiveness, the expansion of community-based services appears to be driven by other factors. These include policies to encourage moving services out of hospitals; the promise of reduced waiting times for diagnosis; the availability of a wider range of suitable tests and/or cheaper, more user-friendly equipment; and the ability of commercial providers to bid for NHS contracts. However, service development also faces a number of barriers, including issues related to staffing, training, governance and quality control. Limitations: We have not attempted to cover all types of diagnostic technology in equal depth. Time and staff resources constrained our ability to carry out review processes in duplicate. Research in this field is limited by the difficulty of obtaining, from publicly available sources, up-to-date information about what models of service are commissioned, where and from which providers. Future work: There is a need for research to compare the outcomes of different service models using robust study designs. Comparisons of 'true' community-based services with secondary care-based open-access services and rapid access clinics would be particularly valuable. There are specific needs for economic evaluations and for studies that incorporate effects on the wider health system. There appears to be no easy way of identifying what services are being commissioned from whom and keeping up with local evaluations of new services, suggesting a need to improve the availability of information in this area. Funding: The National Institute for Health Research Health Services and Delivery Research programme.

    Accurate diagnosis of latent tuberculosis in children, people who are immunocompromised or at risk from immunosuppression and recent arrivals from countries with a high incidence of tuberculosis: systematic review and economic evaluation
