
    Why is it difficult to implement e-health initiatives? A qualitative study

    Background: The use of information and communication technologies in healthcare is seen as essential for high-quality and cost-effective healthcare. However, implementation of e-health initiatives has often been problematic, with many failing to demonstrate predicted benefits. This study aimed to explore and understand the experiences of implementers (the senior managers and other staff charged with implementing e-health initiatives) and their assessment of factors that promote or inhibit the successful implementation, embedding, and integration of e-health initiatives.

    Methods: We used a case study methodology, with semi-structured interviews with implementers for data collection. Case studies were selected to provide a range of healthcare contexts (primary, secondary, and community care), e-health initiatives, and degrees of normalization. The initiatives studied were a Picture Archiving and Communication System (PACS) in secondary care, a Community Nurse Information System (CNIS) in community care, and Choose and Book (C&B) across the primary-secondary care interface. Implementers were selected to provide a range of seniority, including chief executive officers, middle managers, and staff with 'on the ground' experience. Interview data were analyzed using a framework derived from Normalization Process Theory (NPT).

    Results: Twenty-three interviews were completed across the three case studies. There were wide differences in experiences of implementation and embedding across these case studies, and these differences were well explained by the collective action components of NPT. New technology was most likely to 'normalize' where implementers perceived that it had a positive impact on interactions between professionals and patients and between different professional groups, and fit well with the organisational goals and skill sets of existing staff. However, where implementers perceived problems in one or more of these areas, they also perceived a lower level of normalization.

    Conclusions: Implementers had rich understandings of barriers and facilitators to successful implementation of e-health initiatives, and their views should continue to be sought in future research. NPT can be used to explain observed variations in implementation processes, and may be useful in drawing planners' attention to potential problems with a view to addressing them during implementation planning.

    Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol

    Background: Understanding implementation processes is key to ensuring that complex interventions in healthcare are taken up in practice and thus maximize intended benefits for service provision and (ultimately) care to patients. Normalization Process Theory (NPT) provides a framework for understanding how a new intervention becomes part of normal practice. This study aims to develop and validate simple generic tools derived from NPT, to be used to improve the implementation of complex healthcare interventions.

    Objectives: The objectives of this study are to: develop a set of NPT-based measures and formatively evaluate their use for identifying implementation problems and monitoring progress; conduct a preliminary evaluation of these measures across a range of interventions and contexts, and identify factors that affect this process; explore the utility of these measures for predicting outcomes; and develop an online users’ manual for the measures.

    Methods: A combination of qualitative (workshops, item development, user feedback, cognitive interviews) and quantitative (survey) methods will be used to develop the NPT measures and to test their utility in six healthcare intervention settings.

    Discussion: The measures developed in the study will be available for use by those involved in planning, implementing, and evaluating complex interventions in healthcare, and have the potential to enhance the chances of their implementation, leading to sustained changes in working practices.

    Dietary supplement use among health care professionals enrolled in an online curriculum on herbs and dietary supplements

    Background: Although many health care professionals (HCPs) in the United States have been educated about and recommend dietary supplements, little is known about their personal use of dietary supplements and the factors associated with their use.

    Methods: We surveyed HCPs at the point of their enrollment in an online course about dietary supplements between September 2004 and May 2005. We used multivariable logistic regression to analyze demographic and practice factors associated with use of dietary supplements.

    Results: Of the 1249 health care professionals surveyed, 81% reported having used a vitamin, mineral, or other non-herbal dietary supplement in the last week. Use varied by profession, with the highest rates among nurses (88%) and physician assistants or nurse practitioners (84%) and the lowest rates among pharmacists (66%) and trainees (72%). The most frequently used supplements were multivitamins (60%), calcium (40%), vitamin B (31%), vitamin C (30%), and fish oil (24%). Factors associated with higher supplement use were older age, female sex, high knowledge of dietary supplements, and discussing dietary supplements with patients. In our adjusted model, nurses were more likely than other professionals to use a multivitamin, and students were more likely to use calcium.

    Conclusion: Among HCPs enrolled in an online course about dietary supplements, women, older clinicians, those with higher knowledge, and those who talk with patients about dietary supplements reported higher use of dietary supplements. Additional research is necessary to understand the impact of professionals' personal use of dietary supplements on their communication with patients about them.
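    As a rough illustration of the multivariable logistic regression described above, the sketch below fits a comparable model in Python with statsmodels. All variable names and the synthetic data are hypothetical stand-ins; the survey's actual fields, coding, and covariates are not reproduced here.

```python
# Hypothetical sketch of a multivariable logistic regression of supplement
# use on demographic and practice factors; data and variable names invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(25, 70, n),                    # years
    "female": rng.integers(0, 2, n),                   # 1 = female
    "knowledge_score": rng.integers(0, 11, n),         # 0-10 self-rated knowledge
    "discusses_with_patients": rng.integers(0, 2, n),  # 1 = discusses supplements
})
# Synthetic binary outcome loosely mimicking the reported associations
logit = (-2.0 + 0.03 * df["age"] + 0.5 * df["female"]
         + 0.15 * df["knowledge_score"] + 0.6 * df["discusses_with_patients"])
df["uses_supplement"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["age", "female", "knowledge_score",
                        "discusses_with_patients"]])
result = sm.Logit(df["uses_supplement"], X).fit(disp=False)
print(np.exp(result.params))  # odds ratios; values > 1 mean higher odds of use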

    Natural Plant Sugar Sources of Anopheles Mosquitoes Strongly Impact Malaria Transmission Potential

    An improved knowledge of mosquito life history could strengthen malaria vector control efforts, which primarily focus on killing mosquitoes indoors using insecticide-treated nets and indoor residual spraying. Natural sugar sources, usually floral nectars of plants, are a primary energy resource for adult mosquitoes, but their role in regulating the dynamics of mosquito populations is unclear. To determine how sugar availability impacts Anopheles sergentii populations, mark-release-recapture studies were conducted in two oases in Israel, one lacking and one containing the local primary sugar source, flowering Acacia raddiana trees. Compared with population estimates from the sugar-rich oasis, An. sergentii in the sugar-poor oasis showed a smaller population size (37,494 vs. 85,595), lower survival rates (0.72 vs. 0.93), and prolonged gonotrophic cycles (3.33 vs. 2.36 days). The estimated number of females older than the extrinsic incubation period of malaria (10 days) in the sugar-rich site was four times greater than in the sugar-poor site. Sugar feeding detected in mosquito guts was significantly more frequent in the sugar-rich site (73%) than in the sugar-poor site (48%). In contrast, plant tissue feeding (a poor-quality sugar source) was much rarer in the sugar-rich habitat (0.3%) than in the sugar-poor site (30%). More importantly, the estimated vectorial capacity, a standard measure of malaria transmission potential, was more than 250-fold higher in the sugar-rich oasis than in the sugar-poor site. Our results convincingly show that the availability of sugar sources in the local environment is a major determinant regulating the dynamics of mosquito populations and their vector potential, suggesting that control interventions targeting sugar-feeding mosquitoes are a promising tactic for combating transmission of malaria parasites and other pathogens.
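    As a plausibility check on the 250-fold figure, the classic Macdonald/Garrett-Jones vectorial capacity formula, C = m·a²·pⁿ/(−ln p), can be evaluated with the survival rates, gonotrophic cycle lengths, and population sizes quoted above. In the sketch below, the human blood index and the human population denominator are invented placeholders (both cancel in the between-site ratio), and the study's actual calculation is not reproduced; the point is only that the quoted parameters are consistent with a ratio of roughly 250-fold.

```python
import math

def vectorial_capacity(m: float, a: float, p: float, n: float) -> float:
    """Macdonald/Garrett-Jones vectorial capacity: C = m * a**2 * p**n / (-ln p).
    m: mosquitoes per human; a: human bites per mosquito per day;
    p: daily survival probability; n: extrinsic incubation period (days)."""
    return m * a ** 2 * p ** n / (-math.log(p))

HBI = 0.5      # assumed human blood index (hypothetical placeholder)
HUMANS = 1000  # assumed human population per oasis (hypothetical placeholder)
# a is approximated as HBI / gonotrophic cycle length (one blood meal per cycle)
rich = vectorial_capacity(m=85_595 / HUMANS, a=HBI / 2.36, p=0.93, n=10)
poor = vectorial_capacity(m=37_494 / HUMANS, a=HBI / 3.33, p=0.72, n=10)
print(f"sugar-rich : sugar-poor capacity ratio ~= {rich / poor:.0f}")
# prints ~266, consistent with the "more than 250-fold" difference above
```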

    An Approach to Enhance the Conservation-Compatibility of Solar Energy Development

    The rapid pace of climate change poses a major threat to biodiversity. Utility-scale renewable energy development (>1 MW capacity) is a key strategy for reducing greenhouse gas emissions, but development of those facilities can also have adverse effects on biodiversity. Here, we examine the synergy between renewable energy generation goals and biodiversity conservation goals in the 13 million ha Mojave Desert of the southwestern USA. We integrated spatial data on biodiversity conservation value, solar energy potential, and land surface slope angle (a key determinant of development feasibility) and found sufficient area to meet renewable energy goals without developing on lands of relatively high conservation value. Indeed, we found nearly 200,000 ha of lower-conservation-value land below the most restrictive slope angle (<1%); that area could meet the state of California’s current 33% renewable energy goal 1.8 times over. We found over 740,000 ha below the highest slope angle (<5%), an area that could meet California’s renewable energy goal seven times over. Our analysis also suggests that the supply of high-quality habitat on private land may be insufficient to mitigate impacts from future solar projects, so enhancing public land management may need to be considered among the options for offsetting such impacts. Using the approach presented here, planners could reduce development impacts on areas of higher conservation value and thereby reduce trade-offs between converting to a green energy economy and conserving biodiversity.
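    The two area figures quoted above can be checked against each other with simple arithmetic: if roughly 200,000 ha meets the 33% goal 1.8 times over, the implied land requirement per goal is about 111,000 ha, and 740,000 ha then covers it about 6.7 times, matching "seven times over". A minimal sketch:

```python
# Back-of-the-envelope consistency check on the two area figures above.
area_at_1pct_ha = 200_000   # lower-conservation-value land below 1% slope
area_at_5pct_ha = 740_000   # lower-conservation-value land below 5% slope

ha_per_goal = area_at_1pct_ha / 1.8  # implied land needed to meet the 33% goal once
multiples_at_5pct = area_at_5pct_ha / ha_per_goal
print(f"{ha_per_goal:,.0f} ha per goal; "
      f"740,000 ha covers the goal {multiples_at_5pct:.1f} times")
# prints: 111,111 ha per goal; 740,000 ha covers the goal 6.7 times
```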

    Perceived difficulty and appropriateness of decision making by General Practitioners: a systematic review of scenario studies

    Background: Health-care quality in primary care depends largely on the appropriateness of General Practitioners’ (GPs; Primary Care or Family Physicians) decisions, which may be influenced by how difficult they perceive those decisions to be. Patient scenarios (clinical or case vignettes) are widely used to investigate GPs’ decision making. This review aimed to identify the extent to which perceived decision difficulty, decision appropriateness, and their relationship have been assessed in scenario studies of GPs’ decision making; to identify possible determinants of difficulty and appropriateness; and to investigate the relationship between difficulty and appropriateness.

    Methods: MEDLINE, EMBASE, PsycINFO, the Cochrane Library, and Web of Science were searched for scenario studies of GPs’ decision making. One author completed article screening; ten percent of titles and abstracts were checked by an independent volunteer, resulting in 91% agreement. Data on decision difficulty and appropriateness were extracted by one author and descriptively synthesised. Chi-squared tests were used to explore associations between decision appropriateness, decision type, and decision appropriateness assessment method.

    Results: Of 152 included studies, 66 assessed decision appropriateness and five assessed perceived difficulty. While no studies assessed the relationship between perceived difficulty and appropriateness, one study objectively varied the difficulty of the scenarios and assessed the relationship between a measure of objective difficulty and appropriateness. Across the 38 studies where calculations were possible, 62% of decisions were appropriate as defined by the appropriateness standard used. Chi-squared tests identified statistically significant associations between decision appropriateness, decision type, and decision appropriateness assessment method. Findings suggested a negative relationship between decision difficulty and appropriateness, while interventions may have the potential to reduce perceived difficulty.

    Conclusions: Scenario-based research into GPs’ decisions rarely considers the relationship between perceived decision difficulty and decision appropriateness. The links between these decisional components require further investigation.
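    As a minimal sketch of the kind of chi-squared association test described in the Methods above, the Python snippet below runs scipy's test on an invented contingency table (counts of appropriate vs. inappropriate decisions by decision type); the numbers are illustrative only and are not data from the review.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented contingency table: rows are decision types, columns are counts of
# appropriate vs. inappropriate decisions (illustrative only).
table = np.array([
    [120, 45],   # e.g. diagnostic decisions
    [80, 70],    # e.g. prescribing decisions
])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value suggests appropriateness rates differ by decision type.
```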