
    Qualitative study exploring the phenomenon of multiple electronic prescribing systems within single hospital organisations

    BACKGROUND: A previous census of electronic prescribing (EP) systems in England showed that more than half of hospitals with EP reported more than one EP system within the same hospital. Our objectives were to describe the rationale for having multiple EP systems within a single hospital and to explore stakeholders' perceptions of the advantages and disadvantages of multiple systems, including any impact on patient safety. METHODS: Hospitals were selected from previous census respondents. A decision matrix was developed to achieve a maximum-variation sample, and snowball sampling was used to recruit stakeholders from different professional backgrounds. An a priori framework was then used to guide and analyse semi-structured interviews. RESULTS: Ten participants, comprising pharmacists, doctors and a nurse, were interviewed across four hospitals. The findings suggest that the use of multiple EP systems was not strategically planned. Three co-existing models of EP system adoption in hospitals were identified: organisation-led, clinician-led and clinical network-led, which may have contributed to the use of multiple systems. Although there were some perceived benefits of multiple EP systems, particularly in niche specialities, many disadvantages were described. These included issues related to access, staff training, workflow, work duplication and system interfacing. Fragmentation of the documentation of the patient's journey was a major safety concern. DISCUSSION: The complexity of EP system adoption and deficiencies in IT strategic planning may have contributed to the use of multiple EP systems in the NHS. In the near to mid term, multiple EP systems may remain in place in many English hospitals, which may create challenges to quality and patient safety. Peer reviewed.

    Congenital Plasmodium falciparum infection in neonates in Muheza District, Tanzania

    BACKGROUND: Although recent reports on congenital malaria suggest that the incidence is increasing, it is difficult to determine whether the clinical disease is due to parasites acquired before delivery or to contamination by maternal blood at birth. Understanding how parasites are acquired is important for estimating the incidence of congenital malaria and for designing preventive measures. The aim of this study was to determine whether the first Plasmodium falciparum malaria episode in infants is caused by the same parasites present on the placenta at birth. METHODS: Babies born to mothers with P. falciparum parasites detected on the placenta by PCR were followed up for two years and observed for malaria episodes. Paired placental and infant peripheral blood samples taken at the first malaria episode within the first three months of life were genotyped (msp2) to determine genetic relatedness. Selected amplifications from nested PCR were sequenced and compared between pairs. RESULTS: Eighteen (19.1%) of the 95 infants followed up developed clinical malaria within the first three months of life. Eight (60%) of 14 pairs of sequenced placental and cord samples were genetically related, while six (40%) were genetically unrelated. One (14.3%) of seven pairs of sequenced placental and infant samples was genetically related. In addition, infants born to primigravid mothers were more likely to be infected with P. falciparum (P < 0.001) than infants of secundigravid and multigravid mothers during the two years of follow-up. Infants of multigravid mothers acquired their first P. falciparum infection earlier than those of secundigravid and primigravid mothers (RR = 1.43). CONCLUSION: Plasmodium falciparum malaria parasites present on the placenta, as detected by PCR, are more likely to result in clinical disease (congenital malaria) in the infant during the first three months of life. However, the sequencing data call the strength of this association into question; the relationship between placental parasites and the first clinical episode therefore needs to be confirmed in larger studies.
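
The genotyping step described above lends itself to a simple illustration. Below is a minimal Python sketch of how paired placental and infant samples might be classified as genetically related when they share an msp2 allele; the allele labels, data structures and the "any shared allele" criterion are assumptions made for illustration, not the study's actual analysis pipeline.

```python
# Illustrative sketch only: classify placental/infant sample pairs as
# "genetically related" if the two samples share at least one msp2 allele.
# The study compared sequenced nested-PCR amplicons; the relatedness
# criterion and the allele labels below are assumptions, not its method.

def related(placental_alleles: set[str], infant_alleles: set[str]) -> bool:
    """Call a pair related if the samples share any msp2 allele."""
    return bool(placental_alleles & infant_alleles)

# Hypothetical example pairs (allele labels are made up).
pairs = [
    ({"FC27-300", "3D7-480"}, {"FC27-300"}),  # shared allele -> related
    ({"3D7-520"}, {"FC27-340"}),              # no shared allele -> unrelated
]

n_related = sum(related(p, i) for p, i in pairs)
print(f"{n_related}/{len(pairs)} pairs genetically related "
      f"({100 * n_related / len(pairs):.1f}%)")
```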

    Implications of land use change on the national terrestrial carbon budget of Georgia

    Background: Globally, the loss of forests now contributes almost 20% of carbon dioxide emissions to the atmosphere. There is an immediate need to reduce current rates of forest loss and the associated release of carbon dioxide, but for many areas of the world these rates are largely unknown. The Soviet Union contained a substantial part of the world's forests, and the fate of those forests and their effect on carbon dynamics remain unknown for many areas of the former Eastern Bloc. For Georgia, the political and economic transitions following independence in 1991 have been dramatic. In this paper we quantify rates of land use change and their effect on the terrestrial carbon budget of Georgia. A carbon book-keeping model traces changes in carbon stocks using historical and current rates of land use change. Landsat satellite images acquired circa 1990 and 2000 were analyzed to detect changes in forest cover since 1990. Results: The remote sensing analysis showed a modest forest loss, with approximately 0.8% of the forest cover having disappeared after 1990. Nevertheless, growth of Georgian forests still contributes a current national sink of about 0.3 Tg of carbon per year, which corresponds to 31% of the country's anthropogenic carbon emissions. Conclusions: We assume that the observed forest loss is mainly a result of illegal logging, but we have not found any evidence of large-scale clear-cutting. Instead, local harvesting of timber for household use is likely to be the underlying driver of the observed logging. The Georgian forests are currently a carbon sink and will remain so until about 2040 if the current rate of deforestation persists. Forest protection efforts, combined with economic growth, are essential for reducing the rate of deforestation and protecting the carbon sink provided by Georgian forests.
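
The book-keeping approach can be illustrated with a toy calculation. The Python sketch below decrements forest area at a constant clearing rate and compares the resulting committed emissions with regrowth uptake. Only the roughly 0.8% decadal forest loss and the roughly 0.3 Tg C per year national sink come from the abstract; the forest area, carbon density and per-year loss fraction are hypothetical placeholders, not the study's figures.

```python
# Minimal book-keeping sketch of forest-carbon accounting of the kind the
# paper describes. Parameter values marked "hypothetical" are placeholders,
# not the study's numbers.

FOREST_AREA_HA = 2_700_000        # hypothetical initial forest area (ha)
CARBON_DENSITY_T_PER_HA = 100.0   # hypothetical carbon stock per hectare (t C/ha)
ANNUAL_LOSS_FRACTION = 0.0008     # ~0.8% loss over ~10 years -> ~0.08% per year
REGROWTH_SINK_TG_PER_YR = 0.3     # national sink reported in the abstract (Tg C/yr)

def net_flux_tg(year_count: int) -> list[float]:
    """Net annual carbon flux in Tg C/yr; negative values indicate a net sink."""
    area = FOREST_AREA_HA
    fluxes = []
    for _ in range(year_count):
        cleared = area * ANNUAL_LOSS_FRACTION             # hectares cleared this year
        emission_tg = cleared * CARBON_DENSITY_T_PER_HA / 1e6  # tonnes -> Tg
        fluxes.append(emission_tg - REGROWTH_SINK_TG_PER_YR)
        area -= cleared
    return fluxes

# With these placeholder numbers the forest stays a modest net sink each year.
print(net_flux_tg(5))
```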

    Use of Zebrafish to Probe the Divergent Virulence Potentials and Toxin Requirements of Extraintestinal Pathogenic Escherichia coli

    Extraintestinal pathogenic E. coli (ExPEC) cause an array of diseases, including sepsis, neonatal meningitis, and urinary tract infections. Many putative virulence factors that might modulate ExPEC pathogenesis have been identified through sequencing efforts, epidemiology, and gene expression profiling, but few of these genes have been assigned clearly defined functional roles during infection. Using zebrafish embryos as surrogate hosts, we have developed a model system with the ability to resolve diverse virulence phenotypes and niche-specific restrictions among closely related ExPEC isolates during either localized or systemic infections. In side-by-side comparisons of prototypic ExPEC isolates, we observed an unexpectedly high degree of phenotypic diversity that is not readily apparent using more traditional animal hosts. In particular, the capacity of different ExPEC isolates to persist and multiply within the zebrafish host and cause disease was shown to be variably dependent upon two secreted toxins, α-hemolysin and cytotoxic necrotizing factor. Both of these toxins appear to function primarily in the neutralization of phagocytes, which are recruited in high numbers to sites of infection, where they act as an essential host defense against ExPEC as well as less virulent E. coli strains. These results establish zebrafish as a valuable tool for the elucidation and functional analysis of both ExPEC virulence factors and host defense mechanisms.

    Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    Background: Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access to care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision-making process. These policy decisions have major impacts on society, and high personal and financial costs are associated with them. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion: We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with naturally limited, faulty, and biased decision-making processes. A cognitive information-processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than by the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society on the basis of valid and reliable research evidence. Summary: In this era of increasing adoption of evidence-based healthcare models, the rational-choice, utility-maximizing assumptions in EBDM and EBPM must be critically evaluated to ensure effective, high-quality health policy decisions. The cognitive information-processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide initial suggestions about how the EBDM/EBPM process can be improved.