    How do we create, and improve, the evidence base? 

    Providing the best clinical care involves using the best available evidence of effectiveness to inform treatment decisions. Producing this evidence begins with trials and continues through synthesis of their findings towards incorporation of evidence within comprehensible, usable guidelines for clinicians and patients at the point of care. However, there is enormous wastage in this evidence production process: less than 50% of the published biomedical literature is considered sufficient in conduct and reporting to be fit for purpose. Over the last 30 years, independent collaborative initiatives have evolved to optimise the evidence base and improve patient care. Each of these collaborations recommends how to improve research quality in a small way at a different stage of the evidence production and distillation process. Viewed from an 'aggregation of marginal gains' perspective, these minimal improvements at each stage accumulate, greatly improving the final product of 'best available evidence'. The myriad tools for reducing research quality leakage and evidence loss should be routinely used by all those responsible for ensuring that research benefits patients: those who pay for research (funders), produce it (researchers), take part in it (patients/participants), and use it (clinicians, policy makers and service commissioners).

    Quasi-experimental study designs series—paper 4: uses and value

    Quasi-experimental studies are increasingly used to establish causal relationships in epidemiology and health systems research. They offer important opportunities to increase and improve evidence on causal effects: (i) they can generate causal evidence when randomized controlled trials are impossible; (ii) they typically generate causal evidence with a high degree of external validity; (iii) they avoid the threats to internal validity that arise when participants in non-blinded experiments change their behavior in response to assignment to either the intervention or control arm (such as compensatory rivalry or resentful demoralization); (iv) they are often well suited to generating causal evidence on the long-term health outcomes of an intervention, as well as non-health outcomes such as economic and social consequences; and (v) they can often generate evidence faster and at lower cost than experiments and other intervention studies.

    Immunogenicity of Fractional Doses of Tetravalent A/C/Y/W135 Meningococcal Polysaccharide Vaccine: Results from a Randomized Non-Inferiority Controlled Trial in Uganda

    Meningitis is an infection of the lining of the brain and spinal cord that can cause high fever, blood poisoning, and brain damage, and results in death in up to 10% of cases. Epidemics of meningitis occur almost every year in parts of sub-Saharan Africa, throughout a high-burden area spanning Senegal to Ethiopia dubbed the “Meningitis Belt.” Most epidemics in Africa are caused by Neisseria meningitidis (mostly serogroups A and W135). Mass vaccination campaigns attempt to control epidemics by administering meningococcal vaccines targeted against these serogroups, among others. However, these vaccines are currently in global shortage. We studied fractional (1/5 and 1/10) doses of a licensed vaccine to assess their non-inferiority compared with the normal full dose. In a randomized trial in Uganda, we found that the immune response and safety of a 1/5 dose were comparable to the full dose for three serogroups (A, Y, W135), though not for a fourth (C). In light of current shortages of meningococcal vaccines and their importance in fighting meningitis epidemics around the world, we suggest that fractional doses be considered for mass vaccination campaigns.
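    The non-inferiority logic used in trials like this one can be sketched in code. The following is a minimal, generic illustration of a two-proportion non-inferiority check using a normal-approximation confidence interval; the function name, margin, and all numbers are hypothetical and are not taken from the Uganda trial.

```python
# Hedged sketch: generic non-inferiority check for two response rates.
# A fractional dose is declared non-inferior if the lower bound of the
# 95% CI for (fractional rate - full rate) lies above -margin.
# All values below are illustrative, not trial data.
from math import sqrt

def noninferior(x_frac, n_frac, x_full, n_full, margin=0.10, z=1.96):
    """True if the fractional-dose response rate is non-inferior to the
    full-dose rate within the given absolute margin."""
    p1 = x_frac / n_frac                      # response rate, fractional dose
    p2 = x_full / n_full                      # response rate, full dose
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n_frac + p2 * (1 - p2) / n_full)
    lower = diff - z * se                     # lower bound of the 95% CI
    return lower > -margin

# Hypothetical example: 92/100 responders (fractional) vs 90/100 (full)
print(noninferior(92, 100, 90, 100))
```

    In practice, trials of this kind prespecify the margin and often use more refined interval methods, but the decision rule — compare the confidence interval's lower bound against the negative margin — is the same.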

    Common characteristics of open source software development and applicability for drug discovery: a systematic review

    Background: Innovation through an open source model has proven successful for software development. This success has led many to speculate whether open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in applying open source innovation in other contexts, with an emphasis on drug discovery. Methods: A systematic review was performed by searching relevant multidisciplinary databases to extract empirical research on the common characteristics and barriers of initiating and maintaining an open source software development project. Results: Common characteristics of open source software development pertinent to open source drug discovery were extracted. The characteristics were grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. Conclusions: We believe that the open source model is viable for drug discovery, although it is unlikely to exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, in particular how the need for laboratories and physical goods, as well as the effect of patents, will impact the model.

    A roadmap for sustainably governing the global antimicrobial commons

    Antimicrobials are needed to treat deadly infections, enable life-saving medical procedures, and manage disease in food production. But antimicrobials come with a trade-off: their use accelerates antimicrobial resistance (AMR), which diminishes the future effectiveness of these medicines. This trade-off makes “antimicrobial effectiveness” a precious global common-pool resource that must be collectively protected.[1] Yet antimicrobials have been used inappropriately for decades. In too many circumstances, antimicrobials are deployed to compensate for inadequate infection prevention and control (IPC) in both human health and food production, instead of implementing water, sanitation, and hygiene (WASH) and IPC measures such as preventing hospital overcrowding and ensuring good equipment sterilisation practices.[2] In the process, this precious resource has been jeopardised.[3]