
    Using administrative data to look at changes in the level and distribution of out-of-pocket medical expenditure: An example using Medicare data from Australia.

    OBJECTIVES: Australia's universal health insurance system, Medicare, generates very large amounts of data on out-of-pocket expenditure (OOPE), but only highly aggregated statistics are routinely published. Our primary purpose is to develop indices from the Medicare administrative data to quantify changes in the level and distribution of OOPE on out-of-hospital medical services over time. METHODS: Data were obtained from the Australian Hypertension and Absolute Risk Study, which involved patients aged 55 years and over (n=2653). Socio-economic and clinical information was collected and linked to Medicare records over a five-year period from March 2008. The Fisher price and quantity indices were used to evaluate year-to-year changes in OOPE. The relative concentration index was used to evaluate the distribution of OOPE across socio-economic strata. RESULTS: Our price index indicates that overall OOPE was not rising faster than inflation, but there was considerable variation across different types of services (e.g. OOPE on professional attendances rose by 20% over a five-year period, while all other items fell by around 14%). Concentration indices, adjusted for demographic factors and clinical need, indicate that OOPE tends to be higher among those on higher incomes. CONCLUSIONS: A major challenge in utilizing large administrative data sets is to develop reliable and easily interpretable statistics for policy makers. Price, quantity and concentration indices represent statistics that move us beyond the average
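    As a minimal sketch of the index methodology the abstract names: the Fisher index is the geometric mean of the Laspeyres index (base-period weights) and the Paasche index (comparison-period weights). The function and the toy figures below are illustrative only, not taken from the study's data.

    ```python
    from math import sqrt

    def fisher_indices(p0, q0, p1, q1):
        """Fisher price and quantity indices between two periods.

        p0, q0: per-item prices and quantities in the base period
        p1, q1: per-item prices and quantities in the comparison period
        """
        v00 = sum(p * q for p, q in zip(p0, q0))  # base prices, base quantities
        v10 = sum(p * q for p, q in zip(p1, q0))  # new prices, base quantities
        v01 = sum(p * q for p, q in zip(p0, q1))  # base prices, new quantities
        v11 = sum(p * q for p, q in zip(p1, q1))  # new prices, new quantities
        price_index = sqrt((v10 / v00) * (v11 / v01))     # Laspeyres x Paasche (price)
        quantity_index = sqrt((v01 / v00) * (v11 / v10))  # Laspeyres x Paasche (quantity)
        return price_index, quantity_index

    # Hypothetical example: mean OOPE per service and services used, two service types
    price_idx, qty_idx = fisher_indices(
        p0=[10.0, 25.0], q0=[4, 2],   # year 0
        p1=[12.0, 24.0], q1=[5, 2],   # year 1
    )
    ```

    A useful property of this pairing is that the price and quantity indices multiply to the total expenditure ratio between the two periods, so the change in overall OOPE decomposes cleanly into a price component and a utilisation component.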

    Event Rates, Hospital Utilization, and Costs Associated with Major Complications of Diabetes: A Multicountry Comparative Analysis

    Philip Clarke and colleagues examined patient-level data for over 11,000 participants with type 2 diabetes from 20 countries and found that major complications of diabetes significantly increased hospital use and costs across settings

    The James Lind Initiative: books, websites and databases to promote critical thinking about treatment claims, 2003 to 2018

    Abstract Background The James Lind Initiative (JLI) was a work programme inaugurated by Iain Chalmers and Patricia Atkinson to press for better research for better health care. It ran between 2003 and 2018, when Iain Chalmers retired. During the 15 years of its existence, the JLI developed three strands of work in collaboration with the authors of this paper, and with others. Work themes The first work strand involved developing a process for use by patients, carers and clinicians to identify shared priorities for research – the James Lind Alliance. The second strand was a series of articles, meetings, prizes and other developments to raise awareness of the massive amounts of avoidable waste in research, and of ways of reducing it. The third strand involved using a variety of approaches to promote better public and professional understanding of the importance of research in clinical practice and public health. JLI work on the first two themes has been addressed in previously published reports. This paper summarises JLI involvement during the 15 years of its existence in giving talks, convening workshops, writing books, and creating websites and databases to promote critical thinking about treatment claims. Conclusion During its 15-year life, the James Lind Initiative worked collaboratively with others to create free teaching and learning resources to help children and adults learn how to recognise untrustworthy claims about the effects of treatments. These resources have been translated into more than twenty languages, but much more could be done to support their uptake and wider use

    Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis

    Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs

    How do we create, and improve, the evidence base? 

    Providing best clinical care involves using the best available evidence of effectiveness to inform treatment decisions. Producing this evidence begins with trials and continues through synthesis of their findings into comprehensible, usable guidelines for clinicians and patients at the point of care. However, there is enormous wastage in this evidence production process, with less than 50% of the published biomedical literature considered sufficient in conduct and reporting to be fit for purpose. Over the last 30 years, independent collaborative initiatives have evolved to optimise the evidence to improve patient care. Each of these collaborations recommends how to improve research quality in a small way at one of the many stages of the evidence production and distillation process. Viewed from an 'aggregation of marginal gains' perspective, these small enhancements accumulate, greatly improving the final product of 'best available evidence'. The myriad of tools to reduce research quality leakage and evidence loss should be routinely used by all those with responsibility for ensuring that research benefits patients, that is, those who pay for research (funders), produce it (researchers), take part in it (patients/participants) and use it (clinicians, policy makers and service commissioners)

    The citation of relevant systematic reviews and randomised trials in published reports of trial protocols

    Background It is important that planned randomised trials are justified and placed in the context of the available evidence. The SPIRIT guidelines for reporting clinical trial protocols recommend that a recent and relevant systematic review should be included. The aim of this study was to assess the use of the existing evidence to justify trial conduct. Methods Protocols of randomised trials published over a 1-month period (December 2015) indexed in PubMed were obtained. Data on trial characteristics relating to location, design, funding, conflict of interest and type of evidence included for trial justification were extracted in duplicate and independently by two investigators. The frequency of citation of previous research, including relevant systematic reviews and randomised trials, was assessed. Results Overall, 101 protocols for randomised controlled trials (RCTs) were identified. Most proposed trials were parallel-group (n = 74; 73.3%). Reference to an earlier systematic review with additional randomised trials was found in 9.9% (n = 10) of protocols and without additional trials in 30.7% (n = 31), while reference was made to randomised trials in isolation in 21.8% (n = 22). Explicit justification for the proposed randomised trial on the basis of being the first to address the research question was made in 17.8% (n = 18) of protocols. A randomised controlled trial was not cited in 10.9% (95% CI: 5.6, 18.7) (n = 11), while in 8.9% (95% CI: 4.2, 16.2) (n = 9) of the protocols a systematic review was cited but did not inform trial design. Conclusions A relatively high percentage of randomised trial protocols cite prior randomised trials, systematic reviews or both. However, improvements are required to ensure that it is explicit that clinical trials are justified and shaped by contemporary best evidence

    Methodology in conducting a systematic review of systematic reviews of healthcare interventions

    Abstract Background Hundreds of studies of maternity care interventions have been published, too many for most people involved in providing maternity care to identify and consider when making decisions. It became apparent that systematic reviews of individual studies were required to appraise, summarise and bring together existing studies in a single place. However, decision makers are increasingly faced with a plethora of such reviews, which are likely to be of variable quality and scope, with more than one review of important topics. Systematic reviews (or overviews) of reviews are a logical and appropriate next step, allowing the findings of separate reviews to be compared and contrasted, providing clinical decision makers with the evidence they need. Methods The methods used to identify and appraise published and unpublished reviews systematically, drawing on our experiences and good practice in the conduct and reporting of systematic reviews, are described. The process of identifying and appraising all published reviews allows researchers to describe the quality of this evidence base, summarise and compare the reviews' conclusions and discuss the strength of these conclusions. Results Methodological challenges and possible solutions are described within the context of (i) sources, (ii) study selection, (iii) quality assessment (i.e. the extent of searching undertaken for the reviews, description of study selection and inclusion criteria, comparability of included studies, assessment of publication bias and assessment of heterogeneity), (iv) presentation of results, and (v) implications for practice and research. Conclusion Conducting a systematic review of reviews highlights the usefulness of bringing together a summary of reviews in one place, where there is more than one review on an important topic. The methods described here should help clinicians to review and appraise published reviews systematically, and aid evidence-based clinical decision-making