
    Collaboration with the mesh industry: who needs who?


    Comparative Effectiveness Research: An Empirical Study of Trials Registered in ClinicalTrials.gov

    Background: The $1.1 billion investment in comparative effectiveness research will reshape the evidence base supporting decisions about treatment effectiveness, safety, and cost. Defining the current prevalence and characteristics of comparative effectiveness (CE) research will enable future assessments of the impact of this program. Methods: We conducted an observational study of clinical trials addressing priority research topics defined by the Institute of Medicine and conducted in the US between 2007 and 2010. Trials were identified in ClinicalTrials.gov. Main outcome measures were the prevalence of comparative effectiveness research, the nature of the comparators selected, funding sources, and the impact of these factors on results. Results: 231 (22.3%; 95% CI, 19.8%–24.9%) studies were CE studies and 804 (77.7%; 95% CI, 75.1%–80.2%) were non-CE studies, with 379 (36.6%; 95% CI, 33.7%–39.6%) employing a placebo control and 425 (41.1%; 95% CI, 38.1%–44.1%) employing no control. The most common treatments examined in CE studies were drug interventions (37.2%), behavioral interventions (28.6%), and procedures (15.6%). Study findings were favorable for the experimental treatment in 34.8% of CE studies and in more than twice as many (78.6%) non-CE studies (P<0.001). CE studies were more likely to receive government funding (P = 0.003) and less likely to receive industry funding (P = 0.01), with 71.8% of CE studies primarily funded by a noncommercial source. The types of interventions studied differed by funding source, with 95.4% of industry trials studying a drug or device. In addition, industry-funded CE studies were associated with the fewest pediatric subjects (P<0.001), the largest anticipated sample size (P<0.001), and the shortest study duration (P<0.001). Conclusions: In this sample of studies examining high-priority areas for CE research, fewer than a quarter are CE studies, and the majority are supported by government and nonprofit sources. The low prevalence of CE research holds across CE studies with a broad array of interventions and characteristics. Funding: National Library of Medicine (U.S.) (5G08LM009778); National Institutes of Health (U.S.)
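
    The reported confidence intervals are consistent with a simple normal-approximation (Wald) interval for a proportion; a minimal sketch in Python, assuming only the counts given in the abstract (231 CE studies and 804 non-CE studies, n = 1,035), reproduces them:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Counts taken from the abstract: 231 CE and 804 non-CE studies (n = 1,035 trials).
for label, count in [("CE studies", 231), ("non-CE studies", 804),
                     ("placebo control", 379), ("no control", 425)]:
    p, lo, hi = wald_ci(count, 1035)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

    Running this yields the intervals quoted above (e.g. 22.3%, 95% CI 19.8%–24.9% for CE studies), although the authors may have used a different interval method.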

    Oral cancer treatment costs in Greece and the effect of advanced disease

    BACKGROUND: The main purpose of the study was to quantify the direct costs of oral cancer treatment to the healthcare system of Greece. Another aim was to identify factors that affect costs and potential cost-reduction items. More specifically, we examined the relationship between stage of disease, modality of treatment, and total direct costs. METHODS: The medical records and clinic files of the Oral and Maxillofacial Clinic of the Athens General Hospital "Genimatas" were abstracted to investigate clinical treatment characteristics, including length of hospitalization, modes of treatment, and stage of disease. Records of 95 patients with oral squamous cell carcinoma (OSCC), with at least six months of follow-up, were examined. These clinical data were then used to calculate actual direct costs, based on 2001 market values. RESULTS: The mean total direct cost of OSCC treatment was estimated at €8,450, or approximately US$7,450. Costs depended on the stage of the disease, with significant increases in stages III and IV compared with stages I and II (p < 0.05). Multi-modality treatment, applied mainly to patients in stages III and IV, was the principal factor driving this increase. Disease stage was also associated with the total duration of hospitalization (p < 0.05). CONCLUSIONS: The clinical management of advanced oral cancer is strongly associated with higher costs. Although the ideal would be to prevent cancer, the combination of high-risk screening, early diagnosis, and early treatment seems the most efficient way to reduce costs and, most importantly, prolong life.
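
    The abstract does not state which statistical test underlies the stage comparison; as a hedged illustration only, the sketch below applies a Mann-Whitney U test to entirely hypothetical per-patient cost figures grouped into early (I/II) versus advanced (III/IV) stage:

```python
# Hypothetical illustration only: the per-patient costs below are invented, and the
# choice of test is an assumption rather than the method reported in the study.
from scipy.stats import mannwhitneyu

early_stage_costs = [4200, 5100, 4800, 6000, 5500, 4700]          # stages I-II, EUR (made up)
advanced_stage_costs = [9800, 12500, 11200, 14000, 10600, 13300]  # stages III-IV, EUR (made up)

stat, p_value = mannwhitneyu(early_stage_costs, advanced_stage_costs, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # p < 0.05 would indicate costs differ by stage
```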

    Resource Modelling: The Missing Piece of the HTA Jigsaw?

    Within health technology assessment (HTA), cost-effectiveness analysis and budget impact analysis have been broadly accepted as important components of decision making. However, whilst they address efficiency and affordability, the issues of implementation and feasibility have been largely ignored. HTA commonly takes place within a deliberative framework that captures issues of implementation and feasibility in a qualitative manner. We argue that only through a formal quantitative assessment of resource constraints can these issues be fully addressed. This paper therefore argues the need for resource modelling to be considered explicitly in HTA. First, economic evaluation and budget impact models are described, along with their limitations for evaluating feasibility. Next, resource modelling is defined and its usefulness described, with examples of resource modelling from the literature. Finally, the important issues to consider when undertaking resource modelling are discussed, before recommendations are set out for the use of resource modelling in HTA.
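
    To make the distinction concrete, the sketch below contrasts a budget-impact calculation with a simple resource-constraint check; every figure and variable name is hypothetical and is not drawn from the paper:

```python
# Hypothetical resource-modelling sketch; all figures below are invented for illustration.

eligible_patients = 2000        # patients per year who could receive the new technology
cost_per_patient = 1500.0       # EUR per treated patient (acquisition + administration)
annual_budget = 3_500_000.0     # EUR available for the technology

sessions_per_patient = 4        # delivery sessions each patient requires
session_minutes = 45            # specialist time per session
nurse_hours_available = 4500    # specialist nurse hours per year at the providing centres

# Budget impact view: how many patients can be funded?
affordable_patients = int(annual_budget // cost_per_patient)

# Resource constraint view: how many patients can actually be delivered?
deliverable_patients = int(nurse_hours_available * 60 // (sessions_per_patient * session_minutes))

feasible_patients = min(eligible_patients, affordable_patients, deliverable_patients)
print(f"Affordable: {affordable_patients}, deliverable: {deliverable_patients}, "
      f"feasible uptake: {feasible_patients} of {eligible_patients} eligible patients")
```

    With these made-up numbers the binding constraint is nurse time (1,500 patients) rather than budget (2,333 patients), which is exactly the kind of feasibility gap that cost-effectiveness and budget impact analyses alone would not reveal.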

    Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence

    Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as to offer quality assurance. Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric+, indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric+ features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. Conclusions: A computer-assisted decision-support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration. © 2014 Bekhuis et al.
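
    As a rough illustration of the kind of pipeline described (not the authors' actual system), the sketch below trains a multinomial naive Bayes classifier on bag-of-words features from a handful of fabricated citation strings and scores it with the recall-weighted F3 measure:

```python
# Toy sketch of a citation-screening classifier; the corpus and labels are fabricated,
# and the pipeline choices are assumptions rather than the authors' exact configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import fbeta_score
from sklearn.model_selection import train_test_split

citations = [
    "randomized trial of drug X versus placebo in adults",
    "systematic review of behavioral interventions for smoking",
    "case report of a rare adverse event after surgery",
    "cohort study of drug X safety in children",
    "editorial comment on screening guidelines",
    "randomized controlled trial comparing device A and device B",
]
eligible = [1, 0, 0, 1, 0, 1]  # made-up consensus screening judgments

# Bag-of-words features, loosely analogous to the alphanumeric+ feature set.
X = CountVectorizer(lowercase=True).fit_transform(citations)

# Split into halves to mimic training on one data half and testing on the other.
X_train, X_test, y_train, y_test = train_test_split(
    X, eligible, test_size=0.5, random_state=0, stratify=eligible)

clf = MultinomialNB()
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# F3 weights recall heavily, reflecting the cost of missing an eligible study.
print("F3 =", fbeta_score(y_test, pred, beta=3, zero_division=0))
```

    With a realistic corpus the same F-beta weighting (beta = 3) favors recall, matching the screening goal of not missing eligible studies; the actual feature sets and classifier settings in the study differ from this toy setup.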