
    Reforming the Cancer Drug Fund: focus on drugs that might be shown to be cost effective

    The Cancer Drug Fund was originally conceived as a temporary measure, until value based pricing for drugs was introduced, to give NHS cancer patients access to drugs not approved by NICE. Spending on these drugs rose from less than the £50m (€63m; $79m) budgeted for the first year in 2010-11 to well over £200m in 2013-14, and the budget for the scheme, now extended for a further two years, will reach £280m by 2016.[1] The recent changes to the fund recognise the impossibility, within any sensible budget limit, of providing all the new cancer drugs that offer possible benefit to patients. Given the failure to introduce value based pricing, more radical changes to the working of the fund are needed so that it deals with the underlying problem of inadequate information on the effectiveness and cost effectiveness of new cancer drugs when used in the NHS.

    A Proton Magnetic Resonance Study of the Association of Lysozyme with Monosaccharide Inhibitors

    It has been shown that the acetamido methyl protons of N-acetyl-d-glucosamine undergo a chemical shift to higher fields in their proton magnetic resonance spectrum when the inhibitor is bound to lysozyme. The observed chemical shift in the presence of the enzyme is different for the α- and β-anomeric forms of 2-acetamido-2-deoxy-d-glucopyranose, indicating either a difference in the affinity of the anomeric forms for lysozyme or different magnetic environments for the methyl protons in their enzyme-bound state. That the α- and β-anomeric forms of GlcNAc bind to lysozyme in a competitive fashion was indicated by observing the proton magnetic resonance spectra in the presence of 2-acetamido-d3-2-deoxy-α-d-glucopyranose. The methyl glycosides, methyl-α-GlcNAc and methyl-β-GlcNAc, were also shown to bind competitively with both anomers of GlcNAc. Quantitative analysis of the chemical shift data observed for the association of GlcNAc with lysozyme was complicated by the mutarotation of GlcNAc between its α- and β-anomeric forms. However, in the case of the methyl glycosides, where the anomeric configuration is fixed, it was possible to analyze the chemical shift data in a straightforward manner, and the dissociation constant as well as the chemical shift of the acetamido methyl protons of the enzyme-inhibitor complex was determined for both anomers. The results indicate that the two anomers of methyl-GlcNAc bind to lysozyme with slightly different affinities but that the acetamido methyl groups of both anomers experience identical magnetic environments in the enzyme-inhibitor complex.
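
The quantitative analysis described in this abstract can be illustrated with a small numeric sketch. Assuming a 1:1 enzyme-inhibitor complex in fast exchange on the NMR timescale, the observed methyl shift is the population-weighted average of the free and bound shifts, and the dissociation constant can be recovered by fitting a titration. All concentrations, shift values, and the `fit_kd` grid below are illustrative placeholders, not values from the paper.

```python
import math

def bound_inhibitor(E0, I0, Kd):
    # [EI] from the 1:1 binding quadratic:
    # [EI]^2 - (E0 + I0 + Kd)[EI] + E0*I0 = 0  (smaller root is physical)
    b = E0 + I0 + Kd
    return (b - math.sqrt(b * b - 4.0 * E0 * I0)) / 2.0

def observed_shift(E0, I0, Kd, d_free, d_bound):
    # fast exchange: population-weighted average of free and bound shifts
    f_bound = bound_inhibitor(E0, I0, Kd) / I0
    return d_free + f_bound * (d_bound - d_free)

def fit_kd(E0, titration, d_free, d_bound, grid):
    # pick the Kd on a candidate grid that minimizes the squared error
    # between modeled and measured shifts over the titration points
    def sse(Kd):
        return sum((observed_shift(E0, I0, Kd, d_free, d_bound) - d) ** 2
                   for I0, d in titration)
    return min(grid, key=sse)
```

With a saturating excess of enzyme the observed shift approaches the bound-state shift, and with very weak binding it approaches the free-state shift, which is the limiting behavior the fit exploits.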

    Rugby World Cup 2019 injury surveillance study

    Background: Full contact team sports, such as rugby union, have high incidences of injury. Injury surveillance studies underpin player welfare programmes in rugby union. Objective: To determine the incidence, severity, nature and causes of injuries sustained during the Rugby World Cup 2019. Methods: A prospective, whole-population study following the definitions and procedures recommended in the consensus statement for epidemiologic studies in rugby union. Output measures included players’ age (years), stature (cm), body mass (kg), playing position, and group-level incidence (injuries/1000 player-hours), severity (days absence), injury burden (days absence/1000 player-hours), location (%), type (%) and inciting event (%) of injuries. Results: Overall incidences of injury were 79.4 match injuries/1000 player-match-hours (95% CI: 67.4 to 93.6) and 1.5 training injuries/1000 player-training-hours (95% CI: 1.0 to 2.3). The overall mean severity of injury was 28.9 (95% CI: 20.0 to 37.8) days absence during matches and 14.8 (95% CI: 4.1 to 25.5) days absence during training. The most common locations and types of match injuries were head/face (22.4%), posterior thigh (12.6%), ligament sprain (21.7%) and muscle strain (20.3%); the ankle (24.0%), posterior thigh (16.0%), muscle strain (44.0%) and ligament sprain (16.0%) were the most common locations and types of injuries during training. Tackling (28.7%), collisions (16.9%) and running (16.9%) were responsible for most match injuries, and non-contact (36.0%) and contact (32.0%) rugby skills activities for most training injuries. Conclusion: The incidence, severity, nature and inciting events associated with match and training injuries at Rugby World Cup 2019 were similar to those reported for Rugby World Cups 2007, 2011 and 2015. Keywords: Rugby World Cup, injury incidence, injury severity, injury burden, injury risk
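
The headline incidence figure can be reproduced with a short calculation. A minimal sketch, assuming Poisson-distributed injury counts and a log-normal approximation for the rate's 95% CI: roughly 143 match injuries over roughly 1800 player-match-hours (45 matches × 30 players × 80 minutes) is consistent with the reported 79.4 (67.4 to 93.6). Those counts and exposure hours are back-calculated assumptions for illustration, not figures taken from the study.

```python
import math

def incidence_per_1000h(n_injuries, exposure_hours):
    """Injury incidence per 1000 player-hours with an approximate 95% CI.

    Assumes the injury count is Poisson distributed and uses a
    log-normal approximation for the confidence interval of the rate.
    """
    rate = 1000.0 * n_injuries / exposure_hours
    se_log = 1.0 / math.sqrt(n_injuries)  # SE of log(count) ~ 1/sqrt(n)
    return (rate,
            rate * math.exp(-1.96 * se_log),
            rate * math.exp(1.96 * se_log))
```

For small counts (such as the 25 training injuries implied by the training-injury percentages) an exact Poisson interval would be preferable to this approximation.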

    Models and applications for measuring the impact of health research: Update of a systematic review for the health technology assessment programme

    This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review. Objectives: (1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme. Data sources: We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014. Review methods: This narrative systematic literature review comprised an update, extension and analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, in August 2014 through to March 2015. Results: The literature on impact assessment has much expanded. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal type approaches included constructionist, realist, critical and performative. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but need to be evaluated. 
The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers. Discussion: The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence’s remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities’ research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, such as in the REF, but also more generally, tends to be biased towards high-impact rather than low-impact stories. Experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact. Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established. Limitations: There were limitations in how far we could address challenges that faced us as we extended the focus beyond that of the 2007 review, and well beyond a narrow focus just on the HTA programme. Conclusions: Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund. They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines. Funding: The National Institute for Health Research HTA programme.

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of selecting between the data being consistent with instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data from ground based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criteria are used for model selection. Future space based detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources is the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches. Comment: 11 pages, 6 figures
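
Of the methods this abstract lists, the Schwarz-Bayes Information Criterion is the simplest to sketch: it approximates the log evidence from the maximized likelihood and a per-parameter penalty, so the Bayes factor between a waveform model with frequency evolution (one extra parameter) and one without follows directly. The parameter counts and log-likelihood values below are illustrative, not drawn from the paper.

```python
import math

def bic(max_log_likelihood, n_params, n_data):
    # Schwarz criterion: each extra parameter costs 0.5*ln(N) in log evidence
    return -2.0 * max_log_likelihood + n_params * math.log(n_data)

def approx_bayes_factor(logL1, k1, logL0, k0, n_data):
    # BIC-based approximation to the Bayes factor for model 1 over model 0:
    # exp(-(BIC1 - BIC0)/2) = exp(logL1 - logL0) * N^(-(k1 - k0)/2)
    return math.exp(-0.5 * (bic(logL1, k1, n_data) - bic(logL0, k0, n_data)))
```

With equal fits, the model carrying the extra frequency-derivative parameter is penalized by a factor of 1/sqrt(N); the likelihood gain from modeling real frequency evolution must overcome that penalty for the richer model to be favored.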

    Early Universe Constraints on Time Variation of Fundamental Constants

    We study the time variation of fundamental constants in the early Universe. Using data from primordial light nuclei abundances, the CMB and the 2dFGRS power spectrum, we put constraints on the time variation of the fine structure constant α and the Higgs vacuum expectation value ⟨v⟩, without assuming any theoretical framework. A variation in ⟨v⟩ leads to a variation in the electron mass, among other effects. Along the same line, we study the variation of α and the electron mass m_e. In a purely phenomenological fashion, we derive a relationship between both variations. Comment: 18 pages, 12 figures, accepted for publication in Physical Review
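
The link between a varying Higgs vacuum expectation value and the electron mass follows from the Standard Model Yukawa relation. A minimal sketch, assuming the electron Yukawa coupling y_e is held fixed (a simplification the paper's purely phenomenological analysis does not need to make):

```latex
m_e = \frac{y_e \langle v \rangle}{\sqrt{2}}
\quad\Longrightarrow\quad
\frac{\delta m_e}{m_e}
  = \frac{\delta y_e}{y_e} + \frac{\delta \langle v \rangle}{\langle v \rangle}
\;\xrightarrow{\;\delta y_e = 0\;}\;
\frac{\delta m_e}{m_e} = \frac{\delta \langle v \rangle}{\langle v \rangle}.
```

Under this assumption a fractional shift in ⟨v⟩ translates one-to-one into a fractional shift in m_e, which is why constraints on the electron mass constrain the vacuum expectation value.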

    A Bayesian Approach to the Detection Problem in Gravitational Wave Astronomy

    The analysis of data from gravitational wave detectors can be divided into three phases: search, characterization, and evaluation. The evaluation of the detection - determining whether a candidate event is astrophysical in origin or some artifact created by instrument noise - is a crucial step in the analysis. The on-going analyses of data from ground based detectors employ a frequentist approach to the detection problem. A detection statistic is chosen, for which background levels and detection efficiencies are estimated from Monte Carlo studies. This approach frames the detection problem in terms of an infinite collection of trials, with the actual measurement corresponding to some realization of this hypothetical set. Here we explore an alternative, Bayesian approach to the detection problem that considers prior information and the actual data in hand. Our particular focus is on the computational techniques used to implement the Bayesian analysis. We find that the Parallel Tempered Markov Chain Monte Carlo (PTMCMC) algorithm is able to address all three phases of the analysis in a coherent framework. The signals are found by locating the posterior modes, the model parameters are characterized by mapping out the joint posterior distribution, and finally, the model evidence is computed by thermodynamic integration. As a demonstration, we consider the detection problem of selecting between models describing the data as instrument noise, or instrument noise plus the signal from a single compact galactic binary. The evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found to be in close agreement with those computed using a Reversible Jump Markov Chain Monte Carlo algorithm. Comment: 19 pages, 12 figures, revised to address referee's comments
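
The thermodynamic-integration step this abstract mentions can be sketched compactly: PTMCMC runs chains at inverse temperatures beta between 0 and 1, and the log evidence is the integral of the per-chain average log-likelihood over beta, here approximated by the trapezoid rule. The temperature ladder and averages a caller supplies would come from a real PTMCMC run; none are taken from the paper.

```python
def log_evidence(betas, mean_log_likelihoods):
    """Thermodynamic integration: ln Z = integral over beta in [0, 1]
    of the average log-likelihood at inverse temperature beta.

    `betas` are the chain inverse temperatures and `mean_log_likelihoods`
    the corresponding per-chain averages of ln L; the integral is
    approximated with the trapezoid rule over the sorted ladder.
    """
    pairs = sorted(zip(betas, mean_log_likelihoods))
    return sum(0.5 * (e0 + e1) * (b1 - b0)
               for (b0, e0), (b1, e1) in zip(pairs, pairs[1:]))
```

Running this for two competing models (noise only, and noise plus signal) gives the Bayes factor as exp(lnZ1 - lnZ0), which is the quantity the paper cross-checks against RJMCMC.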