
    A comparison of some methods for detection of safety signals in randomised controlled trials

    The occurrence, severity, and duration of patient adverse events are routinely recorded during randomised controlled clinical trials. This data may be used by a trial’s Data Safety Monitoring Committee to make decisions regarding the safety of treatments, and in some cases may lead to the discontinuation of a trial if real safety issues are detected. Consequently, the analysis of this data is a very important part of the conduct of any trial. There are many different types of adverse event, and the statistical analysis of this data must take multiple comparison issues into account when performing statistical tests. Unadjusted tests may lead to large numbers of false positive results, but simple adjustments are generally too conservative and risk compromising the power to detect important treatment differences. There are a number of different statistical approaches to analysing safety data, with general error-controlling procedures, recurrent event analysis, survival analysis and other direct modelling approaches (both Bayesian and Frequentist) all being used. Recently, a variety of classical (Mehrotra and Adewale, 2012) and Bayesian (Berry and Berry, 2004; DuMouchel, 2010) methods have been proposed to address this problem. These methods use possible relationships or groupings of the adverse events. We implement and compare, by way of a simulation study of grouped data, some of these more recent approaches to adverse event analysis, and investigate whether the use of a common underlying model which groups adverse events by body-system or System Organ Class is useful in detecting adverse events associated with treatments. All of the grouped methods detect more true significant effects than the Benjamini-Hochberg or Bonferroni procedures for this type of data. In particular, the body-system grouping described by Berry and Berry (2004) looks to be a worthwhile structure to consider when modelling adverse event data.
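
    The standard procedures named in this abstract can be made concrete with a short sketch. The Python snippet below (hypothetical p-values, not data from the study) contrasts the Bonferroni correction with the Benjamini-Hochberg step-up procedure; at the same nominal level, Benjamini-Hochberg typically flags more adverse events because it controls the false discovery rate rather than the family-wise error rate.

        import numpy as np

        def bonferroni(pvals, alpha=0.05):
            # Flag adverse events whose p-value survives the Bonferroni correction.
            pvals = np.asarray(pvals)
            return pvals <= alpha / len(pvals)

        def benjamini_hochberg(pvals, alpha=0.05):
            # Flag adverse events using the Benjamini-Hochberg step-up procedure (FDR control).
            pvals = np.asarray(pvals)
            m = len(pvals)
            order = np.argsort(pvals)
            ranked = pvals[order]
            # Find the largest k with p_(k) <= (k / m) * alpha and reject hypotheses 1..k.
            below = ranked <= (np.arange(1, m + 1) / m) * alpha
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.max(np.nonzero(below)[0])
                reject[order[:k + 1]] = True
            return reject

        # Hypothetical per-adverse-event p-values from unadjusted two-sample tests.
        p = [0.001, 0.012, 0.020, 0.040, 0.300, 0.750]
        print(bonferroni(p))           # only the smallest p-value survives
        print(benjamini_hochberg(p))   # three events flagged at the same alpha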

    Detection of safety signals in randomised controlled trials

    The occurrence, severity, and duration of patient adverse events are routinely recorded during randomised clinical trials. This data is used by a trial's Data Monitoring Committee to make decisions regarding the safety of a treatment and may lead to the alteration or discontinuation of a trial if real safety issues are detected. There are many different types of adverse event and the statistical analysis of this data, particularly with regard to hypothesis testing, must take into account potential multiple comparison issues. Unadjusted hypothesis tests may lead to large numbers of false positive results, but simple adjustments are generally too conservative. In addition, the anticipated effect sizes of adverse events in clinical trials are generally small, and consequently the power to detect such effects is low. A number of recent classical and Bayesian methods, which use groupings of adverse events, have been proposed to address this problem. We illustrate and compare a number of these approaches, and investigate if their use of a common underlying model, which involves groupings of adverse events by body-system or System Organ Class, is useful in detecting adverse events associated with treatments. For data where this type of grouped approach is appropriate, the methods considered are shown to correctly flag more adverse event effects than standard approaches, while maintaining control of the overall error rate. While controlling for multiple types of adverse event, these proposed methods do not take into account event timings or patient exposure time, and are more suited to end-of-trial analysis. In order to address the desire for the early detection of safety issues in clinical trials, a number of Bayesian methods are introduced to analyse the accumulation of adverse events as the trial progresses, taking into account event timing, patient time in study, and body-system. These methods are suitable for use at interim trial safety analyses. The models which performed best were those that had a common body-system dependence over the duration of the trial.
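
    The grouped methods referred to in the two abstracts above share the idea that adverse events within the same body-system should borrow strength from one another. The snippet below is a minimal illustration of that borrowing idea only, using a crude empirical-Bayes beta-binomial shrinkage with made-up counts; it is not the Berry and Berry (2004) hierarchical model nor the interim-analysis models developed in the thesis.

        import numpy as np

        # Hypothetical adverse-event counts on the treatment arm, grouped by body-system.
        # Each entry is (events observed, patients at risk); names and numbers are illustrative only.
        body_systems = {
            "cardiac":          [(4, 200), (6, 200), (5, 200)],
            "gastrointestinal": [(12, 200), (1, 200)],
        }

        for system, events in body_systems.items():
            counts = np.array([e for e, _ in events], dtype=float)
            at_risk = np.array([n for _, n in events], dtype=float)
            raw_rates = counts / at_risk
            # Centre a Beta prior on the body-system mean rate and shrink each
            # event-specific rate towards it; the prior strength k is a tuning choice.
            k = 50.0
            prior_mean = raw_rates.mean()
            shrunk_rates = (counts + k * prior_mean) / (at_risk + k)
            print(system, np.round(raw_rates, 3), "->", np.round(shrunk_rates, 3))

    In this toy version, an isolated extreme rate in an otherwise quiet body-system is pulled towards that system's mean, while a body-system with several elevated rates keeps them elevated, which is loosely the mechanism by which grouped methods gain power over event-by-event testing.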

    A Bayesian hierarchical approach for multiple outcomes in routinely collected healthcare data

    Clinical trials are the standard approach for evaluating new treatments, but may lack the power to assess rare outcomes. Trial results are also necessarily restricted to the population considered in the study. The availability of routinely collected healthcare data provides a source of information on the performance of treatments beyond that offered by clinical trials, but the analysis of this type of data presents a number of challenges. Hierarchical methods, which take advantage of known relationships between clinical outcomes while accounting for bias, may be a suitable statistical approach for the analysis of this data. A study of direct oral anticoagulants in Scotland is discussed and used to motivate a modelling approach. A Bayesian hierarchical model, which allows a stratification of the population into clusters with similar characteristics, is proposed and applied to the direct oral anticoagulant study data. A simulation study is used to assess its performance in terms of outcome detection and error rates.

    Bayesian hierarchical approaches for multiple outcomes in routinely collected healthcare data

    Background: Routinely collected healthcare data provides a rich environment for the investigation of drug performance in the general population, while also offering the possibility of assessing rare outcomes. The statistical analysis of this data poses a number of challenges. The data may be biased and lack the structure and balance provided by the drugs’ clinical trials. Outcomes are often modelled individually with an associated lack of control for multiple comparisons, as well as a difficulty in assessing multiple risks. Methods: Bayesian models provide methods for analysing multiple clinical outcomes, using relationships between outcomes and handling the types of multiple comparison issues which may occur when using multiple single-variate approaches. Lack of balance within the data may be catered for by dividing the population into clusters with similar characteristics, allowing within-cluster inferences to be made. A Bayesian hierarchical model for multiple outcomes is proposed and applied to data from a safety and effectiveness study of direct oral anticoagulants (DOACs) in Scotland, 2009–2015. Results: The Bayesian modelling results were comparable to the results from the original safety and effectiveness study, with the additional benefit of balancing patient clusters and controlling for relationships in the data. Conclusion: Bayesian hierarchical models are a suitable approach for modelling routinely collected healthcare data. There is the possibility of moving to an integrated Bayesian approach, with the inclusion of treatment relationships, uncertainty regarding cluster membership, and treatment allocation in the model, eventually leading to more reliable treatment decisions.
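
    As a rough illustration of the clustering step described in the Methods, the sketch below uses simulated covariates and scikit-learn's KMeans to divide a population into clusters of similar patients and then compares outcome rates within each cluster. The covariates, the cluster count, and the choice of KMeans are illustrative assumptions; the abstract does not specify how clusters were formed in the DOAC study.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)

        # Simulated stand-in for routinely collected covariates (age, comorbidity count)
        # with a binary treatment indicator and outcome; none of this is the study data.
        n = 2000
        X = np.column_stack([rng.normal(70, 10, n), rng.poisson(2, n)])
        treated = rng.integers(0, 2, n)
        outcome = rng.binomial(1, 0.05 + 0.02 * treated)

        # Stratify the population into clusters of patients with similar characteristics,
        # then make within-cluster comparisons of treated vs untreated outcome rates.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        for cluster in range(4):
            for arm in (0, 1):
                sel = (labels == cluster) & (treated == arm)
                print(cluster, arm, sel.sum(), round(outcome[sel].mean(), 3))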

    Crystal structure of rhodopsin bound to arrestin by femtosecond X-ray laser.

    G-protein-coupled receptors (GPCRs) signal primarily through G proteins or arrestins. Arrestin binding to GPCRs blocks G protein interaction and redirects signalling to numerous G-protein-independent pathways. Here we report the crystal structure of a constitutively active form of human rhodopsin bound to a pre-activated form of the mouse visual arrestin, determined by serial femtosecond X-ray laser crystallography. Together with extensive biochemical and mutagenesis data, the structure reveals an overall architecture of the rhodopsin-arrestin assembly in which rhodopsin uses distinct structural elements, including transmembrane helix 7 and helix 8, to recruit arrestin. Correspondingly, arrestin adopts the pre-activated conformation, with a ∼20° rotation between the amino and carboxy domains, which opens up a cleft in arrestin to accommodate a short helix formed by the second intracellular loop of rhodopsin. This structure provides a basis for understanding GPCR-mediated arrestin-biased signalling and demonstrates the power of X-ray lasers for advancing the frontiers of structural biology.

    Oral anticoagulants in patients with atrial fibrillation at low stroke risk: a multicentre observational study

    AIMS: There is currently no consensus on whether atrial fibrillation (AF) patients at low risk for stroke (one non-sex-related CHA2DS2-VASc point) should be treated with an oral anticoagulant. METHODS AND RESULTS: We conducted a multi-country cohort study in Sweden, Denmark, Norway, and Scotland. In total, 59 076 patients diagnosed with AF at low stroke risk were included. We assessed the rates of stroke or major bleeding during treatment with a non-vitamin K antagonist oral anticoagulant (NOAC), a vitamin K antagonist (VKA), or no treatment, using inverse probability of treatment weighted (IPTW) Cox regression. In untreated patients, the rate for ischaemic stroke was 0.70 per 100 person-years and the rate for a bleed was also 0.70 per 100 person-years. Comparing NOAC with no treatment, the stroke rate was lower [hazard ratio (HR) 0.72; 95% confidence interval (CI) 0.56-0.94], and the rate for intracranial haemorrhage (ICH) was not increased (HR 0.84; 95% CI 0.54-1.30). Comparing VKA with no treatment, the rate for stroke tended to be lower (HR 0.81; 95% CI 0.59-1.09), and the rate for ICH tended to be higher during VKA treatment (HR 1.37; 95% CI 0.88-2.14). Comparing NOAC with VKA treatment, the rate for stroke was similar (HR 0.92; 95% CI 0.70-1.22), but the rate for ICH was lower during NOAC treatment (HR 0.63; 95% CI 0.42-0.94). CONCLUSION: These observational data suggest that NOAC treatment may be associated with a positive net clinical benefit compared with no treatment or VKA treatment in patients at low stroke risk, a question that can be tested through a randomized controlled trial.
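
    The weighting approach named in the Methods, IPTW Cox regression, can be sketched briefly. The snippet below is a minimal illustration on simulated data, using scikit-learn for the propensity model and the lifelines CoxPHFitter for the weighted Cox fit; the abstract does not state which software the study used, and the column names and data layout are assumptions.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from lifelines import CoxPHFitter

        # Assumed layout: one row per patient with baseline covariates, a binary treatment
        # indicator (e.g. NOAC vs no treatment), follow-up time, and an event flag.
        rng = np.random.default_rng(0)
        n = 5000
        df = pd.DataFrame({
            "age": rng.normal(68, 8, n),
            "female": rng.integers(0, 2, n),
            "treated": rng.integers(0, 2, n),
            "time": rng.exponential(5.0, n),
            "stroke": rng.binomial(1, 0.05, n),
        })

        # 1. Propensity score: probability of treatment given baseline covariates.
        covs = df[["age", "female"]]
        ps = LogisticRegression(max_iter=1000).fit(covs, df["treated"]).predict_proba(covs)[:, 1]

        # 2. Inverse probability of treatment weights (unstabilised, for brevity).
        df["iptw"] = np.where(df["treated"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

        # 3. Weighted Cox model comparing treated vs untreated.
        cph = CoxPHFitter()
        cph.fit(df[["time", "stroke", "treated", "iptw"]], duration_col="time",
                event_col="stroke", weights_col="iptw", robust=True)
        print(cph.summary)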

    Effectiveness of the 23-valent pneumococcal polysaccharide vaccine against Invasive Pneumococcal Disease incidence in European adults aged 65 years and above: results of SpIDnet/I-MOVE+ multicentre study (2012-2016)

    Background and Aims: We measured the effectiveness of the 23-valent pneumococcal polysaccharide vaccine (PPV23) against invasive pneumococcal disease (IPD) in 65+ year-olds, pooling surveillance data from seven European sites. PPV23 vaccination is recommended in all sites (8-69% uptake) and PCV13 in high-risk groups in two sites (<5% uptake). Methods: We compared the vaccination status of IPD caused by PPV23 serotypes (cases) to that of non-PPV23 IPD (controls) notified between 2012 and 2016. We defined PPV23 vaccination as at least one dose. PPV23 pooled effectiveness was calculated as (1 – odds ratio of vaccination) * 100, adjusted for site, age, sex, underlying conditions and year. We stratified PPV23 effectiveness by time since the last dose of vaccine: <2, 2-4, 5-9 and 10+ years. Results: We included 2011 cases and 878 controls. Compared to controls, cases were younger (p=0.001), less likely to have an underlying condition (p=0.025), more likely to be admitted for intensive care (p=0.038) and to have pneumonia (p=0.005). PPV23 effectiveness was 24% (95% CI: 4; 41) against PPV23 serotypes. By serotype, PPV23 effectiveness ranged between -2% (95% CI: -48; 30) against serotype 3 (n=687) and 55% (95% CI: 15; 76) against serotype 9N IPD (n=540). By years since vaccination, PPV23 effectiveness was 43% (95% CI: 3; 66) and 15% (95% CI: -25; 43) for <2 years and 10+ years, respectively. Conclusion: Our findings suggest a low PPV23 effectiveness against IPD caused by PPV23 serotypes in the elderly, varying by serotype and higher in the first two years after vaccination. Despite low effectiveness, PPV23 in the elderly may prevent at least 25% of cases among the vaccinated.
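
    The effectiveness definition quoted in the Methods, VE = (1 – odds ratio of vaccination) * 100, can be shown with a small worked computation. The 2x2 counts below are invented for illustration and are not the study data; the study additionally adjusted the odds ratio for site, age, sex, underlying conditions and year, which this crude calculation omits.

        import numpy as np

        # Hypothetical counts: cases are PPV23-serotype IPD, controls are non-PPV23 IPD.
        cases_vaccinated, cases_unvaccinated = 300, 700
        controls_vaccinated, controls_unvaccinated = 200, 300

        odds_ratio = (cases_vaccinated / cases_unvaccinated) / (controls_vaccinated / controls_unvaccinated)
        ve = (1 - odds_ratio) * 100
        print(round(odds_ratio, 2), round(ve, 1))   # OR ~0.64, VE ~35.7%

        # Approximate 95% CI for the odds ratio on the log scale, converted to a VE interval.
        se = np.sqrt(1 / cases_vaccinated + 1 / cases_unvaccinated
                     + 1 / controls_vaccinated + 1 / controls_unvaccinated)
        or_lo, or_hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
        print(round((1 - or_hi) * 100, 1), round((1 - or_lo) * 100, 1))  # VE 95% CI (lower, upper)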

    The adverse impact of COVID-19 pandemic on cardiovascular disease prevention and management in England, Scotland and Wales: A population-scale analysis of trends in medication data

    Objectives To estimate the impact of the COVID-19 pandemic on cardiovascular disease (CVD) and CVD management, using routinely collected medication data as a proxy. Design Descriptive and interrupted time series analysis using anonymised individual-level, population-scale data for 1.32 billion records of dispensed CVD medications across 15.8 million individuals in England, Scotland and Wales. Setting Community-dispensed CVD medications with 100% coverage from England, Scotland and Wales, plus primary care prescribed CVD medications from England (including 98% of English general practices). Participants 15.8 million individuals aged 18+ years, alive on 1st April 2018 and dispensed at least one CVD medicine in a year, from England, Scotland and Wales. Main outcome measures Monthly counts, percent annual change (1st April 2018 to 31st July 2021) and annual rates (1st March 2018 to 28th February 2021) of medicines dispensed by CVD/CVD risk factor; prevalent and incident use. Results Year-on-year changes in dispensed CVD medicines by month were observed, with notable uplifts ahead of the first national lockdown (11.8% higher in March 2020) but not subsequent ones. Using hypertension as one example of the indirect impact of the pandemic, we observed that 491,203 fewer individuals initiated antihypertensive treatment across England, Scotland and Wales during the period March 2020 to end May 2021 than would have been expected compared with 2019. We estimated that this missed antihypertensive treatment could result in 13,659 additional CVD events should individuals remain untreated, including 2,281 additional myocardial infarctions (MIs) and 3,474 additional strokes. Incident use of lipid-lowering medicines decreased by an average of 14,793 per month in early 2021 compared with the equivalent months prior to the pandemic in 2019. In contrast, incident use of medicines to treat type-2 diabetes (T2DM) increased by approximately 1,642 patients per month. Conclusions Management of key CVD risk factors, as proxied by incident use of CVD medicines, has not returned to pre-pandemic levels in the UK. Novel methods to identify and treat individuals who have missed treatment are urgently required to avoid large numbers of additional future CVD events, further adding to the indirect costs of the COVID-19 pandemic.
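
    The interrupted time series design described above can be sketched as a segmented regression. The snippet below uses simulated monthly counts and statsmodels, with a single breakpoint at March 2020; the model terms, the breakpoint, and the counterfactual calculation are illustrative assumptions, as the abstract does not give the study's exact specification.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated monthly counts of patients initiating a CVD medicine, Apr 2018 - Jul 2021.
        months = pd.date_range("2018-04-01", "2021-07-01", freq="MS")
        t = np.arange(len(months))
        post = (months >= "2020-03-01").astype(int)              # pandemic-period indicator
        time_since = np.where(post == 1, t - t[post.argmax()], 0)
        rng = np.random.default_rng(0)
        counts = 50000 + 100 * t - 8000 * post - 300 * time_since + rng.normal(0, 1500, len(t))

        df = pd.DataFrame({"counts": counts, "t": t, "post": post, "time_since": time_since})

        # Segmented regression: pre-pandemic trend, level change at March 2020, slope change after.
        fit = smf.ols("counts ~ t + post + time_since", data=df).fit()
        print(fit.params)

        # The counterfactual (no-pandemic) trend gives the expected initiations used to
        # quantify 'missed' treatment during the pandemic months.
        counterfactual = fit.params["Intercept"] + fit.params["t"] * df["t"]
        missed = (counterfactual - fit.fittedvalues)[df["post"] == 1].sum()
        print(round(missed))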

    COVID-19 trajectories among 57 million adults in England: a cohort study using electronic health records

    BACKGROUND: Updatable estimates of COVID-19 onset, progression, and trajectories underpin pandemic mitigation efforts. To identify and characterise disease trajectories, we aimed to define and validate ten COVID-19 phenotypes from nationwide linked electronic health records (EHR) using an extensible framework. METHODS: In this cohort study, we used eight linked National Health Service (NHS) datasets for people in England alive on Jan 23, 2020. Data on COVID-19 testing, vaccination, primary and secondary care records, and death registrations were collected until Nov 30, 2021. We defined ten COVID-19 phenotypes reflecting clinically relevant stages of disease severity and encompassing five categories: positive SARS-CoV-2 test, primary care diagnosis, hospital admission, ventilation modality (four phenotypes), and death (three phenotypes). We constructed patient trajectories illustrating transition frequency and duration between phenotypes. Analyses were stratified by pandemic waves and vaccination status. FINDINGS: Among 57 032 174 individuals included in the cohort, 13 990 423 COVID-19 events were identified in 7 244 925 individuals, equating to an infection rate of 12·7% during the study period. Of 7 244 925 individuals, 460 737 (6·4%) were admitted to hospital and 158 020 (2·2%) died. Of 460 737 individuals who were admitted to hospital, 48 847 (10·6%) were admitted to the intensive care unit (ICU), 69 090 (15·0%) received non-invasive ventilation, and 25 928 (5·6%) received invasive ventilation. Among 384 135 patients who were admitted to hospital but did not require ventilation, mortality was higher in wave 1 (23 485 [30·4%] of 77 202 patients) than wave 2 (44 220 [23·1%] of 191 528 patients), but remained unchanged for patients admitted to the ICU. Mortality was highest among patients who received ventilatory support outside of the ICU in wave 1 (2569 [50·7%] of 5063 patients). 15 486 (9·8%) of 158 020 COVID-19-related deaths occurred within 28 days of the first COVID-19 event without a COVID-19 diagnosis on the death certificate. 10 884 (6·9%) of 158 020 deaths were identified exclusively from mortality data with no previous COVID-19 phenotype recorded. We observed longer patient trajectories in wave 2 than wave 1. INTERPRETATION: Our analyses illustrate the wide spectrum of disease trajectories as shown by differences in incidence, survival, and clinical pathways. We have provided a modular analytical framework that can be used to monitor the impact of the pandemic and generate evidence of clinical and policy relevance using multiple EHR sources. FUNDING: British Heart Foundation Data Science Centre, led by Health Data Research UK.
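
    The headline proportions in the Findings follow directly from the counts reported; a short check using only the figures quoted in the abstract:

        cohort = 57_032_174        # individuals alive in England on Jan 23, 2020
        infected = 7_244_925       # individuals with at least one COVID-19 event
        admitted = 460_737         # individuals admitted to hospital
        died = 158_020             # COVID-19-related deaths

        print(round(100 * infected / cohort, 1))     # 12.7% infection rate over the study period
        print(round(100 * admitted / infected, 1))   # 6.4% of infected individuals admitted
        print(round(100 * died / infected, 1))       # 2.2% of infected individuals died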