
    Defined Contribution Pension Plans in the Public Sector: A Benchmark Analysis

    This chapter assesses best practice benchmarks for the design of defined contribution plans in the public sector, where such plans are the primary, or core, employment-based retirement benefit. These benchmarks rely on the notion that providing an adequate and secure retirement income for participants is the primary plan objective.

    The Moses–Littenberg meta-analytical method generates systematic differences in test accuracy compared to hierarchical meta-analytical models

    Abstract Objective: To compare meta-analyses of diagnostic test accuracy using the Moses–Littenberg summary receiver operating characteristic (SROC) approach with those of the hierarchical SROC (HSROC) model. Study Design and Setting: Twenty-six data sets from existing test accuracy systematic reviews were reanalyzed with the Moses–Littenberg model, using equal weighting (“E-ML”) and weighting by the inverse variance of the log DOR (“W-ML”), and with the HSROC model. The diagnostic odds ratios (DORs) were estimated, and covariates were added to both models to estimate relative DORs (RDORs) between subgroups. Models were compared by calculating the ratio of DORs, the ratio of RDORs, and P-values for detecting asymmetry and effects of covariates on the DOR. Results: Compared to the HSROC model, the Moses–Littenberg model DOR estimates were a median of 22% (“E-ML”) and 47% (“W-ML”) lower at Q*, and 7% and 42% lower at the central point in the data. Instances of the ML models giving estimates higher than the HSROC model also occurred. Investigations of heterogeneity also differed, with the Moses–Littenberg models on average estimating smaller differences in RDOR. Conclusions: Moses–Littenberg meta-analyses can generate lower estimates of test accuracy, and smaller differences in accuracy, compared to mathematically superior hierarchical models. This has implications for the usefulness of meta-analyses using this approach. We recommend that meta-analyses of diagnostic test accuracy studies be conducted using the available hierarchical model-based approaches.
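    The core of the Moses–Littenberg approach is a linear regression of the log diagnostic odds ratio on a threshold proxy. The sketch below illustrates it on a few hypothetical 2×2 study counts, with both the equal-weighting (“E-ML”) and inverse-variance-weighting (“W-ML”) variants; it is an illustration of the published method, not the analysis code used in this study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study 2x2 counts: columns are TP, FP, FN, TN
studies = np.array([
    [45, 10, 5, 90],
    [30, 20, 12, 70],
    [60, 8, 15, 110],
    [25, 15, 10, 55],
], dtype=float)

tp, fp, fn, tn = (studies[:, i] + 0.5 for i in range(4))  # 0.5 continuity correction
tpr, fpr = tp / (tp + fn), fp / (fp + tn)

D = np.log(tpr / (1 - tpr)) - np.log(fpr / (1 - fpr))  # log diagnostic odds ratio
S = np.log(tpr / (1 - tpr)) + np.log(fpr / (1 - fpr))  # proxy for the test threshold

X = sm.add_constant(S)

# "E-ML": equal weighting of studies (ordinary least squares)
e_ml = sm.OLS(D, X).fit()

# "W-ML": weighting by the inverse variance of the log DOR
w_ml = sm.WLS(D, X, weights=1 / (1 / tp + 1 / fp + 1 / fn + 1 / tn)).fit()

# The pooled DOR at the central point (S = 0) is exp(intercept); a non-zero
# slope indicates asymmetry of the summary ROC curve.
for name, fit in [("E-ML", e_ml), ("W-ML", w_ml)]:
    print(name, "DOR:", round(np.exp(fit.params[0]), 1), "slope p:", round(fit.pvalues[1], 3))
```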

    All coffee types decrease the risk of adverse clinical outcomes in chronic liver disease: A UK Biobank study

    Abstract Background: Chronic liver disease (CLD) is a growing cause of morbidity and mortality worldwide, particularly in low to middle-income countries with a high disease burden and limited treatment availability. Coffee consumption has been linked with lower rates of CLD, but little is known about the effects of different coffee types, which vary in chemical composition. This study aimed to investigate associations of coffee consumption, including decaffeinated, instant and ground coffee, with chronic liver disease outcomes. Methods: A total of 494,585 UK Biobank participants with known coffee consumption and electronic linkage to hospital, death and cancer records were included in this study. Cox regression was used to estimate hazard ratios (HR) of incident CLD, incident CLD or steatosis, incident hepatocellular carcinoma (HCC) and death from CLD according to coffee consumption of any type as well as for decaffeinated, instant and ground coffee individually. Results: Among 384,818 coffee drinkers and 109,767 non-coffee drinkers, there were 3600 cases of CLD, 5439 cases of CLD or steatosis, 184 cases of HCC and 301 deaths from CLD during a median follow-up of 10.7 years. Compared to non-coffee drinkers, coffee drinkers had lower adjusted HRs of CLD (HR 0.79, 95% CI 0.72–0.86), CLD or steatosis (HR 0.80, 95% CI 0.75–0.86), death from CLD (HR 0.51, 95% CI 0.39–0.67) and HCC (HR 0.80, 95% CI 0.54–1.19). The associations for decaffeinated, instant and ground coffee individually were similar to those for all types combined. Conclusion: The finding that all types of coffee are protective against CLD is significant given the increasing incidence of CLD worldwide and the potential of coffee as an intervention to prevent CLD onset or progression.
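    As a concrete illustration of the Cox modelling described above, the sketch below fits a proportional hazards model with the lifelines package on a small, entirely hypothetical data frame; the column names and values are placeholders, not UK Biobank fields or results.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per participant
df = pd.DataFrame({
    "followup_years": [10.2, 8.7, 11.0, 9.5, 10.7, 7.3, 6.1, 10.9, 9.1, 5.4],
    "incident_cld":   [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],   # event indicator (CLD diagnosis)
    "coffee_drinker": [1, 0, 1, 1, 1, 0, 0, 1, 0, 1],   # any coffee vs none
    "age":            [52, 61, 47, 66, 55, 43, 70, 58, 62, 49],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="incident_cld")

# Hazard ratios (exp(coef)) with 95% CIs, analogous to the adjusted HRs reported above
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```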

    Evolution of European prostate cancer screening protocols and summary of ongoing trials

    Population-based organised repeated screening for prostate cancer has been found to reduce disease-specific mortality, but with substantial overdiagnosis leading to overtreatment. Although only very few countries have implemented a screening programme on a national level, individual prostate-specific antigen (PSA) testing is common. This opportunistic testing may have little favourable impact while accentuating the side-effects. The classic early detection protocols that were state-of-the-art in the 1990s applied a PSA and digital rectal examination threshold for sextant systematic prostate biopsy, with a fixed interval for re-testing and a limited indication for expectant management. In the three decades since these trials were started, several important improvements have become available across the cascade of screening, indication for biopsy, and treatment. The main developments include: better identification of individuals at risk (using early/baseline PSA, family history, and/or genetic profile); individualised re-testing intervals; optimised and individualised starting and stopping ages, with gradual invitation at a fixed age rather than invitation of a wider range of age groups; risk stratification for biopsy (using PSA density, risk calculators, magnetic resonance imaging, serum and urine biomarkers, or combinations/sequences of these); targeted biopsy; a transperineal biopsy approach; active surveillance for low-risk prostate cancer; and improved staging of disease. All these developments are suggested to decrease the side-effects of screening while at least maintaining the advantages, but Level 1 evidence is lacking. The knowledge gained and new developments in early detection are being tested in different prospective screening trials throughout Europe. In addition, the European Union-funded PRostate cancer Awareness and Initiative for Screening in the European Union (PRAISE-U) project will compare and evaluate different screening pilots throughout Europe. Implementation and sustainability will also be addressed. Modern screening approaches may reduce the burden of the second most frequent cause of cancer-related death in European males, while minimising side-effects. Also, less efficacious opportunistic early detection may be indirectly reduced.
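    Several of the risk-stratification elements mentioned above (PSA density, MRI findings) lend themselves to simple decision rules. The sketch below is purely illustrative: the 0.15 ng/mL/mL cut-off and the structure of the rule are assumptions for demonstration, not a protocol from any of the trials discussed.

```python
# Illustrative PSA-density-based stratification; thresholds are assumptions, not guidance.
def psa_density(psa_ng_ml: float, prostate_volume_ml: float) -> float:
    """PSA density = serum PSA divided by prostate volume."""
    return psa_ng_ml / prostate_volume_ml

def needs_further_workup(psa_ng_ml: float, prostate_volume_ml: float,
                         mri_suspicious: bool, threshold: float = 0.15) -> bool:
    """Refer for targeted biopsy if the MRI is suspicious or PSA density exceeds the cut-off."""
    return mri_suspicious or psa_density(psa_ng_ml, prostate_volume_ml) >= threshold

# Example: PSA 6.0 ng/mL, 50 mL gland, non-suspicious MRI -> density 0.12, no referral
print(needs_further_workup(psa_ng_ml=6.0, prostate_volume_ml=50.0, mri_suspicious=False))
```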

    Methacholine and PDGF activate store-operated calcium entry in neuronal precursor cells via distinct calcium entry channels

    Neurons are a diverse cell type exhibiting hugely different morphologies and neurotransmitter specifications. Their distinctive phenotypes are established during differentiation from pluripotent precursor cells. The signalling pathways that specify the lineage down which neuronal precursor cells differentiate remain to be fully elucidated. Among the many signals that impinge on the differentiation of neuronal cells, cytosolic calcium (Ca2+) has an important role. However, little is known about the nature of the Ca2+ signals involved in fate choice in neuronal precursor cells, or their sources. In this study, we show that activation of either muscarinic or platelet-derived growth factor (PDGF) receptors induces a biphasic increase in cytosolic Ca2+ that consists of release from intracellular stores followed by sustained entry across the plasma membrane. For both agonists, the prolonged Ca2+ entry occurred via a store-operated pathway that was pharmacologically indistinguishable from Ca2+ entry initiated by thapsigargin. However, muscarinic receptor-activated Ca2+ entry was inhibited by siRNA-mediated knockdown of TRPC6, whereas Ca2+ entry evoked by PDGF was not. These data provide evidence for agonist-specific activation of molecularly distinct store-operated Ca2+ entry pathways, and raise the possibility of privileged communication between these Ca2+ entry pathways and downstream processes.

    Inequity in access to transplantation in the UK

    Background and objectives: Despite the presence of a universal health care system, it is unclear whether there is intercenter variation in access to kidney transplantation in the United Kingdom. This study aims to assess whether equity exists in access to kidney transplantation in the United Kingdom after adjustment for patient-specific factors and center practice patterns. Design, setting, participants, & measurements: In this prospective, observational cohort study including all 71 United Kingdom kidney centers, incident RRT patients recruited between November 2011 and March 2013 as part of the Access to Transplantation and Transplant Outcome Measures study were analyzed to assess preemptive listing (n=2676) and listing within 2 years of starting dialysis (n=1970) by center. Results: Seven hundred and six participants (26%) were listed preemptively, whereas 585 (30%) were listed within 2 years of commencing dialysis. The interquartile range across centers was 6%–33% for preemptive listing and 25%–40% for listing after starting dialysis. Patient factors, including increasing age, most comorbidities, body mass index >35 kg/m², and lower socioeconomic status, were associated with a lower likelihood of being listed and accounted for 89% and 97% of measured intercenter variation for preemptive listing and listing within 2 years of starting dialysis, respectively. Asian (odds ratio, 0.49; 95% confidence interval, 0.33 to 0.72) and Black (odds ratio, 0.43; 95% confidence interval, 0.26 to 0.71) participants had reduced access to preemptive listing; however, Asian participants had a higher likelihood of being listed after starting dialysis (odds ratio, 1.42; 95% confidence interval, 1.12 to 1.79). As for center factors, being registered at a transplanting center (odds ratio, 3.1; 95% confidence interval, 2.36 to 4.07) and a universal approach to discussing transplantation (odds ratio, 1.4; 95% confidence interval, 1.08 to 1.78) were associated with higher rates of preemptive listing, whereas use of a written protocol was negatively associated with listing within 2 years of starting dialysis (odds ratio, 0.7; 95% confidence interval, 0.58 to 0.9). Conclusions: Patient case mix accounts for most of the intercenter variation seen in access to transplantation in the United Kingdom, with practice patterns also contributing some variation. Socioeconomic inequity exists despite a universal health care system.
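    The adjusted odds ratios quoted above come from multivariable logistic regression. The sketch below shows the general pattern with statsmodels on simulated data; the variable names and values are illustrative placeholders, not the study's covariates or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, purely illustrative data set
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "listed_preemptively": rng.integers(0, 2, n),
    "age": rng.normal(55, 12, n).round(),
    "transplanting_centre": rng.integers(0, 2, n),
    "universal_discussion": rng.integers(0, 2, n),
})

# Adjusted logistic model: listing as a function of patient and centre factors
model = smf.logit(
    "listed_preemptively ~ age + transplanting_centre + universal_discussion", data=df
).fit(disp=0)

# Odds ratios with 95% confidence intervals, analogous to those reported above
odds_ratios = pd.concat([model.params, model.conf_int()], axis=1)
print(np.exp(odds_ratios))
```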

    Validity of estimated prevalence of decreased kidney function and renal replacement therapy from primary care electronic health records compared with national survey and registry data in the United Kingdom.

    BACKGROUND: Anonymous primary care records are an important resource for observational studies. However, their external validity in identifying the prevalence of decreased kidney function and renal replacement therapy (RRT) is unknown. We thus compared the prevalence of decreased kidney function and RRT in the Clinical Practice Research Datalink (CPRD) with a nationally representative survey and a national registry. METHODS: Among all people ≥25 years of age registered in the CPRD for ≥1 year on 31 March 2014, we identified patients with an estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m², according to their most recent serum creatinine in the past 5 years using the Chronic Kidney Disease Epidemiology Collaboration equation, and patients with recorded diagnoses of RRT. Denominators were the entire population in each age-sex band, irrespective of creatinine measurement. The prevalence of eGFR <60 mL/min/1.73 m² was compared with that in the Health Survey for England (HSE) 2009/2010, and the prevalence of RRT was compared with that in the UK Renal Registry (UKRR) 2014. RESULTS: We analysed 2 761 755 people in CPRD [mean age 53 (SD 17) years, men 49%], of whom 189 581 (6.86%) had an eGFR <60 mL/min/1.73 m² and 3293 (0.12%) were on RRT. The prevalence of eGFR <60 mL/min/1.73 m² in CPRD was similar to that in the HSE, and the prevalence of RRT was close to that in the UKRR across all age groups in men and women, although the small number of younger patients with an eGFR <60 mL/min/1.73 m² in the HSE might have hampered precise comparison. CONCLUSIONS: UK primary care data have good external validity for the prevalence of decreased kidney function and RRT.
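    For reference, the sketch below implements the 2009 CKD-EPI creatinine equation used to flag eGFR <60 mL/min/1.73 m². It is the published formula coded directly, not the study's extraction pipeline; creatinine is assumed to be in mg/dL (divide µmol/L values by 88.4), and the 2021 race-free revision is not shown.

```python
def ckd_epi_2009_egfr(creatinine_mg_dl: float, age_years: float,
                      female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation; returns eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age_years
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient in the 2009 equation (dropped in the 2021 revision)
    return egfr

# Example: most recent creatinine 1.4 mg/dL in a 70-year-old woman -> eGFR ~38
egfr = ckd_epi_2009_egfr(1.4, 70, female=True)
print(f"eGFR = {egfr:.1f}", "-> decreased kidney function" if egfr < 60 else "")
```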

    Prediction of liver disease in patients whose liver function tests have been checked in primary care: model development and validation using population-based observational cohorts

    This work was supported by the UK National Health Service Research & Development Programme Health Technology Assessment Programme (project number 03/38/02) and also by the Backett Weir Russell Career Development Fellowship, University of Aberdeen. OBJECTIVE: To derive and validate a clinical prediction model to estimate the risk of a liver disease diagnosis following liver function tests (LFTs) and to convert the model to a simplified scoring tool for use in primary care. DESIGN: Population-based observational cohort study of patients in Tayside, Scotland, identified as having their LFTs performed in primary care and followed for 2 years. Biochemistry data were linked to secondary care, prescription and mortality data to ascertain baseline characteristics of the derivation cohort. A separate validation cohort was obtained from 19 general practices across the rest of Scotland to externally validate the final model. SETTING: Primary care, Tayside, Scotland. PARTICIPANTS: Derivation cohort: LFT results from 310 511 patients. After exclusions (including patients under 16 years, patients having initial LFTs measured in secondary care, bilirubin >35 µmol/L, liver complications within 6 weeks and a history of a liver condition), the derivation cohort contained 95 977 patients with no clinically apparent liver condition. Validation cohort: after exclusions, this cohort contained 11 653 patients. PRIMARY AND SECONDARY OUTCOME MEASURES: Diagnosis of a liver condition within 2 years. RESULTS: In the derivation cohort (n=95 977), 481 (0.5%) were diagnosed with a liver disease. The model showed good discrimination (C-statistic=0.78). Given the low prevalence of liver disease, the negative predictive values were high. Positive predictive values were low but rose to 20–30% for high-risk patients. CONCLUSIONS: This study successfully developed and validated a clinical prediction model and subsequent scoring tool, the Algorithm for Liver Function Investigations (ALFI), which can predict liver disease risk in patients with no clinically obvious liver disease who had their initial LFTs taken in primary care. ALFI can help general practitioners focus referral on a small subset of patients with higher predicted risk while continuing to address modifiable liver disease risk factors in those at lower risk.
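    The sketch below illustrates the general workflow of deriving a risk model from LFT-style predictors and checking its discrimination with a C-statistic, in the spirit of the ALFI tool; the predictors, simulated data, and coefficients are hypothetical and are not the published algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated, purely illustrative predictors
rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(30, 15, n),   # e.g. ALT (U/L)
    rng.normal(25, 10, n),   # e.g. GGT (U/L)
    rng.normal(55, 16, n),   # age (years)
])

# Hypothetical outcome: liver disease diagnosis within 2 years (low prevalence, ~0.5%)
logit = -7 + 0.02 * X[:, 0] + 0.03 * X[:, 1] + 0.01 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Derivation / validation split, then a simple logistic prediction model
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# C-statistic (ROC AUC) on the held-out set; the paper reports 0.78 for its own model
c_stat = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"C-statistic: {c_stat:.2f}")
```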
