2,718 research outputs found
A validation study of the CirCom comorbidity score in an English cirrhosis population using the Clinical Practice Research Datalink
Purpose: The CirCom score was developed from Danish data as a cirrhosis-specific measure of comorbidity for predicting all-cause mortality. We compared its performance with that of the Charlson Comorbidity Index (CCI) in an English cirrhosis population.
Patients and methods: We used comorbidity scores in a survival model to predict mortality in a cirrhosis cohort in the Clinical Practice Research Datalink. The discrimination of each score was compared by age, gender, socioeconomic status, cirrhosis etiology, cirrhosis stage, and year after cirrhosis diagnosis. We also measured their ability to predict liver-related versus non-liver-related death.
Results: There was a small improvement in the C statistic from the model using the CirCom score (C=0.63) compared to the CCI (C=0.62), and there was an overall improvement in the net reclassification index of 1.5%. The improvement was more notable in younger patients, those with an alcohol etiology, and those with compensated cirrhosis. Both scores performed better (C statistic >0.7) for non-liver-related deaths than liver-related deaths (C statistic <0.6), as comorbidity was only weakly predictive of liver-related death.
Conclusion: The CirCom score provided a small improvement in performance over the CCI in the prediction of all-cause and non-liver mortality, but not liver-related mortality. It therefore remains important to include a measure of comorbidity, alongside a measure of cirrhosis severity, in studies of cirrhosis survival.
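For readers unfamiliar with the discrimination measure used here, the sketch below shows how Harrell's C statistic can compare two risk scores against observed survival. It uses the lifelines Python library on simulated data; the scores, follow-up times, and effect sizes are illustrative only, not data from the study.

```python
# Simulated comparison of two comorbidity scores by Harrell's C statistic,
# assuming the lifelines library is installed. All values are invented.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 1000
circom = rng.integers(0, 8, n)   # hypothetical CirCom scores
cci = rng.integers(0, 10, n)     # hypothetical Charlson scores
# Survival times shorten as comorbidity burden rises.
time = rng.exponential(scale=10 / (1 + 0.15 * circom + 0.05 * cci))
event = rng.random(n) < 0.7      # True = death observed, False = censored

# concordance_index expects higher values to predict longer survival,
# so risk scores are negated.
print("CirCom C:", round(concordance_index(time, -circom, event), 2))
print("CCI C:   ", round(concordance_index(time, -cci, event), 2))
```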
A comparison of the recording of comorbidity in primary and secondary care by using the Charlson Index to predict short-term and long-term survival in a routine linked data cohort
OBJECTIVE: Hospital admission records provide snapshots of clinical histories for a subset of the population admitted to hospital. In contrast, primary care records provide continuous clinical histories for complete populations, but might lack detail about inpatient stays. Therefore, combining primary and secondary care records should improve the ability of comorbidity scores to predict survival in population-based studies, and provide better adjustment for case-mix differences when assessing mortality outcomes.
DESIGN: Cohort study.
SETTING: English primary and secondary care 1 January 2005 to 1 January 2010.
PARTICIPANTS: All patients 20 years and older registered to a primary care practice contributing to the linked Clinical Practice Research Datalink from England.
OUTCOME: The performance of the Charlson index with mortality was compared when derived from either primary or secondary care data or both. This was assessed in relation to short-term and long-term survival, age, consultation rate, and specific acute and chronic diseases.
RESULTS: 657,264 people were followed up from 1 January 2005. Although primary care recorded more comorbidity than secondary care, the resulting C statistics for the Charlson index remained similar: 0.86 and 0.87, respectively. Higher consultation rates and restricted age bands reduced the performance of the Charlson index, but the index's excellent performance persisted over longer follow-up; the C statistic was 0.87 over 1 year, and 0.85 over all 5 years of follow-up. The Charlson index derived from secondary care comorbidity had a greater effect than primary care comorbidity in reducing the association of upper gastrointestinal bleeding with mortality. However, they had a similar effect in reducing the association of diabetes with mortality.
CONCLUSIONS: These findings support the use of the Charlson index from linked data and show that secondary care comorbidity coding performed at least as well as that derived from primary care in predicting survival.
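As an illustration of how a Charlson index can be derived from condition flags recorded in either care setting, the following sketch sums published weights over a patient's recorded conditions. The weights shown are a subset of the original 1987 Charlson index; the condition names and patient records are hypothetical.

```python
# Deriving a Charlson index from condition flags that could come from
# primary care (Read-coded) or secondary care (ICD-10-coded) records.
# Weights below are a subset of the original 1987 Charlson weights.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1, "congestive_heart_failure": 1,
    "cerebrovascular_disease": 1, "dementia": 1,
    "chronic_pulmonary_disease": 1, "mild_liver_disease": 1,
    "diabetes": 1, "renal_disease": 2, "any_malignancy": 2,
    "moderate_severe_liver_disease": 3, "metastatic_solid_tumour": 6,
}

def charlson_index(conditions: set[str]) -> int:
    """Sum the weights of the conditions recorded for one patient."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

# Hypothetical patient: conditions seen in each setting, then combined.
primary = {"diabetes", "chronic_pulmonary_disease"}
secondary = {"diabetes", "renal_disease"}
print(charlson_index(primary))              # 2
print(charlson_index(secondary))            # 3
print(charlson_index(primary | secondary))  # 4 (linked records)
```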
THI Application to Insuring Against Heat Stress in Dairy Cows
Heat stress is associated with reduced milk production in dairy cows. Insurance instruments based on an index of ambient temperature and relative humidity measured at Macon, Georgia, and Tallahassee, Florida, are shown to reduce net revenue risk for a representative farm in south-central Georgia.
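For context, the sketch below computes a temperature-humidity index using one widely cited formulation (NRC, 1971); the abstract does not state which THI variant the authors use, and the strike level and tick size in the payout rule are purely illustrative.

```python
# Temperature-humidity index (THI) under the NRC (1971) formulation,
# with a hypothetical index-insurance payout rule layered on top.
def thi(temp_c: float, rel_humidity_pct: float) -> float:
    """THI from dry-bulb temperature (deg C) and relative humidity (%)."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rel_humidity_pct) * (1.8 * temp_c - 26)

def payout(temp_c: float, rel_humidity_pct: float,
           strike: float = 78.0, tick: float = 100.0) -> float:
    """Indemnity per THI point above the strike level (illustrative terms)."""
    return max(0.0, thi(temp_c, rel_humidity_pct) - strike) * tick

print(thi(32, 70))     # about 84.4 on a hot, humid afternoon
print(payout(32, 70))  # about 638.6 owed for that reading
```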
Time Out of General Surgery Specialty Training in the UK: A National Database Study
Objective: General surgery specialty training in the United Kingdom takes 6 years and allows trainees to take time out of training. Studies from the United States have highlighted an increasing trend for taking time out of surgical training for research. This study aimed to evaluate trends in time out of training and their impact on the duration of UK general surgery specialty training.
Design, setting, and participants: A cohort study using routinely collected surgical training data from the Intercollegiate Surgical Curriculum Programme database for general surgery trainees registered from August 1, 2007. Trainees were classified as Completed Training or In-Training. Out-of-training periods were identified and time in training calculated (both unadjusted and adjusted for out-of-training periods), with a predicted time in training for those In-Training.
Results: Of the trainees still In-Training (n = 994), a greater proportion had taken time out of training compared with those who had completed training (n = 360; 54.5% vs 45.9%, p < 0.01). A greater proportion of the In-Training group had undertaken a formal research period compared with the Completed Training group (35.1% vs 6.1%, p < 0.01). Total unadjusted training time in the Completed Training group was a median 6.0 (interquartile range 6.0-7.0) years, compared with a predicted unadjusted training time of a median 8.0 (interquartile range 7.0-9.0) years in the In-Training group with an out-of-training period recorded.
Conclusions: Trainees are increasingly taking time out of surgical training, particularly for research, with a consequent increase in total training time. This should be considered when redesigning surgical training programs and planning the future surgical workforce.
The use of a Bayesian hierarchy to develop and validate a co-morbidity score to predict mortality for linked primary and secondary care data from the NHS in England
Background: We assessed whether linkage between routine primary and secondary care records provides an opportunity to develop an improved population-based co-morbidity score that combines information on co-morbidities from both health care settings.
Methods: We extracted all people older than 20 years at the start of 2005 within the linkage between Hospital Episode Statistics, the Clinical Practice Research Datalink, and the Office for National Statistics death register in England. A random 50% sample was used to identify relevant diagnostic codes, using a Bayesian hierarchy to share information between similar Read and ICD-10 code groupings. Internal validation of the score was performed in the remaining 50%, and discrimination was assessed using Harrell's C statistic. Comparisons were made over time, age, and consultation rate with the Charlson and Elixhauser indexes.
Results: 657,264 people were followed up from 1 January 2005. The Bayesian hierarchy yielded 98 groupings of codes, of which 37 had an adjusted weighting greater than zero in the Cox proportional hazards model. Eleven of these groupings had a different weighting depending on whether they were coded in hospital or primary care. The C statistic fell from 0.88 (95% confidence interval 0.88–0.88) in the first year of follow-up to 0.85 (0.85–0.85) over all 5 years. When we stratified the linked score by consultation rate, the association with mortality remained consistent, but there was a significant interaction with age, with improved discrimination and fit in those under 50 years old (C=0.85, 0.83–0.87) compared with the Charlson (C=0.79, 0.77–0.82) or Elixhauser index (C=0.81, 0.79–0.83).
Conclusions: Using linked population-based primary and secondary care data, we developed a co-morbidity score with improved discrimination, particularly in younger age groups, that gave greater adjustment for co-morbidity than existing scores.
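The partial-pooling idea behind such a Bayesian hierarchy can be illustrated with a small numpy sketch: log hazard ratio estimates from related code groupings are shrunk toward their shared mean, with noisier estimates shrunk more. All numbers below are invented, and the study's model is fitted jointly with the survival model rather than in this two-step empirical-Bayes form.

```python
# Partial pooling of log hazard ratios across related Read/ICD-10 code
# groupings: a simple empirical-Bayes approximation to the hierarchy.
import numpy as np

# Per-grouping log hazard ratio estimates and their standard errors,
# e.g. for codes within one disease family (hypothetical values).
beta_hat = np.array([0.40, 0.10, 0.90, 0.35])
se = np.array([0.10, 0.30, 0.50, 0.15])

tau2 = 0.05  # assumed between-grouping variance
mu = np.average(beta_hat, weights=1 / (se**2 + tau2))  # shared mean

# Shrinkage factor: precise estimates keep their own value,
# imprecise ones are pulled toward the shared mean.
shrink = tau2 / (tau2 + se**2)
beta_pooled = shrink * beta_hat + (1 - shrink) * mu

for b, s, bp in zip(beta_hat, se, beta_pooled):
    print(f"raw {b:+.2f} (se {s:.2f}) -> pooled {bp:+.2f}")
```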
Changing Autonomy in Operative Experience Through UK General Surgery Training: A National Cohort Study
Objectives: To determine the operative experience of UK general surgery trainees and assess the changing procedural supervision and acquisition of competency assessments through the course of training.
Background Summary Data: Competency assessment is changing, with concepts of trainee autonomy decisions (termed entrustment decisions) being introduced to surgical training.
Methods: Data from the Intercollegiate Surgical Curriculum Programme (ISCP) and eLogbook databases for all UK general surgery trainees registered from 1 August 2007 who had completed training were used. Total and index procedures (IP) were counted and variation by year of training assessed. Recorded supervision codes and competency assessment outcomes for IPs were assessed by year of training.
Results: We identified 311 trainees with complete data. Appendicectomy was the most frequently undertaken IP during the first year of training (mean procedures (mp) = 26) and emergency laparotomy during the final year of training (mp = 27). The proportion of all IPs recorded as unsupervised increased through training (
The risk of community-acquired pneumonia among 9803 patients with coeliac disease compared to the general population: a cohort study
Background: Patients with coeliac disease are among those for whom pneumococcal vaccination is advocated.
Aim: To quantify the risk of community-acquired pneumonia among patients with coeliac disease, and to assess whether pneumococcal vaccination modified this risk.
Methods: We identified all patients with coeliac disease in the Clinical Practice Research Datalink linked with English Hospital Episode Statistics between April 1997 and March 2011, and up to 10 controls per patient frequency matched in 10-year age bands. Absolute rates of community-acquired pneumonia were calculated, and the risk in patients with coeliac disease relative to controls, stratified by vaccination status and time of diagnosis, was estimated as adjusted hazard ratios (HR) using Cox regression.
Results: Among 9803 patients with coeliac disease and 101 755 controls, there were 179 and 1864 first community-acquired pneumonia events, respectively. The overall absolute rate of pneumonia was similar in patients with coeliac disease and controls: 3.42 and 3.12 per 1000 person-years, respectively (HR 1.07, 95% CI 0.91–1.24). However, we found a 28% increased risk of pneumonia in unvaccinated patients with coeliac disease compared with unvaccinated controls (HR 1.28, 95% CI 1.02–1.60). This increased risk was limited to those younger than 65 years, was highest around the time of diagnosis, and was maintained for more than 5 years after diagnosis. Only 26.6% of patients underwent vaccination after their coeliac disease diagnosis.
Conclusions: Unvaccinated patients with coeliac disease under the age of 65 have an excess risk of community-acquired pneumonia that was not found in vaccinated patients with coeliac disease. As only a minority of patients with coeliac disease are vaccinated, there is a missed opportunity to protect these patients from pneumonia.
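The adjusted hazard ratios reported in studies like this one are typically estimated with a Cox model. The sketch below does so with the lifelines Python library on simulated data; the column names, covariates, and effect sizes are hypothetical, not the study's.

```python
# Estimating an adjusted hazard ratio for pneumonia among unvaccinated
# patients with a Cox model, on simulated data (lifelines assumed).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "coeliac": rng.integers(0, 2, n),
    "vaccinated": rng.integers(0, 2, n),
    "age": rng.integers(20, 90, n),
})
# Simulate time to first pneumonia, with an excess hazard only in
# unvaccinated coeliac patients (an invented 0.25 log hazard ratio).
hazard = 0.01 * np.exp(0.25 * df["coeliac"] * (1 - df["vaccinated"])
                       + 0.03 * (df["age"] - 55))
df["time"] = rng.exponential(1 / hazard)
df["event"] = df["time"] < 15      # observed within follow-up
df.loc[~df["event"], "time"] = 15  # administrative censoring

cph = CoxPHFitter()
cph.fit(df[df["vaccinated"] == 0].drop(columns="vaccinated"),
        duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # age-adjusted HR for coeliac, unvaccinated only
```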
Ariel - Volume 4 Number 6
Editors
David A. Jacoby
Eugenia Miller
Tom Williams
Associate Editors
Paul Bialas
Terry Burt
Michael Leo
Gail Tenikat
Editor Emeritus and Business Manager
Richard J. Bonnano
Movie Editor
Robert Breckenridge
Staff
Richard Blutstein
Mary F. Buechler
J.D. Kanofsky
Rocket Weber
David Maye
Incidence and prevalence of celiac disease and dermatitis herpetiformis in the UK over two decades: population-based study
OBJECTIVES: Few studies have quantified the incidence and prevalence of celiac disease (CD) and dermatitis herpetiformis (DH) nationally and regionally by time and age group. Understanding this epidemiology is crucial for hypothesizing about causes and quantifying the burden of disease.
METHODS: Patients with CD or DH were identified in the Clinical Practice Research Datalink between 1990 and 2011. Incidence rates and prevalence were calculated by age, sex, year, and region of residence. Incidence rate ratios (IRR) adjusted for age, sex, and region were calculated with Poisson regression.
RESULTS: A total of 9,087 incident cases of CD and 809 incident cases of DH were identified. Between 1990 and 2011, the incidence rate of CD increased from 5.2 per 100,000 person-years (95% confidence interval (CI), 3.8-6.8) to 19.1 per 100,000 person-years (95% CI, 17.8-20.5; IRR, 3.6; 95% CI, 2.7-4.8). The incidence of DH decreased over the same period from 1.8 to 0.8 per 100,000 person-years (average annual IRR, 0.96; 95% CI, 0.94-0.97). The absolute incidence of CD per 100,000 person-years ranged from 22.3 in Northern Ireland to 10.0 in London. There were large regional variations in prevalence for CD but not DH.
CONCLUSIONS: We found a fourfold increase in the incidence of CD in the United Kingdom over 22 years, with large regional variations in prevalence. This contrasted with a 4% annual decrease in the incidence of DH, with minimal regional variations in prevalence. These contrasts could reflect differences in diagnosis between CD (serological diagnosis and case finding) and DH (symptomatic presentation), or the possibility that diagnosing and treating CD prevents the development of DH.
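The incidence rate ratios here come from Poisson regression with a person-time offset. The sketch below reproduces that kind of calculation with statsmodels on hypothetical aggregate counts, chosen only to resemble the reported rates.

```python
# Incidence rate ratio via Poisson regression with a log person-years
# offset, using statsmodels on invented aggregate data for two periods.
import numpy as np
import statsmodels.api as sm

cases = np.array([52, 191])          # hypothetical CD case counts
person_years = np.array([1_000_000, 1_000_000])
period = np.array([0, 1])            # 0 = 1990, 1 = 2011

X = sm.add_constant(period)
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(person_years)).fit()

irr = np.exp(model.params[1])        # incidence rate ratio, 2011 vs 1990
lo, hi = np.exp(model.conf_int()[1])
print(f"IRR {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```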
A Design Guide for Open Online Courses
This guide is a comprehensive summary of how we went about creating Citizen Maths, an open online maths course and service.
The guide shares our design principles and the techniques we used to put them into practice.
Our aim is to provide – with the appropriate ‘translation’ – a resource that will be useful to other teams who are developing online education initiatives.
