
    Effect of nutrition survey 'cleaning criteria' on estimates of malnutrition prevalence and disease burden: secondary data analysis.

    Tackling childhood malnutrition is a global health priority. A key indicator is the estimated prevalence of malnutrition, measured by nutrition surveys. Most aspects of survey design are standardised, but data 'cleaning criteria' are not. These aim to exclude extreme values which may represent measurement or data-entry errors. The effect of different cleaning criteria on malnutrition prevalence estimates was unknown. We applied five commonly used data cleaning criteria (WHO 2006; EPI-Info; WHO 1995 fixed; WHO 1995 flexible; SMART) to 21 national Demographic and Health Survey datasets, covering a total of 163,228 children aged 6-59 months. We focused on wasting (low weight-for-height), a key indicator for treatment programmes. Choice of cleaning criteria had a marked effect: SMART criteria were the least inclusive, resulting in the lowest reported malnutrition prevalence, while WHO 2006 criteria were the most inclusive, resulting in the highest. Across the 21 countries, the proportion of records excluded was 3 to 5 times greater when using SMART compared with WHO 2006 criteria, resulting in differences in the estimated prevalence of total wasting of 0.5-3.8% and differences in severe wasting of 0.4-3.9%. The magnitude of difference was associated with the standard deviation of the survey sample, a statistic that can reflect both population heterogeneity and data quality. Using these results to estimate caseloads for treatment programmes resulted in large differences for all countries. Wasting prevalence and caseload estimates are strongly influenced by the choice of cleaning criteria. Because key policy and programming decisions depend on these statistics, variations in analytical practice could lead to inconsistent and potentially inappropriate implementation of malnutrition treatment programmes. We therefore call for mandatory reporting of the cleaning criteria used, so that results can be compared and interpreted appropriately. International consensus is urgently needed on the choice of criteria to improve the comparability of nutrition survey data.
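    To make the contrast between fixed and flexible exclusion rules concrete, the sketch below applies two illustrative flagging rules to simulated weight-for-height z-scores: a fixed window of -5 to +5 around the reference mean (in the spirit of the WHO 2006 flags) and a window of 3 z-scores either side of the observed survey mean (in the spirit of the SMART flags). This is a minimal Python sketch on synthetic data, not the study's analysis code; the function names, the exact cut-offs as written, and the simulated distribution are assumptions for illustration.

        import numpy as np
        import pandas as pd

        def who2006_flags(whz: pd.Series) -> pd.Series:
            """Fixed exclusion range: flag WHZ values outside -5 to +5 around the reference mean (zero)."""
            return (whz < -5) | (whz > 5)

        def smart_flags(whz: pd.Series) -> pd.Series:
            """Flexible exclusion range: flag WHZ values more than 3 z-scores from the observed survey mean."""
            centre = whz.mean()
            return (whz < centre - 3) | (whz > centre + 3)

        def wasting_prevalence(whz: pd.Series, flags: pd.Series) -> float:
            """Prevalence of wasting (WHZ < -2) among records retained after cleaning."""
            kept = whz[~flags]
            return float((kept < -2).mean())

        # Illustrative comparison on simulated data (not survey results).
        rng = np.random.default_rng(0)
        whz = pd.Series(rng.normal(loc=-0.8, scale=1.3, size=10_000))
        for name, rule in [("WHO 2006-style", who2006_flags), ("SMART-style", smart_flags)]:
            f = rule(whz)
            print(f"{name}: excluded {f.mean():.1%}, wasting prevalence {wasting_prevalence(whz, f):.1%}")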

    Working by Committee: Formal and Informal Assessment Collaborations--Assessment Committees and Beyond

    During this panel discussion, three very different academic libraries will provide perspectives on the role of an assessment committee and the assessment librarian in library strategic planning. Topics discussed will include:
    - The creation of the Assessment Committee (AC) and the role of the Assessment Librarian within the committee
    - Strategic planning for the committee, as well as how assessment projects are chosen for the committee and for the library
    - The current and past projects of the AC
    As time permits, the panel will also discuss:
    - How librarians outside of the committee use the AC and the Assessment or U/X Librarian
    - Ways in which libraries can get assessment projects started, including small-scale training and initiatives
    - Working with the Office of Research (OIR) within the University
    - The Lib-Value initiative and the need to demonstrate library value
    - How assessment provides evidence for overall strategic planning for the library

    A novel approach to evaluating the UK childhood immunisation schedule: estimating the effective coverage vector across the entire vaccine programme

    BACKGROUND: The availability of new vaccines can prompt policy makers to consider changes to the routine childhood immunisation programme in the UK. Alterations to one aspect of the schedule may have implications for other areas of the programme (e.g. adding more injections could reduce uptake of vaccines featuring later in the schedule). Colleagues at the Department of Health (DH) in the UK therefore wanted to know whether assessing the impact across the entire programme of a proposed change to the UK schedule could lead to different decisions than those made on the current case-by-case basis. This work is a first step towards addressing this question. METHODS: A novel framework for estimating the effective coverage against all of the diseases within a vaccination programme was developed. The framework was applied to the current (August 2015) UK childhood immunisation programme, plausible extensions to it in the foreseeable future (introducing vaccination against Meningitis B and/or Hepatitis B) and a "what-if" scenario regarding a Hepatitis B vaccine scare that was developed in close collaboration with DH. RESULTS: Our applications of the framework demonstrate that a programme-wide view of hypothetical changes to the schedule is important. For example, we show how introducing Hepatitis B vaccination could negatively impact aspects of the current programme by reducing uptake of vaccines featuring later in the schedule, and illustrate that the potential benefits of introducing any new vaccine are susceptible to behaviour changes affecting uptake (e.g. a vaccine scare). We show how it may be useful to consider the potential benefits and scheduling needs of all vaccinations on the horizon of interest rather than those of an individual vaccine in isolation, e.g. how introducing Meningitis B vaccination could saturate the early (2-month) visit, thereby potentially restricting scheduling options for Hepatitis B immunisation should it be introduced to the programme in the future. CONCLUSIONS: Our results demonstrate the potential benefit of considering the programme-wide impact of changes to an immunisation schedule, and our framework is an important step in the development of a means for systematically doing so.
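    As a rough illustration of what an effective coverage vector could look like, the Python sketch below multiplies an assumed uptake by an assumed vaccine effectiveness for each disease in a hypothetical three-vaccine schedule, then recomputes the vector under a "scare" scenario in which hepatitis B uptake falls and a vaccine given later in the schedule loses a little uptake. The diseases, uptake values and effectiveness values are invented for illustration and are not taken from the paper or its framework.

        from typing import Dict

        def effective_coverage(uptake: Dict[str, float], effectiveness: Dict[str, float]) -> Dict[str, float]:
            """Effective coverage per disease = proportion vaccinated x assumed vaccine effectiveness."""
            return {disease: uptake[disease] * effectiveness[disease] for disease in uptake}

        # Hypothetical baseline schedule (all values illustrative, not from the study).
        uptake = {"pertussis": 0.94, "MenB": 0.92, "HepB": 0.90}
        effectiveness = {"pertussis": 0.85, "MenB": 0.88, "HepB": 0.95}
        baseline = effective_coverage(uptake, effectiveness)

        # "What-if": a scare reduces HepB uptake, and knock-on effects trim uptake of a later vaccine.
        scare_uptake = dict(uptake, HepB=0.70, MenB=uptake["MenB"] - 0.02)
        scenario = effective_coverage(scare_uptake, effectiveness)

        for disease in baseline:
            print(f"{disease}: baseline {baseline[disease]:.2f} -> scare scenario {scenario[disease]:.2f}")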

    Informing the management of pediatric heart transplant waiting lists: complementary use of simulation and analytical modeling

    A clinical intervention known as 'bridging to transplant', in which a patient is placed on life-sustaining support, can be used to increase the chance of an individual surviving until a donor heart becomes available. However, the impact of this on other patients on the waiting list, and the wider implications for the resourcing of cardiac units, remain unclear. Initial insights have previously been generated using a birth-death queuing model, but this model did not incorporate realistic donor-recipient assumptions regarding blood type and weight. Here we report on a complementary simulation study that examined how estimates from the analytical model might change if organ matching were better taken into account. Simulation results showed that system metrics changed substantially when recipient-donor compatibility was modelled. However, the effect of blood-type compatibility was countered by that of weight compatibility; when combined, these had a relatively small net effect on results.
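    The flavour of the matching logic involved can be sketched in a few lines of Python: each donor heart is offered to the longest-waiting recipient who is both ABO blood-group compatible and within a relative weight tolerance. The compatibility table, the 30% weight tolerance, and the randomly generated donors and recipients are simplifying assumptions for illustration and do not reproduce the study's simulation model.

        import random

        # Simplified ABO compatibility: donor blood group -> recipient blood groups that can accept it.
        ABO_COMPATIBLE = {
            "O": {"O", "A", "B", "AB"},
            "A": {"A", "AB"},
            "B": {"B", "AB"},
            "AB": {"AB"},
        }

        def compatible(donor, recipient, weight_tolerance=0.3):
            """A donor-recipient pair matches if ABO-compatible and weights are within a relative tolerance."""
            abo_ok = recipient["blood"] in ABO_COMPATIBLE[donor["blood"]]
            weight_ok = abs(donor["weight"] - recipient["weight"]) <= weight_tolerance * recipient["weight"]
            return abo_ok and weight_ok

        def allocate(donors, waiting_list):
            """Offer each donor heart to the first compatible recipient on the (time-ordered) waiting list."""
            transplanted = []
            remaining = list(waiting_list)
            for donor in donors:
                for recipient in remaining:
                    if compatible(donor, recipient):
                        transplanted.append(recipient["id"])
                        remaining.remove(recipient)
                        break
            return transplanted

        random.seed(1)
        groups = ["O", "A", "B", "AB"]
        waiting = [{"id": i, "blood": random.choice(groups), "weight": random.uniform(5, 40)} for i in range(20)]
        donors = [{"blood": random.choice(groups), "weight": random.uniform(5, 40)} for _ in range(10)]
        print("Transplanted recipients:", allocate(donors, waiting))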

    Transfer of congenital heart patients from paediatric to adult services in England

    OBJECTIVE: This study assessed the transfer of patients from paediatric cardiac to adult congenital heart disease (ACHD) services in England and the factors impacting on this process. METHODS: This retrospective cohort study used a population-based linked data set (LAUNCHES QI data set: 'Linking Audit and National datasets in Congenital Heart Services for Quality Improvement') including all patients born between 1987 and 2000, recorded as having a congenital heart disease (CHD) procedure in childhood. Hospital Episode Statistics data identified transfer from paediatric to ACHD services between the ages of 16 and 22 years. RESULTS: Overall, 63.8% of a cohort of 10 298 patients transferred by their 22nd birthday. The estimated probability of transfer by age 22 was 96.5% (95% CI 95.3 to 97.7), 86.7% (95% CI 85.6 to 87.9) and 41.0% (95% CI 39.4 to 42.6) for severe, moderate and mild CHD, respectively. 166 patients (1.6%) died between 16 and 22 years; 42 of these (0.4%) died after age 16 but prior to transfer. Multivariable ORs in the moderate and severe CHD groups up to age 20 showed a significantly lower likelihood of transfer among female patients (0.87, 95% CI 0.78 to 0.97), those with missing ethnicity data (0.31, 95% CI 0.18 to 0.52), those from deprived areas (0.84, 95% CI 0.72 to 0.98) and those with moderate (compared with severe) CHD (0.30, 95% CI 0.26 to 0.35). The odds of transfer were lower for the horizontal compared with the vertical care model (0.44, 95% CI 0.27 to 0.72). Patients who did not transfer had a lower probability of a further National Congenital Heart Disease Audit procedure between ages 20 and 30 compared with those who did transfer: 12.3% (95% CI 5.1 to 19.6) vs 32.5% (95% CI 28.7 to 36.3). CONCLUSIONS: The majority of patients with moderate or severe CHD in England transfer to adult services. Patients who do not transfer undergo fewer elective CHD procedures over the following decade.
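    For readers unfamiliar with how adjusted odds ratios such as these are typically produced, the Python sketch below fits a multivariable logistic regression to synthetic patient-level data and exponentiates the coefficients to obtain odds ratios with 95% confidence intervals. The variable names, simulated data and effect sizes are illustrative assumptions and are not drawn from the LAUNCHES QI data set.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic patient-level data (illustrative only).
        rng = np.random.default_rng(42)
        n = 5_000
        df = pd.DataFrame({
            "female": rng.integers(0, 2, n),
            "deprived_area": rng.integers(0, 2, n),
            "moderate_chd": rng.integers(0, 2, n),  # 1 = moderate, 0 = severe
        })
        # Simulate a transfer outcome with lower odds for each factor.
        logit = 2.0 - 0.14 * df["female"] - 0.17 * df["deprived_area"] - 1.2 * df["moderate_chd"]
        df["transferred"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        # Multivariable logistic regression; exponentiated coefficients are adjusted odds ratios.
        X = sm.add_constant(df[["female", "deprived_area", "moderate_chd"]])
        fit = sm.Logit(df["transferred"], X).fit(disp=0)
        ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
        ors.columns = ["OR", "2.5%", "97.5%"]
        print(ors)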

    Machine learning for real-time aggregated prediction of hospital admission for emergency patients

    Machine learning for hospital operations is under-studied. We present a prediction pipeline that uses live electronic health records for patients in a UK teaching hospital's emergency department (ED) to generate short-term, probabilistic forecasts of emergency admissions. A set of XGBoost classifiers applied to 109,465 ED visits yielded AUROCs from 0.82 to 0.90, depending on elapsed visit time at the point of prediction. Patient-level probabilities of admission were aggregated to forecast the number of admissions among current ED patients and, incorporating patients yet to arrive, total emergency admissions within specified time windows. The pipeline gave a mean absolute error (MAE) of 4.0 admissions (mean percentage error of 17%) versus 6.5 (32%) for a benchmark metric. Models developed with 104,504 later visits during the COVID-19 pandemic gave AUROCs of 0.68-0.90 and an MAE of 4.2 (30%) versus a 4.9 (33%) benchmark. We discuss how we surmounted the challenges of designing and implementing models for real-time use, including temporal framing, data preparation, and changing operational conditions.
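    A key step in such a pipeline is turning patient-level admission probabilities into a probability distribution over the number of admissions. If the individual predictions are treated as independent Bernoulli trials, a standard way to do this is to convolve them, giving the Poisson-binomial distribution, as in the Python sketch below. The example probabilities are invented, and the paper's own aggregation step may differ in detail.

        import numpy as np

        def admission_count_distribution(probs):
            """Convolve independent Bernoulli probabilities into a distribution over the number of
            admissions (the Poisson-binomial distribution)."""
            dist = np.array([1.0])  # P(0 admissions) = 1 before any patient is considered
            for p in probs:
                dist = np.convolve(dist, [1 - p, p])
            return dist

        # Illustrative patient-level probabilities (e.g. classifier outputs for patients currently in the ED).
        probs = [0.05, 0.10, 0.30, 0.55, 0.70, 0.85, 0.90]
        dist = admission_count_distribution(probs)

        print(f"Expected admissions: {sum(probs):.2f}")
        print("P(k admissions for k = 0..7):", np.round(dist, 3))
        print("P(at least 4 admissions):", round(float(dist[4:].sum()), 3))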

    Arterial Switch for Transposition of the Great Arteries

    BACKGROUND: Reports of long-term mortality and reintervention after treatment of transposition of the great arteries with intact ventricular septum, although favorable, are mostly limited to single-center studies. Even less is known about hospital resource utilization (days at hospital) and the impact of treatment choices and timing on outcomes. OBJECTIVES: The purpose of this study was to describe survival, reintervention and hospital resource utilization after the arterial switch operation (ASO) in a national dataset. METHODS: Follow-up and life status data for all patients undergoing ASO between 2000 and 2017 in England and Wales were collected and explored using multivariable regressions and matching. RESULTS: A total of 1,772 patients were identified, with a median age at ASO of 9.5 days (IQR: 6.5-14.5 days). Mortality and cardiac reintervention at 10 years after ASO were 3.2% (95% CI: 2.5%-4.2%) and 10.7% (95% CI: 9.1%-12.2%), respectively. The median time spent in hospital during the ASO spell was 19 days (IQR: 14-24 days). Over the first year after the ASO, patients spent 7 days (IQR: 4-10 days) in hospital in total, decreasing to 1 outpatient day/year beyond the fifth year. In a subgroup with complete risk factor data (n = 652), age at ASO and balloon atrial septostomy (BAS) use were not associated with late mortality and reintervention, but cardiac or congenital comorbidities, low weight, and circulatory/renal support at ASO were. After matching for patient characteristics, BAS followed by ASO and ASO as first procedure, performed within the first 3 weeks of life, had comparable early and late outcomes, including hospital resource utilization. CONCLUSIONS: Mortality and hospital resource utilization are low, while reintervention remains relatively frequent. Early ASO and individualized use of BAS allow for flexibility in treatment choices and a focus on at-risk patients.
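    As an illustration of the matching step mentioned in the methods, the Python sketch below estimates a propensity score for "BAS followed by ASO" versus "ASO as first procedure" on synthetic data and performs greedy 1:1 nearest-neighbour matching within a caliper. The covariates, simulated treatment assignment and caliper value are assumptions for illustration only and do not reflect the study's actual matching procedure.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        # Synthetic cohort (illustrative only): 1 = BAS followed by ASO, 0 = ASO as first procedure.
        rng = np.random.default_rng(7)
        n = 2_000
        df = pd.DataFrame({
            "weight_kg": rng.normal(3.4, 0.5, n),
            "comorbidity": rng.integers(0, 2, n),
        })
        p_bas = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["comorbidity"] - 0.3 * (df["weight_kg"] - 3.4))))
        df["bas_first"] = rng.binomial(1, p_bas)

        # Propensity score: modelled probability of receiving BAS given baseline characteristics.
        ps_model = LogisticRegression().fit(df[["weight_kg", "comorbidity"]], df["bas_first"])
        df["ps"] = ps_model.predict_proba(df[["weight_kg", "comorbidity"]])[:, 1]

        # Greedy 1:1 nearest-neighbour matching on the propensity score within a caliper.
        caliper = 0.05
        treated = df[df["bas_first"] == 1]
        controls = df[df["bas_first"] == 0].copy()
        pairs = []
        for idx, row in treated.iterrows():
            if controls.empty:
                break
            diffs = (controls["ps"] - row["ps"]).abs()
            best = diffs.idxmin()
            if diffs[best] <= caliper:
                pairs.append((idx, best))
                controls = controls.drop(best)
        print(f"Matched {len(pairs)} of {len(treated)} treated patients")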