
    A Novel Chronic Disease Policy Model

    We develop a simulation tool to support policy decisions about healthcare for chronic diseases in defined populations. Incident disease cases are generated in silico from an age-sex-characterised general population using standard epidemiological approaches. A novel disease-treatment model then simulates continuous life courses for each patient using discrete event simulation. Ideally, the discrete event simulation model would be inferred from complete longitudinal healthcare data via a likelihood or Bayesian approach. Such data are seldom available for relevant populations, so an innovative approach to evidence synthesis is required. We propose a novel entropy-based approach to fitting survival densities. This method provides a fully flexible way to incorporate the available information, which can be derived from arbitrary sources. Discrete event simulation then takes place on the fitted model using a competing hazards framework. The output is then used to help evaluate the potential impacts of policy options for a given population. Comment: 24 pages, 13 figures, 11 tables
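The core simulation step described here, drawing each patient's next event under competing hazards and advancing a life course, can be sketched as follows. This is a minimal illustration, not the paper's model: it assumes constant exponential hazards with made-up rates and event names in place of the entropy-fitted survival densities.

```python
import random

def simulate_life_course(hazards, horizon, rng):
    """Simulate one patient's life course under competing exponential hazards.

    hazards: dict mapping event name -> constant hazard rate (per year);
    these constant rates are illustrative stand-ins for the fitted
    survival densities described in the abstract.
    Returns the ordered list of (time, event) pairs until death or horizon.
    """
    t, history = 0.0, []
    while t < horizon:
        # Sample a candidate time for each competing event; the earliest wins.
        draws = {e: rng.expovariate(r) for e, r in hazards.items()}
        event = min(draws, key=draws.get)
        t += draws[event]
        if t >= horizon:
            break
        history.append((t, event))
        if event == "death":  # death is terminal
            break
    return history

rng = random.Random(42)
course = simulate_life_course(
    {"complication": 0.10, "hospitalisation": 0.05, "death": 0.02},
    horizon=40.0, rng=rng)
```

Running many such in-silico patients and tallying their event histories under different policy scenarios is what allows population-level impacts to be compared.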

    Human activity recognition from inertial sensor time-series using batch normalized deep LSTM recurrent networks

    In recent years, machine learning methods for human activity recognition have proved very effective. These classify discriminative features generated from raw input sequences acquired from body-worn inertial sensors. However, this involves an explicit feature extraction stage from the raw data, and although human movements are encoded in a sequence of successive samples in time, most state-of-the-art machine learning methods do not exploit the temporal correlations between input data samples. In this paper we present a Long Short-Term Memory (LSTM) deep recurrent neural network for the classification of six daily-life activities from accelerometer and gyroscope data. Results show that our LSTM can process featureless raw input signals and achieves 92% average accuracy in a multi-class scenario. Further, we show that this accuracy can be achieved with almost four times fewer training epochs by using a batch normalization approach.
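The combination the abstract describes, batch-normalized raw inertial samples fed through LSTM gating, can be sketched in plain NumPy. This is a single forward time step with random illustrative weights, not the paper's trained network; the shapes (six channels for 3-axis accelerometer plus gyroscope) and all parameter values are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_norm(x, eps=1e-5):
    # Normalize each feature across the batch to zero mean, unit variance.
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step for a batch.

    x: (batch, n_in) raw sensor sample; h, c: (batch, n_hidden) states.
    W: (n_in, 4*n_hidden), U: (n_hidden, 4*n_hidden), b: (4*n_hidden,).
    Gate layout: [input, forget, cell candidate, output].
    """
    z = x @ W + h @ U + b
    n = h.shape[1]
    i, f = sigmoid(z[:, :n]), sigmoid(z[:, n:2*n])
    g, o = np.tanh(z[:, 2*n:3*n]), sigmoid(z[:, 3*n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
batch, n_in, n_hid, T = 8, 6, 16, 20  # 6 channels: 3-axis accel + gyro
W = rng.normal(0, 0.1, (n_in, 4 * n_hid))
U = rng.normal(0, 0.1, (n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros((batch, n_hid))
c = np.zeros((batch, n_hid))
seq = rng.normal(size=(T, batch, n_in))  # featureless raw input windows
for t in range(T):
    h, c = lstm_step(batch_norm(seq[t]), h, c, W, U, b)
```

In a real classifier the final hidden state `h` would feed a softmax layer over the six activity classes, and the normalization statistics would be learned rather than recomputed per batch at inference time.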

    Outcome-sensitive multiple imputation: a simulation study.

    BACKGROUND: Multiple imputation is frequently used to deal with missing data in healthcare research. Although it is known that the outcome should be included in the imputation model when imputing missing covariate values, it is not known whether the outcome itself should be imputed. Similarly, no clear recommendations exist on: the utility of incorporating a secondary outcome, if available, in the imputation model; the level of protection offered when data are missing not at random; and the implications of dataset size and missingness levels. METHODS: We used realistic assumptions to generate thousands of datasets across a broad spectrum of contexts: three mechanisms of missingness (completely at random; at random; not at random); varying extents of missingness (20-80% missing data); and different sample sizes (1,000 or 10,000 cases). For each context we quantified the performance of a complete case analysis and seven multiple imputation methods which deleted cases with missing outcome before imputation, after imputation or not at all; included or did not include the outcome in the imputation models; and included or did not include a secondary outcome in the imputation models. Methods were compared on mean absolute error, bias, coverage and power over 1,000 datasets for each scenario. RESULTS: Overall, there was very little to separate multiple imputation methods which included the outcome in the imputation model. Even when missingness was quite extensive, all multiple imputation approaches performed well. Incorporating a secondary outcome, moderately correlated with the outcome of interest, made very little difference. The dataset size and the extent of missingness affected performance, as expected. Multiple imputation methods protected less well against missingness not at random, but did offer some protection. CONCLUSIONS: As long as the outcome is included in the imputation model, there are very small performance differences between the possible multiple imputation approaches: no outcome imputation, imputation, or imputation and deletion. All informative covariates, even with very high levels of missingness, should be included in the multiple imputation model. Multiple imputation offers some protection against a simple missing not at random mechanism.
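The key recommendation, include the outcome in the imputation model, can be illustrated with a toy multiple-imputation run. This is a minimal sketch under assumed parameters (a single covariate, a linear outcome model, stochastic regression imputation, and simple averaging of point estimates across imputations), not the simulation design of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 5  # cases and number of imputations (illustrative values)

# Assumed data-generating model: covariate x, outcome y = 2*x + noise.
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)

# Make roughly 40% of x missing completely at random.
miss = rng.random(n) < 0.4
x_obs = np.where(miss, np.nan, x)

def impute_from_outcome(x_obs, y, rng):
    """One stochastic-regression imputation of x using the outcome y.

    Fit x ~ y on complete cases, then draw imputed values from the
    fitted line plus residual noise -- the point being that the
    outcome belongs in the imputation model.
    """
    obs = ~np.isnan(x_obs)
    slope, intercept = np.polyfit(y[obs], x_obs[obs], 1)
    resid_sd = np.std(x_obs[obs] - (slope * y[obs] + intercept))
    x_imp = x_obs.copy()
    nmiss = int((~obs).sum())
    x_imp[~obs] = slope * y[~obs] + intercept + rng.normal(scale=resid_sd, size=nmiss)
    return x_imp

# Multiple imputation: analyse each completed dataset, then pool the
# point estimates by averaging (Rubin's rules for the point estimate).
betas = []
for _ in range(m):
    x_imp = impute_from_outcome(x_obs, y, rng)
    b, _ = np.polyfit(x_imp, y, 1)
    betas.append(b)
beta_pooled = float(np.mean(betas))  # close to the true slope of 2.0
```

Repeating this with the outcome left out of the imputation model attenuates the pooled slope toward zero, which is the bias the background section warns about.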

    Tilting the lasso by knowledge-based post-processing

    Background: It is useful to incorporate biological knowledge on the role of genetic determinants when predicting an outcome. It is, however, not always feasible to fully elicit this information when the number of determinants is large. We present an approach to overcome this difficulty. First, using half of the available data, a shortlist of potentially interesting determinants is generated. Second, binary indications of biological importance are elicited for this much smaller number of determinants. Third, an analysis is carried out on this shortlist using the second half of the data. Results: We show through simulations that, compared with the adaptive lasso, this approach leads to models containing more biologically relevant variables, while the prediction mean squared error (PMSE) is comparable or even reduced. We also apply our approach to bone mineral density data; again, the final models contain more biologically relevant variables and have reduced PMSEs. Conclusion: Our method leads to comparable or improved predictive performance and to models with greater face validity and interpretability, while making it feasible to incorporate biological knowledge into predictive models.
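One simple way to "tilt" a lasso toward elicited knowledge is to give each coefficient its own penalty weight, shrinking the penalty for variables flagged as biologically important. The sketch below implements this with a small coordinate-descent lasso in NumPy; it is a generic stand-in for the paper's procedure, and all data, weights, and tuning values are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, lam, weights, n_iter=200):
    """Coordinate-descent lasso with per-feature penalty weights.

    Minimises (1/2n)||y - Xb||^2 + lam * sum_j weights[j] * |b_j|.
    Setting weights[j] < 1 tilts the penalty in favour of variables
    flagged as biologically important -- a simple stand-in for the
    knowledge-based post-processing the abstract describes.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]          # remove j's current contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam * weights[j]) / col_ss[j]
            r = r - X[:, j] * b[j]          # add the updated one back
    return b

rng = np.random.default_rng(7)
n, p = 200, 10
X = rng.normal(size=(n, p))
true_b = np.zeros(p)
true_b[0] = 1.5                              # only feature 0 truly matters
y = X @ true_b + rng.normal(scale=0.5, size=n)

w = np.ones(p)
w[0] = 0.2                                   # elicited: feature 0 is biologically relevant
b_hat = weighted_lasso(X, y, lam=0.3, weights=w)
```

In the paper's two-stage scheme, the shortlist from the first half of the data determines which features are offered for elicitation, and the down-weighted fit runs on the second half.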

    Access and non–access site bleeding after percutaneous coronary intervention and risk of subsequent mortality and major adverse cardiovascular events:Systematic review and meta-analysis

    Background: Studies of the prognostic impact of site-specific major bleeding complications after percutaneous coronary intervention (PCI) have yielded conflicting data. The aim of this study is to provide an overview of site-specific major bleeding events in contemporary PCI and to study their impact on mortality and major adverse cardiovascular event outcomes. Methods and Results: We conducted a meta-analysis of PCI studies that evaluated site-specific periprocedural bleeding complications and their impact on major adverse cardiovascular events and mortality outcomes. A systematic search of MEDLINE and Embase was conducted to identify relevant studies, and random effects meta-analysis was used to estimate the risk of adverse outcomes associated with site-specific bleeding complications. Twenty-five relevant studies including 2,400,645 patients who underwent PCI were identified. Both non–access site (risk ratio [RR], 4.06; 95% confidence interval [CI], 3.21–5.14) and access site (RR, 1.71; 95% CI, 1.37–2.13) related bleeding complications were independently associated with an increased risk of periprocedural mortality. The prognostic impact of non–access site–related bleeding events on mortality depended on the anatomic source of the bleed: for example, gastrointestinal (RR, 2.78; 95% CI, 1.25–6.18), retroperitoneal (RR, 5.87; 95% CI, 1.63–21.12), and intracranial (RR, 22.71; 95% CI, 12.53–41.15). Conclusions: The prognostic impact of bleeding complications after PCI varies according to anatomic source and severity. Non–access site–related bleeding complications have a similar prevalence to those from the access site but are associated with a significantly worse prognosis, partly related to the severity of the bleed. Clinicians should minimize the risk of major bleeding complications during PCI through judicious use of bleeding avoidance strategies, irrespective of the access site used.
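The random effects pooling used in meta-analyses like this one (and the glycemic-variability review below) is commonly the DerSimonian–Laird estimator on log risk ratios. The sketch below shows that calculation with three hypothetical studies; the input RRs and standard errors are invented for illustration and are not the studies in this review.

```python
import math

def dersimonian_laird(log_rr, se):
    """Pool log risk ratios with a DerSimonian-Laird random-effects model.

    log_rr, se: per-study log risk ratios and their standard errors.
    Returns the pooled RR and its 95% CI.
    """
    w = [1.0 / s ** 2 for s in se]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
    df = len(log_rr) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_star = [1.0 / (s ** 2 + tau2) for s in se]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Three hypothetical studies reporting RRs of 1.5, 1.8 and 2.2.
rr, ci = dersimonian_laird(
    [math.log(1.5), math.log(1.8), math.log(2.2)],
    [0.20, 0.25, 0.30])
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, the between-study variance estimate is truncated at zero and the result coincides with a fixed-effect pool.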

    Long-term glycemic variability and risk of adverse outcomes: a systematic review and meta-analysis

    OBJECTIVE: Glycemic variability is emerging as a measure of glycemic control, which may be a reliable predictor of complications. This systematic review and meta-analysis evaluates the association between HbA1c variability and micro- and macrovascular complications and mortality in type 1 and type 2 diabetes. RESEARCH DESIGN AND METHODS: Medline and Embase were searched (2004–2015) for studies describing associations between HbA1c variability and adverse outcomes in patients with type 1 and type 2 diabetes. Data extraction was performed independently by two reviewers. Random-effects meta-analysis was performed with stratification according to the measure of HbA1c variability, the method of analysis, and diabetes type. RESULTS: Seven studies evaluated HbA1c variability among patients with type 1 diabetes and showed an association of HbA1c variability with renal disease (risk ratio 1.56 [95% CI 1.08–2.25], two studies), cardiovascular events (1.98 [1.39–2.82]), and retinopathy (2.11 [1.54–2.89]). Thirteen studies evaluated HbA1c variability among patients with type 2 diabetes. Higher HbA1c variability was associated with higher risk of renal disease (1.34 [1.15–1.57], two studies), macrovascular events (1.21 [1.06–1.38]), ulceration/gangrene (1.50 [1.06–2.12]), cardiovascular disease (1.27 [1.15–1.40]), and mortality (1.34 [1.18–1.53]). Most studies were retrospective and lacked adjustment for potential confounders, and the definition of HbA1c variability was inconsistent across studies. CONCLUSIONS: HbA1c variability was positively associated with micro- and macrovascular complications and mortality independently of the HbA1c level and might play a future role in clinical risk assessment.