
    A Novel Chronic Disease Policy Model

    We develop a simulation tool to support policy decisions about healthcare for chronic diseases in defined populations. Incident disease cases are generated in silico from an age- and sex-characterised general population using standard epidemiological approaches. A novel disease-treatment model then simulates continuous life courses for each patient using discrete event simulation. Ideally, the discrete event simulation model would be inferred from complete longitudinal healthcare data via a likelihood or Bayesian approach. Such data are seldom available for relevant populations, so an innovative approach to evidence synthesis is required. We propose a novel entropy-based approach to fitting survival densities. This method provides a fully flexible way to incorporate the available information, which can be derived from arbitrary sources. Discrete event simulation then takes place on the fitted model using a competing hazards framework. The output is used to help evaluate the potential impacts of policy options for a given population.
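
    The competing-hazards simulation step can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the event types, rate parameters and the `sample_time_to_event` function are hypothetical stand-ins for the entropy-fitted survival densities described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical time-to-event samplers, one per competing event type.
# In the paper these would be the entropy-fitted survival densities.
def sample_time_to_event(event, age, rng):
    rates = {"progression": 0.10, "hospitalisation": 0.25, "death": 0.02 + 0.001 * age}
    return rng.exponential(1.0 / rates[event])

def simulate_life_course(age, horizon=30.0, rng=rng):
    """Discrete event simulation of one patient's life course under competing hazards."""
    t, history = 0.0, []
    while t < horizon:
        # Draw a candidate time for each competing event; the earliest one occurs.
        draws = {e: sample_time_to_event(e, age + t, rng)
                 for e in ("progression", "hospitalisation", "death")}
        event, dt = min(draws.items(), key=lambda kv: kv[1])
        t += dt
        if t >= horizon:
            break
        history.append((round(t, 2), event))
        if event == "death":
            break
    return history

print(simulate_life_course(age=55))
```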

    Human activity recognition from inertial sensor time-series using batch normalized deep LSTM recurrent networks

    In recent years, machine learning methods for human activity recognition have proved very effective. These methods classify discriminative features generated from raw input sequences acquired from body-worn inertial sensors. However, this involves an explicit feature extraction stage from the raw data, and although human movements are encoded in a sequence of successive samples in time, most state-of-the-art machine learning methods do not exploit the temporal correlations between input data samples. In this paper we present a Long Short-Term Memory (LSTM) deep recurrent neural network for the classification of six daily life activities from accelerometer and gyroscope data. Results show that our LSTM can process featureless raw input signals and achieves 92% average accuracy in a multi-class scenario. Further, we show that this accuracy can be achieved with almost four times fewer training epochs by using a batch normalization approach.
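
    A minimal PyTorch sketch of a batch-normalised LSTM classifier over raw inertial windows is shown below. The layer sizes, window length (128 samples) and channel count (tri-axial accelerometer plus gyroscope) are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class HARLSTM(nn.Module):
    """LSTM classifier over raw inertial sensor windows, with batch normalisation."""
    def __init__(self, n_channels=6, hidden=64, n_classes=6):
        super().__init__()
        self.bn_in = nn.BatchNorm1d(n_channels)        # normalise raw sensor channels
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.bn_out = nn.BatchNorm1d(hidden)           # normalise the final hidden state
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                              # x: (batch, time, channels)
        x = self.bn_in(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(x)
        h = self.bn_out(out[:, -1, :])                 # use the last time step
        return self.fc(h)

# Dummy batch: 32 windows of 128 samples from 6 sensor channels.
logits = HARLSTM()(torch.randn(32, 128, 6))
print(logits.shape)  # torch.Size([32, 6])
```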

    A Disability Rights Approach to a Constitutional Right to Housing

    This article discusses the potential value of a Constitutional Right to Housing in Ireland for the realisation of independent living for disabled people. Article 19 of the United Nations Convention on the Rights of Persons with Disabilities promotes the right to choose where and with whom to live, the right to access supports to realise those choices, and the right to participate equally in society. Ireland's historic system of institutionalising disabled people is being slowly dismantled, but equally challenging is the lack of available, accessible, and adequate housing in the community. This article outlines how the live issues of defective construction materials, inaccessibility of housing, the invisibility of disabled people's homelessness, and ongoing institutionalisation are preventing full realisation of Article 19 UNCRPD. It suggests how embedding housing as a constitutional right would contribute to alleviating these rights violations.

    Outcome-sensitive multiple imputation: a simulation study.

    BACKGROUND: Multiple imputation is frequently used to deal with missing data in healthcare research. Although it is known that the outcome should be included in the imputation model when imputing missing covariate values, it is not known whether the outcome itself should be imputed. Similarly, no clear recommendations exist on the utility of incorporating a secondary outcome, if available, in the imputation model; the level of protection offered when data are missing not at random; or the implications of dataset size and missingness levels. METHODS: We used realistic assumptions to generate thousands of datasets across a broad spectrum of contexts: three missingness mechanisms (completely at random; at random; not at random); varying extents of missingness (20-80% missing data); and different sample sizes (1,000 or 10,000 cases). For each context we quantified the performance of a complete case analysis and seven multiple imputation methods, which deleted cases with a missing outcome before imputation, after imputation, or not at all; included or did not include the outcome in the imputation models; and included or did not include a secondary outcome in the imputation models. Methods were compared on mean absolute error, bias, coverage, and power over 1,000 datasets for each scenario. RESULTS: Overall, there was very little to separate the multiple imputation methods which included the outcome in the imputation model. Even when missingness was quite extensive, all multiple imputation approaches performed well. Incorporating a secondary outcome, moderately correlated with the outcome of interest, made very little difference. The dataset size and the extent of missingness affected performance, as expected. Multiple imputation methods protected less well against missingness not at random, but did offer some protection. CONCLUSIONS: As long as the outcome is included in the imputation model, there are very small performance differences between the possible multiple imputation approaches: no imputation of the outcome, imputation, or imputation followed by deletion. All informative covariates, even with very high levels of missingness, should be included in the multiple imputation model. Multiple imputation offers some protection against a simple missing not at random mechanism.
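
    The central recommendation, keeping the outcome in the imputation model when imputing missing covariates, can be sketched with scikit-learn's IterativeImputer. The simulated data, missingness mechanism and simplified pooling (averaging point estimates rather than full Rubin's rules) below are assumptions for illustration, not the study's design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 1_000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)   # fully observed outcome
x1[rng.random(n) < 0.4] = np.nan                      # 40% covariate missingness (MCAR)

df = pd.DataFrame({"x1": x1, "x2": x2, "y": y})

# Multiple imputation: the outcome y stays in the imputation model (it is a column
# of df), and estimates are combined across m imputed datasets (pooling simplified
# here to averaging the point estimates).
m, estimates = 10, []
for seed in range(m):
    imputed = IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(df)
    d = pd.DataFrame(imputed, columns=df.columns)
    fit = sm.OLS(d["y"], sm.add_constant(d[["x1", "x2"]])).fit()
    estimates.append(fit.params)

print(pd.concat(estimates, axis=1).mean(axis=1))
```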

    Informative observation in health data: Association of past level and trend with time to next measurement

    In routine health data, risk factors and biomarkers are typically measured irregularly in time, with the frequency of their measurement depending on a range of factors: for example, sicker patients are measured more often. This is termed informative observation. Failure to account for it in subsequent modelling can lead to bias. Here, we illustrate this issue using body mass index (BMI) measurements taken on patients with type 2 diabetes in Salford, UK. We modelled the observation process (time to next measurement) as a recurrent event Cox model, and studied whether previous BMI measurements, and trends in BMI, were associated with changes in the frequency of measurement. Interestingly, we found that increasing BMI led to a lower propensity for future measurements. More broadly, this illustrates the need and the opportunity to develop and apply models that account for, and exploit, informative observation.
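
    A model of the observation process like the one described could be fitted in counting-process (start-stop) form with lifelines' CoxTimeVaryingFitter. The toy data frame below, including the variable names `last_bmi` and `bmi_trend`, is an illustrative assumption and not the Salford dataset.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Each row is one inter-measurement interval for one patient, in start-stop
# (counting-process) form; event = 1 marks the occurrence of the next measurement.
# Covariates carry the previous BMI level and its recent trend.
df = pd.DataFrame({
    "id":        [1, 1, 1, 2, 2, 3, 3, 3],
    "start":     [0, 120, 260, 0, 200, 0, 90, 150],
    "stop":      [120, 260, 400, 200, 410, 90, 150, 300],
    "event":     [1, 1, 0, 1, 1, 1, 1, 0],
    "last_bmi":  [31.0, 32.5, 33.1, 27.4, 27.9, 36.2, 35.8, 35.1],
    "bmi_trend": [0.0, 1.5, 0.6, 0.0, 0.5, 0.0, -0.4, -0.7],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratios of last_bmi and bmi_trend on time to next measurement
```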

    Tilting the lasso by knowledge-based post-processing

    Background: It is useful to incorporate biological knowledge on the role of genetic determinants when predicting an outcome. It is, however, not always feasible to fully elicit this information when the number of determinants is large. We present an approach to overcome this difficulty. First, using half of the available data, a shortlist of potentially interesting determinants is generated. Second, binary indications of biological importance are elicited for this much smaller number of determinants. Third, an analysis is carried out on this shortlist using the second half of the data. Results: We show through simulations that, compared with the adaptive lasso, this approach leads to models containing more biologically relevant variables, while the prediction mean squared error (PMSE) is comparable or even reduced. We also apply our approach to bone mineral density data, and again the final models contain more biologically relevant variables and have reduced PMSEs. Conclusion: Our method leads to comparable or improved predictive performance, and to models with greater face validity and interpretability, while allowing feasible incorporation of biological knowledge into predictive models.
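
    The tilting idea, a smaller penalty for determinants flagged as biologically important, can be sketched by rescaling features before an ordinary lasso fit, which is equivalent to per-feature penalty weights. The synthetic data, the 0/1 importance indicator and the weight value 0.5 below are placeholders, not the paper's method or settings.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 1.0                 # first five determinants truly active
y = X @ beta + rng.normal(size=n)

# Split: half the data to build the shortlist, half for the final tilted fit.
X1, y1, X2, y2 = X[:100], y[:100], X[100:], y[100:]

shortlist = np.flatnonzero(LassoCV(cv=5).fit(X1, y1).coef_ != 0)

# Elicited biological importance for the shortlist (1 = important), here made up.
important = rng.random(len(shortlist)) < 0.5
weights = np.where(important, 0.5, 1.0)            # smaller weight = smaller penalty

# Tilt the lasso by rescaling shortlisted features: penalising coefficient j by
# weight w_j is equivalent to dividing column j by w_j before an ordinary lasso fit.
X2_tilted = X2[:, shortlist] / weights
fit = LassoCV(cv=5).fit(X2_tilted, y2)
coef_original_scale = fit.coef_ / weights          # map back to the original scale
print(dict(zip(shortlist.tolist(), np.round(coef_original_scale, 3))))
```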

    A review of statistical updating methods for clinical prediction models

    A clinical prediction model (CPM) is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new CPM for each population and context; however, this wastes potentially useful historical information. A better approach is to update, or incorporate, existing CPMs already developed for use in similar contexts or populations. In addition, CPMs commonly become miscalibrated over time and need replacing or updating. In this paper we review a range of approaches for re-using and updating CPMs; these fall into three main categories: simple coefficient updating; combining multiple previous CPMs in a meta-model; and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the UK. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing CPMs for a new population or context, drawing on a breadth of complementary statistical methods, and these should be used rather than developing a new CPM from scratch.
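
    The simplest of the three categories, coefficient updating, can be illustrated by logistic recalibration: refitting only an intercept and a calibration slope on the existing CPM's linear predictor in the new data. The existing coefficients and the simulated new-population data below are hypothetical, not the cardiac surgery models from the review.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2_000

# Hypothetical existing CPM (logistic regression) developed on an old population.
old_intercept, old_coefs = -2.0, np.array([0.8, 0.5])

# Simulated new-population data, where the existing model is miscalibrated.
X_new = rng.normal(size=(n, 2))
true_lp = -1.2 + X_new @ np.array([1.0, 0.4])
y_new = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))

# Linear predictor of the existing CPM evaluated on the new data.
lp_old = old_intercept + X_new @ old_coefs

# Logistic recalibration: update only the intercept and the calibration slope.
recal = sm.Logit(y_new, sm.add_constant(lp_old)).fit(disp=0)
alpha, beta = recal.params
print(f"updated intercept {alpha:.2f}, calibration slope {beta:.2f}")

# Recalibrated risk for a patient is then expit(alpha + beta * lp_old).
```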