
    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms has been developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated in a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
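As a generic illustration of the EM idea underpinning the segmentation step, the sketch below fits a two-class Gaussian mixture to 1-D voxel intensities. It is a toy version under assumed synthetic data, not the thesis pipeline: the method described above additionally corrects mislabelled partial volume voxels, which this sketch omits.

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns per-voxel class responsibilities and the class means.
    """
    x = np.asarray(intensities, dtype=float)
    # crude initialisation: one component either side of the global mean
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each class for every voxel
        lik = np.stack(
            [pi[k] * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
             / np.sqrt(2 * np.pi * var[k]) for k in range(2)],
            axis=1,
        )
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances and mixing weights
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / x.size
    return resp, mu

# synthetic "grey matter" / "white matter" intensity samples (illustrative)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(60, 5, 500), rng.normal(110, 8, 500)])
resp, mu = em_two_class(x)
labels = resp.argmax(axis=1)   # hard segmentation from soft responsibilities
```

A partial-volume-aware variant would replace the hard two-class model in the E-step with explicit mixture classes for voxels containing both tissues.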

    Piecing the puzzle together : enhancing the quality of road trauma surveillance through linkage of police and health data

    This program of research linked police and health data collections to investigate the potential benefits for road safety in terms of enhancing data quality. This research has important implications for road safety because, although police-collected data has historically underpinned efforts in the area, it is known that many road crashes are not reported to police and that these data lack specific injury severity information. This research shows that data linkage provides a more accurate quantification of the severity and prevalence of road crash injuries, which is essential for prioritising funding, targeting interventions, and estimating the burden and cost of road trauma.

    Examining the epidemiology of tuberculosis in migrants to the UK to inform evidence-based screening policies

    Background: In high-income countries an increasing proportion of all tuberculosis cases are detected in migrants. Understanding the epidemiology of tuberculosis in migrants to inform evidence-based screening policies is a priority. Methods: A systematic review and meta-analysis of pre-entry screening for tuberculosis was undertaken (chapter 2). Data from a pilot pre-entry programme in migrants to the UK were described, and risk factors for prevalent cases examined (chapter 3). The accuracy of a novel method for identifying individuals between two datasets was studied (chapter 4). This linkage method was used to combine data from migrants screened pre-entry with the UK tuberculosis register, including molecular strain typing data. The linked datasets enabled estimates of the incidence of tuberculosis to be calculated, and risk factors to be identified (chapters 5 and 6). Results: The systematic review identified 15 studies and found that culture confirmation increased with WHO-estimated prevalence in the country of origin. The crude prevalence of bacteriologically confirmed tuberculosis identified by UK pre-entry screening was 92 per 100,000 population screened. Migrants reporting a history of contact with a case of tuberculosis, and those from higher-prevalence countries, were at greatest risk. Compared to a gold standard of NHS number, probabilistic linkage identified individuals in two datasets with high sensitivity and specificity. The estimated incidence of tuberculosis notified in the UK in migrants screened pre-entry was 194 per 100,000 person-years at risk. Migrants with a chest radiograph classified as suspected tuberculosis and those from higher-prevalence countries had a higher risk post-migration. Compared to other non-UK-born individuals, migrants screened pre-entry were less likely to be the first case in a cluster of tuberculosis.
Conclusions: This thesis generated new knowledge that improves our understanding of the epidemiology of tuberculosis in migrants to the UK. Based on these findings, evidence-based screening recommendations were made.
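Probabilistic linkage of the kind evaluated against the NHS-number gold standard is commonly done in the Fellegi-Sunter style: each field comparison contributes a log-likelihood-ratio weight, and record pairs scoring above a threshold are declared matches. The sketch below is purely illustrative; the field names and m/u probabilities are assumptions, not the values used in the thesis.

```python
import math

# illustrative m (agreement probability among true matches) and
# u (agreement probability among non-matches) for each field
FIELDS = {
    "surname":    (0.95, 0.01),
    "birth_year": (0.98, 0.05),
    "sex":        (0.99, 0.50),
}

def match_weight(rec_a, rec_b):
    """Sum of log2 likelihood ratios over agreeing/disagreeing fields."""
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a[field] == rec_b[field]:
            w += math.log2(m / u)            # agreement: evidence for a match
        else:
            w += math.log2((1 - m) / (1 - u))  # disagreement: evidence against
    return w

a = {"surname": "Khan", "birth_year": 1985, "sex": "M"}
b = {"surname": "Khan", "birth_year": 1985, "sex": "M"}
c = {"surname": "Smith", "birth_year": 1990, "sex": "M"}

assert match_weight(a, b) > 5    # full agreement: strong evidence of a match
assert match_weight(a, c) < 0    # mostly disagreeing: evidence against
```

Note how a field that agrees often by chance (sex, u = 0.5) contributes far less weight than a discriminating field such as surname.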

    NIOSH practices in occupational risk assessment

    "Exposure to on-the-job health hazards is a problem faced by workers worldwide. Unlike safety hazards that may lead to injury, health hazards can lead to various types of illness. For example, exposures to some chemicals used in work processes may cause immediate sensory irritation (e.g., stinging or burning eyes, dry throat, cough); in other cases, workplace chemicals may cause cancer in workers many years after exposure. There are millions of U.S. workers exposed to chemicals in their work each year. In order to make recommendations for working safely in the presence of chemical hazards, the National Institute for Occupational Safety and Health (NIOSH) conducts risk assessments. In simple terms, risk assessment is a way of relating a hazard, like a toxic chemical in the air, to potential health risks associated with exposure to that hazard. Risk assessment allows NIOSH to make recommendations for controlling exposures in the workplace to reduce health risks. This document describes the process and logic NIOSH uses to conduct risk assessments, including the following steps: 1) Determining what type of hazard is associated with a chemical or other agent; 2) Collating the scientific evidence indicating whether the chemical or other agent causes illness or injury; 3) Evaluating the scientific data and determining how much exposure to the chemical or other agent would be harmful to workers; and 4) Carefully considering all relevant evidence to make the best, scientifically supported decisions. NIOSH researchers publish risk assessments in peer-reviewed scientific journals and in NIOSH-numbered documents. NIOSH-numbered publications also provide recommendations aimed to improve worker safety and health that stem from risk assessment." NIOSHTIC-2NIOSHTIC no. 20058767Suggested citation: NIOSH [2019]. Current intelligence bulletin 69: NIOSH practices in occupational risk assessment. 
By Daniels RD, Gilbert SJ, Kuppusamy SP, Kuempel ED, Park RM, Pandalai SP, Smith RJ, Wheeler MW, Whittaker C, Schulte PA. Cincinnati, OH: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health. DHHS (NIOSH) Publication No. 2020-106, https://doi.org/10.26616/NIOSHPUB20201062020-106.pdf?id=10.26616/NIOSHPUB2020106202010.26616/NIOSHPUB2020106728

    Population-Adjusted Indirect Treatment Comparisons with Limited Access to Patient-Level Data

    Health technology assessment systems base their decision-making on health-economic evaluations. These require accurate relative treatment effect estimates for specific patient populations. In an ideal scenario, a head-to-head randomized controlled trial, directly comparing the interventions of interest, would be available. Indirect treatment comparisons are necessary to contrast treatments which have not been analyzed in the same trial. Population-adjusted indirect comparisons estimate treatment effects where there are: no head-to-head trials between the interventions of interest, limited access to patient-level data, and cross-trial differences in effect measure modifiers. Health technology assessment agencies are increasingly accepting evaluations that use these methods across a diverse range of therapeutic areas. Popular approaches include matching-adjusted indirect comparison (MAIC), based on propensity score weighting, and simulated treatment comparison (STC), based on outcome regression. There is limited formal evaluation of these methods and whether they can be used to accurately compare treatments. Thus, I undertake a review and a simulation study that compares the standard unadjusted indirect comparison, MAIC, and STC across 162 scenarios. This simulation study assumes that the trials are investigating survival outcomes and measure continuous covariates, with the log hazard ratio as the measure of effect, one of the most widely used setups in health technology assessment applications. MAIC yields unbiased treatment effect estimates under no failures of assumptions. The typical usage of STC produces bias because it targets a conditional treatment effect when the required estimand is a marginal treatment effect. The incompatibility of estimates in the indirect comparison leads to bias as the measure of effect is non-collapsible.
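The core of MAIC is a method-of-moments weighting step: weights of the form exp(a'x) are estimated so that the reweighted covariate means in the patient-level trial equal the published aggregate means of the comparator trial. The sketch below solves this with a hand-rolled Newton iteration on synthetic data; it is an illustration of the weighting idea, not the simulation-study code.

```python
import numpy as np

def maic_weights(X_ipd, agg_means, n_iter=25):
    """Method-of-moments MAIC weights via Newton's method.

    Finds a such that weights w = exp((X - agg_means) @ a) give the
    IPD trial the same weighted covariate means as the aggregate trial.
    """
    Xc = X_ipd - agg_means                    # centre on the target means
    a = np.zeros(Xc.shape[1])
    for _ in range(n_iter):
        w = np.exp(Xc @ a)
        grad = Xc.T @ w                       # gradient of sum(exp(Xc @ a))
        hess = (Xc * w[:, None]).T @ Xc       # Hessian (positive definite)
        a -= np.linalg.solve(hess, grad)      # Newton step on convex objective
    return np.exp(Xc @ a)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                 # IPD covariates (illustrative)
target = np.array([0.3, -0.2])                # published aggregate means
w = maic_weights(X, target)
balanced = (w[:, None] * X).sum(axis=0) / w.sum()
# balanced now matches target: the reweighted IPD population resembles
# the comparator trial on these covariate means
```

When covariate overlap is poor, a few observations dominate the weights and the effective sample size collapses, which is the precision problem noted in the comparison against the marginalized regression approaches.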
When adjusting for covariates, one must integrate or average the conditional model over the population of interest to recover a compatible marginal treatment effect. I propose a marginalization method based on parametric G-computation that can be easily applied where the outcome regression is a generalized linear model or a Cox model. In addition, I introduce a novel general-purpose method based on the ideas underlying multiple imputation, which is termed multiple imputation marginalization (MIM) and is applicable to a wide range of models, including parametric survival models. The approaches view the covariate adjustment regression as a nuisance model and separate its estimation from the evaluation of the marginal treatment effect of interest. Both methods can accommodate a Bayesian statistical framework, which integrates the analysis naturally into the probabilistic analyses typically required for health technology assessment. Another simulation study provides proof-of-principle for the methods and benchmarks their performance against MAIC and the conventional STC. The simulations are based on scenarios with binary outcomes and continuous covariates, with the log-odds ratio as the measure of effect. The marginalized outcome regression approaches achieve more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor, and yield unbiased marginal treatment effect estimates under no failures of assumptions. Furthermore, regression-adjusted estimates of the marginal effect provide greater precision and accuracy than the conditional estimates produced by the conventional STC, which are systematically biased because the log-odds ratio is a non-collapsible measure of effect. The marginalization methods outlined in this thesis are necessary and important for health technology assessment more generally, because marginal treatment effects should be the preferred inferential target for reimbursement decisions at the population level.
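Parametric G-computation in the binary-outcome setting can be sketched as: fit the outcome regression, predict every subject's risk under each treatment, average the predictions, and contrast the averaged risks on the odds-ratio scale. The toy example below (synthetic data, hand-rolled logistic fit; illustrative rather than the thesis implementation) also exhibits the non-collapsibility discussed above: the marginal log-odds ratio is attenuated relative to the conditional coefficient even under randomised treatment.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Logistic regression fitted by Newton-Raphson; returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = p * (1.0 - p)                       # IRLS weights
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return beta

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=n)                          # prognostic covariate
t = rng.integers(0, 2, size=n)                  # randomised treatment arm
true_logit = -0.5 + 1.0 * t + 1.2 * x           # true conditional log-OR = 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

design = np.column_stack([np.ones(n), t, x])
beta = fit_logistic(design, y)

# G-computation: predict everyone's outcome under each treatment, then
# average (marginalise) before contrasting on the odds-ratio scale
def mean_risk(treat):
    Xt = np.column_stack([np.ones(n), np.full(n, float(treat)), x])
    return (1.0 / (1.0 + np.exp(-(Xt @ beta)))).mean()

p1, p0 = mean_risk(1), mean_risk(0)
marginal_log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
# non-collapsibility: marginal_log_or is smaller than the conditional
# coefficient beta[1], despite there being no confounding
```

Averaging on the probability scale before forming the contrast is exactly the step that the conventional STC omits, which is why its conditional estimate is incompatible with the marginal estimand.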
Treatment effectiveness inputs in health economic models are often informed by the treatment coefficient of a multivariable regression. An often-overlooked issue is that this coefficient has a conditional interpretation, and it must be marginalized over the target population of interest to produce a relevant estimate for reimbursement decisions at the population level.