
    Basal Insulin Regimens for Adults with Type 1 Diabetes Mellitus: A Cost-Utility Analysis

    Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

    OBJECTIVES: To assess the cost-effectiveness of basal insulin regimens for adults with type 1 diabetes mellitus in England. METHODS: A cost-utility analysis was conducted in accordance with the National Institute for Health and Care Excellence reference case. The UK National Health Service and personal and social services perspective was used and a 3.5% discount rate was applied for both costs and outcomes. Relative effectiveness estimates were based on a systematic review of published trials and a Bayesian network meta-analysis. The IMS CORE Diabetes Model was used, in which net monetary benefit (NMB) was calculated using a threshold of £20,000 per quality-adjusted life-year (QALY) gained. A wide range of sensitivity analyses was conducted. RESULTS: Insulin detemir (twice daily) [iDet (bid)] had the highest mean QALY gain (11.09 QALYs) and NMB (£181,456) per patient over the model time horizon. Compared with the lowest-cost strategy (insulin neutral protamine Hagedorn once daily), it had an incremental cost-effectiveness ratio of £7844/QALY gained. Insulin glargine (od) [iGlarg (od)] and iDet (od) were ranked second and third, with NMBs of £180,893 and £180,423, respectively. iDet (bid) remained the most cost-effective treatment in all the sensitivity analyses performed except when high doses were assumed (>30% increment compared with other regimens), where iGlarg (od) ranked first. CONCLUSIONS: iDet (bid) is the most cost-effective regimen, providing the highest QALY gain and NMB. iGlarg (od) and iDet (od) are possible options for those for whom the iDet (bid) regimen is not acceptable or does not achieve required glycemic control.

    Peer reviewed.
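
The two summary measures in this abstract combine in a simple way: NMB = QALYs × threshold − cost, and the ICER is the incremental cost per incremental QALY. A minimal sketch, using the £20,000/QALY threshold and the iDet (bid) figures reported above; the incremental cost and QALY values in the final call are hypothetical, chosen only to reproduce the reported £7844/QALY ratio:

```python
THRESHOLD = 20_000  # £ per QALY gained (NICE reference case)

def nmb(qalys: float, cost: float, threshold: float = THRESHOLD) -> float:
    """Net monetary benefit = QALYs x willingness-to-pay threshold - total cost."""
    return qalys * threshold - cost

def icer(delta_cost: float, delta_qalys: float) -> float:
    """Incremental cost-effectiveness ratio = incremental cost / incremental QALYs."""
    return delta_cost / delta_qalys

# The lifetime cost implied by the reported NMB (£181,456) and QALY gain (11.09):
implied_cost = 11.09 * THRESHOLD - 181_456
print(f"implied lifetime cost: £{implied_cost:,.0f}")

# A regimen is cost-effective versus a comparator when its ICER falls below
# the threshold (hypothetical incremental values):
print(icer(delta_cost=3_922, delta_qalys=0.5))  # 7844.0 £/QALY, below £20,000
```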

    Security and confidentiality approach for the Clinical E-Science Framework (CLEF)

    CLEF is an MRC-sponsored project in the E-Science programme that aims to establish policies and infrastructure for the next generation of integrated clinical and bioscience research. One of the major goals of the project is to provide a pseudonymised repository of histories of cancer patients that can be accessed by researchers. Robust mechanisms and policies are needed to ensure that patient privacy and confidentiality are preserved while delivering a repository of such medically rich information for the purposes of scientific research. This paper summarises the overall approach adopted by CLEF to meet data protection requirements, including the data flows and pseudonymisation mechanisms that are currently being developed. Intended constraints and monitoring policies that will apply to research interrogation of the repository are also outlined. Once evaluated, it is hoped that the CLEF approach can serve as a model for other distributed electronic health record repositories to be accessed for research.
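
The abstract does not describe CLEF's specific pseudonymisation mechanism, but a common building block for this kind of repository is keyed hashing: the same patient identifier always maps to the same pseudonym (so records remain linkable), while the mapping cannot be reversed or recomputed without a secret key held by a trusted party. A minimal sketch, not CLEF's actual design; the identifier value and key are made up:

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable, irreversible pseudonym from a patient identifier.

    HMAC-SHA256 keyed with a secret: deterministic for record linkage,
    but unrecoverable and unverifiable without access to the key.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated here for readability

key = b"held-by-trusted-third-party"
p1 = pseudonymise("943 476 5919", key)
p2 = pseudonymise("943 476 5919", key)
assert p1 == p2                                    # same patient, same pseudonym
assert p1 != pseudonymise("943 476 5919", b"oth")  # pseudonyms are key-dependent
```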

    Promoting Health for Chronic Conditions: a Novel Approach that integrates Clinical and Personal Decision Support

    Direct and indirect economic costs related to chronic diseases are increasing in Europe due to the aging of the population. One of the most challenging goals is to improve the quality of life of patients affected by chronic conditions and enhance their self-management. In this paper, we propose a novel architecture for a scalable solution, based on mobile tools, aimed at keeping patients with chronic diseases away from acute episodes, improving their quality of life and, consequently, reducing their economic impact. Our solution aims to provide patients with a personalized tool for improving self-management, and it supports both patients and clinicians in decision-making through the implementation of two different Decision Support Systems. Moreover, the proposed architecture takes into account interoperability: compliance with data transfer protocols (e.g., BT4/LE, ANT+, ISO/IEEE 11073) ensures integration with existing devices, while semantic web approaches and standards for the content and structure of the information (e.g., HL7, ICD-10 and openEHR) ensure correct sharing of information with hospital information systems and classification of patient behaviors (Coelition). The solution will be implemented and validated in a future study.

    The influence of patient's age on clinical decision-making about coronary heart disease in the USA and the UK

    This paper examines UK and US primary care doctors' decision-making about older (aged 75 years) and midlife (aged 55 years) patients presenting with coronary heart disease (CHD). Using an analytic approach based on conceptualising clinical decision-making as a classification process, it explores the ways in which doctors' cognitive processes contribute to ageism in health-care at three key decision points during consultations. In each country, 56 randomly selected doctors were shown videotaped vignettes of actors portraying patients with CHD. The patients' ages (55 or 75 years), gender, ethnicity and social class were varied systematically. During the interviews, doctors gave free-recall accounts of their decision-making. The results do not establish that there was substantial ageism in the doctors' decisions, but rather suggest that diagnostic processes pay insufficient attention to the significance of older patients' age and its association with the likelihood of co-morbidity and atypical disease presentations. The doctors also demonstrated more limited use of ‘knowledge structures’ when diagnosing older than midlife patients. With respect to interventions, differences in the national health-care systems rather than patients' age accounted for the differences in doctors' decisions. US doctors were significantly more concerned about the potential for adverse outcomes if important diagnoses were untreated, while UK general practitioners cited greater difficulty in accessing diagnostic tests.

    Designing privacy for scalable electronic healthcare linkage

    A unified electronic health record (EHR) has potentially immeasurable benefits to society, and the current healthcare industry drive to create a single EHR reflects this. However, adoption is slow due to two major factors: the disparate nature of the data and storage facilities of current healthcare systems, and the security ramifications of accessing and using that data, together with concerns about its potential misuse. To address these issues, this paper presents the VANGUARD (Virtual ANonymisation Grid for Unified Access of Remote Data) system, which supports adaptive security-oriented linkage of disparate clinical data-sets to support a variety of virtual EHRs, avoiding the need for a single schematic standard while addressing the natural concerns of data owners and other stakeholders about data access and usage. VANGUARD has been designed explicitly with security in mind and supports clear delineation of roles for data linkage and usage.

    Highdicom: A Python library for standardized encoding of image annotations and machine learning model outputs in pathology and radiology

    Machine learning is revolutionizing image-based diagnostics in pathology and radiology. ML models have shown promising results in research settings, but their lack of interoperability has been a major barrier to clinical integration and evaluation. The DICOM standard specifies Information Object Definitions and Services for the representation and communication of digital images and related information, including image-derived annotations and analysis results. However, the complexity of the standard represents an obstacle to its adoption in the ML community and creates a need for software libraries and tools that simplify working with data sets in DICOM format. Here we present the highdicom library, which provides a high-level application programming interface for the Python programming language that abstracts low-level details of the standard and enables encoding and decoding of image-derived information in DICOM format in a few lines of Python code. The highdicom library ties into the extensive Python ecosystem for image processing and machine learning. Simultaneously, by simplifying creation and parsing of DICOM-compliant files, highdicom achieves interoperability with the medical imaging systems that hold the data used to train and run ML models, and that ultimately communicate and store model outputs for clinical use. We demonstrate, through experiments with slide microscopy and computed tomography imaging, that by bridging these two ecosystems, highdicom enables developers to train and evaluate state-of-the-art ML models in pathology and radiology while remaining compliant with the DICOM standard and interoperable with clinical systems at all stages. To promote standardization of ML research and streamline the ML model development and deployment process, we have made the library freely available as open source.

    Creating longitudinal datasets and cleaning existing data identifiers in a cystic fibrosis registry using a novel Bayesian probabilistic approach from astronomy

    Patient registry data are commonly collected as annual snapshots that need to be amalgamated to understand the longitudinal progress of each patient. However, patient identifiers can either change or may not be available for legal reasons when longitudinal data are collated from patients living in different countries. Here, we apply astronomical statistical matching techniques to link individual patient records, which can be used where identifiers are absent or to validate uncertain identifiers. We adopt a Bayesian model framework used for probabilistically linking records in astronomy. We adapt this and validate it across blinded, annually collected data: a high-quality (Danish) subset of the data held in the European Cystic Fibrosis Society Patient Registry (ECFSPR). Our initial experiments achieved a precision of 0.990 at a recall value of 0.987. However, detailed investigation of the discrepancies uncovered typing errors in 27 of the identifiers in the original Danish subset. After fixing these errors to create a new gold standard, our algorithm correctly linked individual records across years, achieving a precision of 0.997 at a recall value of 0.987 without recourse to identifiers. Our Bayesian framework provides the probability that a pair of records belongs to the same patient. Unlike other record linkage approaches, our algorithm can also use physical models, such as body mass index curves, as prior information for record linkage. We have shown our framework can create longitudinal samples where none existed and validate pre-existing patient identifiers. We have demonstrated that in this specific case this automated approach is better than the existing identifiers.
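
The abstract does not give the model details, but the core idea of probabilistic record linkage can be sketched with Bayes' rule: the posterior odds that two records belong to the same patient are the prior odds multiplied by a likelihood ratio for each compared field. A minimal sketch assuming conditionally independent fields; the field names, m-/u-probabilities and prior below are illustrative, not values from the ECFSPR study:

```python
# Per-field probabilities: m = P(field agrees | same patient),
# u = P(field agrees | different patients). Illustrative values only;
# in practice these would be estimated from the registry data.
FIELDS = {
    "sex":        {"m": 0.99, "u": 0.50},
    "birth_year": {"m": 0.98, "u": 0.02},
    "clinic":     {"m": 0.90, "u": 0.10},
}

def match_probability(agreements: dict, prior: float) -> float:
    """Posterior P(same patient) via Bayes' rule over independent fields."""
    odds = prior / (1 - prior)  # prior odds of a match
    for field, agree in agreements.items():
        m, u = FIELDS[field]["m"], FIELDS[field]["u"]
        odds *= (m / u) if agree else ((1 - m) / (1 - u))
    return odds / (1 + odds)

# Two annual records agreeing on all three fields, with a 1-in-500 prior
# that an arbitrary record pair belongs to the same patient:
p = match_probability({"sex": True, "birth_year": True, "clinic": True},
                      prior=1 / 500)
print(round(p, 3))  # → 0.636
```

Because the output is a probability rather than a hard decision, borderline pairs can be flagged for review, and informative priors (such as the body-mass-index curves mentioned above) can replace the flat prior used here.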

    Big Data in the Health Sector


    Attacker Modelling in Ubiquitous Computing Systems
