
    Encrypted statistical machine learning: new privacy preserving methods

    We present two new statistical machine learning methods designed to learn on data encrypted under fully homomorphic encryption (FHE). The introduction of FHE schemes following Gentry (2009) opens up the prospect of privacy-preserving statistical machine learning analysis and modelling of encrypted data without compromising security constraints. We propose tailored algorithms for applying extremely random forests, involving a new cryptographic stochastic fraction estimator, and naïve Bayes, involving a semi-parametric model for the class decision boundary, and show how they can be used to learn and predict from encrypted data. We demonstrate that these techniques perform competitively on a variety of classification data sets and provide detailed information about the computational practicalities of these and other FHE methods.
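    The key property the abstract relies on is that certain operations can be carried out directly on ciphertexts. As a minimal sketch of that idea (not the paper's FHE scheme, and deliberately insecure), textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. Full FHE schemes additionally support addition, which is what enables encrypted model fitting.

```python
# Toy illustration of a homomorphic property (NOT the paper's FHE scheme):
# textbook RSA with tiny demo primes. Multiplying ciphertexts modulo n
# yields a ciphertext of the product of the plaintexts.

def make_keys():
    p, q = 61, 53                 # small demo primes; insecure by design
    n = p * q
    phi = (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)           # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = make_keys()
c1, c2 = encrypt(6, pub), encrypt(7, pub)
# Multiply ciphertexts without ever seeing the plaintexts:
c_prod = (c1 * c2) % pub[1]
assert decrypt(c_prod, priv) == 42   # 6 * 7, computed under encryption
```

    Lattice-based schemes in the Gentry line support both addition and multiplication of ciphertexts, at the cost of the noise-management machinery that the paper's "computational practicalities" discussion addresses.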

    kalis: a modern implementation of the Li & Stephens model for local ancestry inference in R

    Background: Approximating the recent phylogeny of N phased haplotypes at a set of variants along the genome is a core problem in modern population genomics and central to performing genome-wide screens for association, selection, introgression, and other signals. The Li & Stephens (LS) model provides a simple yet powerful hidden Markov model for inferring the recent ancestry at a given variant, represented as an N×N distance matrix based on posterior decodings. Results: We provide a high-performance engine to make these posterior decodings readily accessible with minimal pre-processing via an easy-to-use package, kalis, in the statistical programming language R. kalis enables investigators to rapidly resolve the ancestry at loci of interest and developers to build a range of variant-specific ancestral inference pipelines on top. kalis exploits both multi-core parallelism and modern CPU vector instruction sets to enable scaling to hundreds of thousands of genomes. Conclusions: The resulting distance matrices accessible via kalis enable local ancestry, selection, and association studies in modern large-scale genomic datasets.

    Modelling of modular battery systems under cell capacity variation and degradation

    We propose a simple statistical model of electrochemical cell degradation based on the general characteristics observed in previous large-scale experimental studies of cell degradation. This model is used to statistically explore the behaviour and lifetime performance of battery systems where the cells are organised into modules that are controlled semi-independently. Intuitively, such systems should offer improved reliability and energy availability compared to monolithic systems as the system ages and cells degrade and fail. To validate this intuition, this paper explores the capacity evolution of populations of systems composed of random populations of cells. This approach allows the probability that a given system design meets a given lifetime specification to be calculated. A cost model that includes the effect of uncertainty in degradation behaviour is introduced and used to explore the cost-benefit trade-offs arising from the interaction of degradation and module size. Case studies of an electric vehicle battery pack and a grid-connected energy storage system are used to demonstrate the use of the model to find lifetime cost-optimum designs. It is observed that breaking a battery energy storage system into smaller modules can lead to large increases in accessible system capacity and may lead to a decision to use lower-quality, lower-cost cells in a cost-optimum system.
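    The intuition behind the modularisation result can be sketched with a small Monte Carlo experiment: a monolithic series string delivers only the capacity of its single weakest cell, while semi-independently controlled modules are each limited only by their own weakest cell. The Gaussian cell-capacity spread below is purely illustrative, not the paper's fitted degradation model.

```python
# Sketch: modularising a pack raises accessible capacity, because each
# module is limited only by its own weakest cell rather than the whole
# string's weakest cell. Cell capacities here are an illustrative sample.
import random

random.seed(1)

def accessible_capacity(cells, module_size):
    # Partition the string into modules; each module delivers
    # module_size * (its weakest cell); modules contribute independently.
    total = 0.0
    for i in range(0, len(cells), module_size):
        module = cells[i:i + module_size]
        total += len(module) * min(module)
    return total

cells = [max(0.0, random.gauss(1.0, 0.05)) for _ in range(96)]
monolithic = accessible_capacity(cells, 96)   # one big series string
modular    = accessible_capacity(cells, 12)   # eight modules of twelve
assert modular >= monolithic                  # holds for any cell sample
```

    The inequality is deterministic (each per-module minimum is at least the global minimum); the paper's contribution is quantifying the gap under a realistic degradation model and folding it into lifetime cost.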

    Model updating after interventions paradoxically introduces bias

    Machine learning is increasingly being used to generate prediction models for use in a number of real-world settings, from credit risk assessment to clinical decision support. Recent discussions have highlighted potential problems in the updating of a predictive score for a binary outcome when an existing predictive score forms part of the standard workflow, driving interventions. In this setting, the existing score induces an additional causative pathway which leads to miscalibration when the original score is replaced. We propose a general causal framework to describe and address this problem, and demonstrate an equivalent formulation as a partially observed Markov decision process. We use this model to demonstrate the impact of such 'naive updating' when performed repeatedly. Namely, we show that successive predictive scores may converge to a point where they predict their own effect, or may eventually tend toward a stable oscillation between two values, and we argue that neither outcome is desirable. Furthermore, we demonstrate that even if model-fitting procedures improve, actual performance may worsen. We complement these findings with a discussion of several potential routes to overcome these issues. Comment: Sections of this preprint on 'Successive adjuvancy' (section 4, theorem 2, figures 4 and 5, and associated discussions) were not included in the originally submitted version of this paper due to length. This material does not appear in the published version of this manuscript, and the reader should be aware that these sections did not undergo peer review.
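    The oscillation the abstract describes can be reproduced with a deliberately simple deterministic toy: the deployed score triggers an intervention that changes the outcome rate, and each "naive" refit simply adopts the last observed rate. All numbers below are illustrative, not from the paper.

```python
# Toy illustration of repeated 'naive updating'. The score drives an
# intervention, the intervention changes the observed outcome rate, and
# each refit adopts that post-intervention rate, so successive scores
# oscillate between two values. Illustrative numbers only.

BASE_RISK = 0.8       # outcome rate with no intervention
TREATED_RISK = 0.2    # outcome rate when the score triggers intervention
THRESHOLD = 0.5

def observed_rate(score):
    # High score -> intervene -> low observed risk, and vice versa
    return TREATED_RISK if score > THRESHOLD else BASE_RISK

score = BASE_RISK     # the initial, correctly calibrated model
history = []
for _ in range(6):
    score = observed_rate(score)   # naive refit to post-intervention data
    history.append(score)
# history: 0.2, 0.8, 0.2, 0.8, ... -- a stable oscillation, and neither
# value is calibrated for the population it will actually be applied to
```

    Each refit is "correct" for the data it saw, yet every deployed score is miscalibrated for the regime its own deployment creates, which is the feedback the paper's causal framework is built to break.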

    Computable phenotype for real-world, data-driven retrospective identification of relapse in ANCA-associated vasculitis

    Objective: ANCA-associated vasculitis (AAV) is a relapsing-remitting disease, resulting in incremental tissue injury. The gold-standard relapse definition (Birmingham Vasculitis Activity Score, BVAS>0) is often missing or inaccurate in registry settings, leading to errors in ascertainment of this key outcome. We sought to create a computable phenotype (CP) to automate retrospective identification of relapse using real-world data in the research setting. Methods: We studied 536 patients with AAV and >6 months follow-up recruited to the Rare Kidney Disease registry (a national longitudinal, multicentre cohort study). We followed five steps: (1) independent encounter adjudication using primary medical records to assign the ground truth, (2) selection of data elements (DEs), (3) CP development using multilevel regression modelling, (4) internal validation and (5) development of additional models to handle missingness. Cut-points were determined by maximising the F1-score. We developed a web application for CP implementation, which outputs an individualised probability of relapse. Results: Development and validation datasets comprised 1209 and 377 encounters, respectively. After classifying encounters with diagnostic histopathology as relapse, we identified five key DEs; DE1, change in ANCA level; DE2, suggestive blood/urine tests; DE3, suggestive imaging; DE4, immunosuppression status; DE5, immunosuppression change. F1-score, sensitivity and specificity were 0.85 (95% CI 0.77 to 0.92), 0.89 (95% CI 0.80 to 0.99) and 0.96 (95% CI 0.93 to 0.99), respectively. Where DE5 was missing, DE2 plus either DE1/DE3 were required to match the accuracy of BVAS. Conclusions: This CP accurately quantifies the individualised probability of relapse in AAV retrospectively, using objective, readily accessible registry data. This framework could be leveraged for other outcomes and relapsing diseases. Keywords: Classification; Epidemiology; Outcome Assessment, Health Care; Vasculitis
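    The cut-point selection step the Methods describe (maximising the F1-score over candidate probability thresholds) can be sketched in a few lines. The probabilities and labels below are made up for illustration; they are not registry data.

```python
# Sketch of F1-maximising cut-point selection: scan every candidate
# threshold and keep the one with the highest F1-score. Toy data only.

def f1_at(threshold, probs, labels):
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_cut_point(probs, labels):
    # Candidate thresholds: the observed predicted probabilities
    return max(sorted(set(probs)), key=lambda t: f1_at(t, probs, labels))

probs  = [0.1, 0.3, 0.45, 0.6, 0.8, 0.9]   # model-predicted P(relapse)
labels = [0,   0,   1,    1,   1,   1]      # adjudicated ground truth
cut = best_cut_point(probs, labels)         # 0.45 perfectly separates here
```

    F1 is a natural choice here because relapse encounters are the minority class, so threshold selection by raw accuracy would be dominated by the non-relapse majority.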

    Data quality and patient characteristics in European ANCA-associated vasculitis registries: data retrieval by federated querying

    Objectives: This study aims to describe the data structure and harmonisation process, explore data quality and define characteristics, treatment, and outcomes of patients across six federated antineutrophil cytoplasmic antibody-associated vasculitis (AAV) registries. Methods: Through creation of the vasculitis-specific Findable, Accessible, Interoperable, Reusable (FAIR) VASCulitis ontology, we harmonised the registries and enabled semantic interoperability. We assessed data quality across the domains of uniqueness, consistency, completeness and correctness. Aggregated data were retrieved using the semantic query language SPARQL (SPARQL Protocol and RDF Query Language), and outcome rates were assessed through random-effects meta-analysis. Results: A total of 5282 cases of AAV were identified. Uniqueness and data-type consistency were 100% across all assessed variables. Completeness ranged from 49% to 100% and correctness from 60% to 100%. There were 2754 (52.1%) cases classified as granulomatosis with polyangiitis (GPA), 1580 (29.9%) as microscopic polyangiitis and 937 (17.7%) as eosinophilic GPA. The pattern of organ involvement included: lung in 3281 (65.1%), ear-nose-throat in 2860 (56.7%) and kidney in 2534 (50.2%). Intravenous cyclophosphamide was used as remission induction therapy in 982 (50.7%), rituximab in 505 (17.7%) and pulsed intravenous glucocorticoid use was highly variable (11%–91%). Overall mortality and incidence rates of end-stage kidney disease were 28.8 (95% CI 19.7 to 42.2) and 24.8 (95% CI 19.7 to 31.1) per 1000 patient-years, respectively. Conclusions: In the largest reported AAV cohort study, we federated patient registries using semantic web technologies and highlighted concerns about data quality. The comparison of patient characteristics, treatment and outcomes was hampered by heterogeneous recruitment settings.
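    The random-effects pooling of per-registry outcome rates mentioned in the Methods is commonly done with the DerSimonian-Laird estimator; the sketch below implements that standard estimator on log rates. The exact method and the example inputs are assumptions for illustration, not the study's data or necessarily its chosen estimator.

```python
# Sketch of DerSimonian-Laird random-effects pooling, a standard way to
# combine per-registry outcome rates (here on the log-rate scale).
# Example inputs are illustrative, not the study's registry data.
import math

def dersimonian_laird(y, v):
    """Pool estimates y with within-study variances v."""
    w = [1.0 / vi for vi in v]                       # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    wr = [1.0 / (vi + tau2) for vi in v]             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    return pooled, se, tau2

# Hypothetical log incidence rates (per 1000 patient-years) from three
# registries, with assumed within-registry variances:
pooled, se, tau2 = dersimonian_laird(
    [math.log(25.0), math.log(30.0), math.log(22.0)],
    [0.02, 0.03, 0.025])
rate = math.exp(pooled)   # pooled rate back on the per-1000-py scale
```

    When the registries agree exactly, Q falls below its degrees of freedom, tau-squared is truncated to zero, and the estimator reduces to fixed-effect inverse-variance pooling.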