355 research outputs found

    Intra-annual (seasonal) changes in the total content of nutrients and oxygen in different areas of Sevastopol Bay

    The total content of nutrients and oxygen in five different areas of Sevastopol Bay, and in the bay as a whole, is estimated for each month from May 1998 to May 1999. The cleanest area (near the bay entrance) and the most polluted area (the Southern Bay) are shown to differ in the dynamics of nutrient accumulation and consumption. The maximum stock of inorganic forms of nitrogen, phosphorus, and silicic acid in all areas of Sevastopol Bay, except the Inkerman Bay area, occurs in January.

    CFA2: a Context-Free Approach to Control-Flow Analysis

    In a functional language, the dominant control-flow mechanism is function call and return. Most higher-order flow analyses, including k-CFA, do not handle call and return well: they remember only a bounded number of pending calls because they approximate programs with control-flow graphs. Call/return mismatch introduces precision-degrading spurious control-flow paths and increases the analysis time. We describe CFA2, the first flow analysis with precise call/return matching in the presence of higher-order functions and tail calls. We formulate CFA2 as an abstract interpretation of programs in continuation-passing style and describe a sound and complete summarization algorithm for our abstract semantics. A preliminary evaluation shows that CFA2 gives more accurate data-flow information than 0CFA and 1CFA. Comment: LMCS 7 (2:3) 201

    Combining individual patient data from randomized and non-randomized studies to predict real-world effectiveness of interventions.

    Meta-analysis of randomized controlled trials is generally considered the most reliable source of estimates of relative treatment effects. However, in the last few years, there has been interest in using non-randomized studies to complement evidence from randomized controlled trials. Several meta-analytical models have been proposed to this end. Such models have mainly focussed on estimating the average relative effects of interventions. In real-life clinical practice, when deciding how to treat a patient, it might be of great interest to have personalized predictions of absolute outcomes under several available treatment options. This paper describes a general framework for developing models that combine individual patient data from randomized controlled trials and non-randomized studies when aiming to predict outcomes for a set of competing medical interventions applied in real-world clinical settings. We also discuss methods for measuring the models' performance, to identify the optimal model to use in each setting. We focus on the case of continuous outcomes and illustrate our methods using a data set on rheumatoid arthritis, comprising patient-level data from three randomized controlled trials and two registries from Switzerland and Britain.

    Measuring the performance of prediction models to personalize treatment choice.

    When data are available from individual patients receiving either a treatment or a control intervention in a randomized trial, various statistical and machine learning methods can be used to develop models for predicting future outcomes under the two conditions, and thus to predict treatment effect at the patient level. These predictions can subsequently guide personalized treatment choices. Although several methods for validating prediction models are available, little attention has been given to measuring the performance of predictions of personalized treatment effect. In this article, we propose a range of measures that can be used to this end. We start by defining two dimensions of model accuracy for treatment effects, for a single outcome: discrimination for benefit and calibration for benefit. We then amalgamate these two dimensions into an additional concept, decision accuracy, which quantifies the model's ability to identify patients for whom the benefit from treatment exceeds a given threshold. Subsequently, we propose a series of performance measures related to these dimensions and discuss estimating procedures, focusing on randomized data. Our methods are applicable for continuous or binary outcomes, for any type of prediction model, as long as it uses baseline covariates to predict outcomes under treatment and control. We illustrate all methods using two simulated datasets and a real dataset from a trial in depression. We implement all methods in the R package predieval. Results suggest that the proposed measures can be useful in evaluating and comparing the performance of competing models in predicting individualized treatment effect.
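The "decision accuracy" concept can be illustrated with a toy simulation. Everything below is invented for illustration and does not use the paper's estimators or the predieval package: a hypothetical model's predicted benefit is compared against a known simulated benefit, and decision accuracy is the share of patients whose treat/do-not-treat classification agrees with the simulated truth.

```python
# Hypothetical sketch of "decision accuracy" for predicted treatment benefit.
# The data-generating process and the "model" are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                    # baseline covariate
true_benefit = 0.5 * x                    # simulated individual treatment effect
pred_benefit = true_benefit + rng.normal(scale=0.3, size=n)  # noisy model estimate

threshold = 0.0                           # treat if predicted benefit exceeds this
decide_treat = pred_benefit > threshold
should_treat = true_benefit > threshold

# Decision accuracy: proportion of patients for whom the model's
# treat/do-not-treat decision matches the simulated truth.
decision_accuracy = float(np.mean(decide_treat == should_treat))
print(round(decision_accuracy, 3))
```

In real data the true benefit is of course unobserved, which is why the paper develops estimating procedures from randomized data rather than relying on a known truth as this sketch does.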

    Incorporating published univariable associations in diagnostic and prognostic modeling

    Background: Diagnostic and prognostic literature is overwhelmed with studies reporting univariable predictor-outcome associations. Currently, methods to incorporate such information in the construction of a prediction model are underdeveloped and unfamiliar to many researchers. Methods: This article aims to improve upon an adaptation method originally proposed by Greenland (1987) and Steyerberg (2000) to incorporate previously published univariable associations in the construction of a novel prediction model. The proposed method improves upon the variance estimation component by grounding the adaptation process in established theory and making it more robust. Different variants of the proposed method were tested in a simulation study, where performance was measured by comparing estimated associations with their predefined values according to the mean squared error and the coverage of the 90% confidence intervals. Results: Results demonstrate that the performance of estimated multivariable associations improves considerably for small datasets where external evidence is included. Although the error of estimated associations decreases with an increasing amount of individual participant data, it does not disappear completely, even in very large datasets. Conclusions: The proposed method for aggregating previously published univariable associations with individual participant data in the construction of a novel prediction model outperforms established approaches and is especially worthwhile when relatively limited individual participant data are available.

    Current trends in the application of causal inference methods to pooled longitudinal observational infectious disease studies-A protocol for a methodological systematic review

    INTRODUCTION: Pooling (or combining) and analysing observational, longitudinal data at the individual level facilitates inference through increased sample sizes, allowing for joint estimation of study- and individual-level exposure variables and better enabling the assessment of rare exposures and diseases. Empirical studies leveraging such methods when randomization is unethical or impractical have grown in the health sciences in recent years. The adoption of so-called causal methods to account for measured and/or unmeasured confounders is an important addition to the methodological toolkit for understanding the distribution, progression, and consequences of infectious diseases (IDs) and of interventions on IDs. In the face of the COVID-19 pandemic, and in the absence of systematic randomization of exposures or interventions, the value of these methods is even more apparent. Yet, to our knowledge, no studies have assessed how causal methods that pool individual-level, observational, longitudinal data are being applied in ID-related research. In this systematic review, we assess how these methods have been used and reported in ID-related research over the last 10 years. Findings will facilitate evaluation of trends in causal methods for ID research and lead to concrete recommendations for how to apply these methods where gaps in methodological rigor are identified. METHODS AND ANALYSIS: We will apply MeSH and text terms to identify relevant studies from EBSCO (Academic Search Complete, Business Source Premier, CINAHL, EconLit with Full Text, PsychINFO), EMBASE, PubMed, and Web of Science. Eligible studies are those that apply causal methods to account for confounding when assessing the effects of an intervention or exposure on an ID-related outcome using pooled, individual-level data from two or more longitudinal, observational studies. Titles, abstracts, and full-text articles will be independently screened by two reviewers using Covidence software. Discrepancies will be resolved by a third reviewer. This systematic review protocol has been registered with PROSPERO (CRD42020204104).

    Multiple Imputation for Multilevel Data with Continuous and Binary Variables

    We present and compare multiple imputation methods for multilevel continuous and binary data where variables are systematically and sporadically missing. The methods are compared from a theoretical point of view and through an extensive simulation study motivated by a real dataset comprising multiple studies. The comparisons show which multiple imputation methods are the most appropriate for handling missing values in a multilevel setting and why their relative performances can vary according to the missing data pattern, the multilevel structure, and the type of missing variables. This study shows that valid inferences can only be obtained if the dataset includes a large number of clusters. In addition, it highlights that heteroscedastic multiple imputation methods provide more accurate inferences than homoscedastic methods, which should be reserved for data with few individuals per cluster. Finally, guidelines are given for choosing the most suitable multiple imputation method according to the structure of the data.
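The distinction the abstract draws between "systematically missing" (a variable never recorded in some studies) and "sporadically missing" (scattered gaps within a study) can be made concrete with a small sketch; the clusters, variable, and missingness proportions below are invented:

```python
# Illustration of the two multilevel missingness patterns, with invented data:
# systematically missing = a variable absent from entire clusters (studies);
# sporadically missing = scattered gaps within clusters.
import numpy as np

rng = np.random.default_rng(3)
n_clusters, n_per = 5, 10
x = rng.normal(size=(n_clusters, n_per))   # one variable, 5 clusters of 10

sys_missing = x.copy()
sys_missing[[1, 3], :] = np.nan            # never recorded in clusters 1 and 3

spor_missing = x.copy()
mask = rng.random((n_clusters, n_per)) < 0.2
spor_missing[mask] = np.nan                # roughly 20% of values missing at random

print(int(np.isnan(sys_missing).sum()), int(np.isnan(spor_missing).sum()))
```

Systematically missing variables are the harder case, since within the affected cluster there is no observed value to inform the imputation model, and information must be borrowed across clusters.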

    Developing and validating risk prediction models in an individual participant data meta-analysis

    BACKGROUND: Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. METHODS: A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. RESULTS: The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. CONCLUSIONS: An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction.
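The "internal-external cross-validation" scheme can be sketched as a simple rotation: develop the model on all but one IPD study, validate on the held-out study, and repeat with each study omitted in turn. The studies, the linear stand-in "prediction model", and the RMSE performance measure below are all simulated and purely illustrative:

```python
# Sketch of internal-external cross-validation across IPD studies.
# Four hypothetical studies with study-specific baseline levels (intercepts);
# the model and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
studies = {}
for k in range(4):
    x = rng.normal(size=200)
    intercept = rng.normal(scale=0.2)       # study-specific baseline risk
    y = intercept + 0.8 * x + rng.normal(scale=0.5, size=200)
    studies[k] = (x, y)

rmses = []
for held_out in studies:
    # Develop on the pooled data from every study except the held-out one
    xs = np.concatenate([studies[k][0] for k in studies if k != held_out])
    ys = np.concatenate([studies[k][1] for k in studies if k != held_out])
    slope, icpt = np.polyfit(xs, ys, 1)     # a stand-in "prediction model"
    # Validate on the omitted study
    xv, yv = studies[held_out]
    rmse = float(np.sqrt(np.mean((yv - (icpt + slope * xv)) ** 2)))
    rmses.append(rmse)
    print(f"held-out study {held_out}: RMSE {rmse:.2f}")
```

Each rotation gives an external-style estimate of performance in a population not used for development, so heterogeneity in the four RMSE values hints at how transportable the model is.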

    (Post-)queer citizenship in contemporary republican France

    1996 saw the publication of Frédéric Martel’s Le Rose et le noir, a comprehensive study of three decades of gay life in metropolitan France. The predominantly anti-communitarian stance adopted by Martel in the epilogue to the first edition of his work had evolved, by the time of the book’s publication en poche in 2000, into a more nuanced view of the interactions and intersections between queer and republican identities in contemporary France. This development was influenced, in large part, by concrete changes which took place over the second half of the 1990s, centring around the introduction of the PACS in 1999, and leading to an ever-broadening debate. This paper will begin by setting forth the ways in which Martel’s position changed and analysing the attitudinal, social, and legislative backdrop which paved the way for such a change to occur. It will then bring Martel’s work into a dialogue with the writings of Eric Fassin and Maxime Foerster, both of whom have, like Martel, offered crucial analyses of the place of queer citizens within the contemporary French republic. Particular attention will first be paid to the ways in which Fassin, in his writings, has underlined the salience of the ‘droit du sol/droit du sang’ debate, traditionally associated with questions of ethnic belonging, in light of public and political discussions revolving around questions of queer kinship raised by the introduction of the PACS. This will lead into an examination of Foerster’s assertion that gay citizens of the Republic, in the era of the PACS, find themselves in a role previously held by women, in other words, as elements that require integration within a republican model. Foerster argues that this requirement to integrate is indicative of the fact that the traditional republican claim that the citizen is a blank canvas is at best misguided, and, at worst, has been deliberately subverted. 
This paper will examine the manner in which Martel and Fassin’s observations can be used to further strengthen the points raised by Foerster, concluding with the latter that a true engagement with the issues raised by debates around queer citizenship over the past decade can, in fact, allow the contemporary republican citizen to ‘devenir ceux [qu’il] est’. In other words, the article will conclude that the potential impact of the PACS legislation and the broader discussions it has provoked could be a renegotiation of the relationship between queer citizens and the republic.

    Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model

    Objectives: Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. Study Design and Setting: We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. Results: In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥ 0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Conclusion: Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies.
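The "probability of good performance in a new population" can be approximated by Monte Carlo once a joint random-effects distribution for the performance measures has been estimated. The mean vector and between-population covariance below are invented for illustration; they are not the estimates from the DVT or breast cancer models:

```python
# Monte Carlo sketch: probability that a model shows "good" performance
# (C statistic >= 0.7 and calibration slope in [0.9, 1.1]) in a new population,
# given an assumed bivariate-normal random-effects distribution.
# Mean and covariance are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
mean = np.array([0.72, 1.00])               # average C statistic, calibration slope
cov = np.array([[0.0004, 0.0001],           # between-population heterogeneity
                [0.0001, 0.0100]])          # (variances and a small covariance)

draws = rng.multivariate_normal(mean, cov, size=100_000)
good = (draws[:, 0] >= 0.7) & (np.abs(draws[:, 1] - 1.0) <= 0.1)
good_prob = float(good.mean())
print(round(good_prob, 2))
```

This makes concrete why average performance alone can mislead: even with a good mean C statistic and slope, large between-population heterogeneity can leave the probability of jointly "good" performance well below 1.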