
    Intra-annual (seasonal) changes in the total content of nutrients and oxygen in different areas of Sevastopol Bay

    The absolute content of nutrients (biogenic elements) and oxygen was calculated for each month from May 1998 to May 1999 in five different areas of Sevastopol Bay and for the bay as a whole. The cleanest area (near the bay entrance) and the most polluted one (the Southern Bay) are shown to differ in the dynamics of nutrient accumulation and consumption. In all areas of Sevastopol Bay except the Inkerman Bay area, the maximum stock of inorganic forms of nitrogen, phosphorus and silicic acid occurs in January.

    Measuring the performance of prediction models to personalize treatment choice.

    When data are available from individual patients receiving either a treatment or a control intervention in a randomized trial, various statistical and machine learning methods can be used to develop models for predicting future outcomes under the two conditions, and thus to predict treatment effect at the patient level. These predictions can subsequently guide personalized treatment choices. Although several methods for validating prediction models are available, little attention has been given to measuring the performance of predictions of personalized treatment effect. In this article, we propose a range of measures that can be used to this end. We start by defining two dimensions of model accuracy for treatment effects, for a single outcome: discrimination for benefit and calibration for benefit. We then amalgamate these two dimensions into an additional concept, decision accuracy, which quantifies the model's ability to identify patients for whom the benefit from treatment exceeds a given threshold. Subsequently, we propose a series of performance measures related to these dimensions and discuss estimating procedures, focusing on randomized data. Our methods are applicable for continuous or binary outcomes, for any type of prediction model, as long as it uses baseline covariates to predict outcomes under treatment and control. We illustrate all methods using two simulated datasets and a real dataset from a trial in depression. We implement all methods in the R package predieval. Results suggest that the proposed measures can be useful in evaluating and comparing the performance of competing models in predicting individualized treatment effect.
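
    A rough illustration of the kind of check such measures formalise (this is not the article's exact estimator, and the column names below are hypothetical): given a model that predicts each patient's outcome under treatment and under control, one can group patients by predicted benefit and compare mean predicted benefit with the observed arm difference in each group, a simple calibration-for-benefit style summary. A minimal Python sketch:

    import pandas as pd

    def benefit_calibration_table(df, pred_treat, pred_control, outcome, arm, n_groups=4):
        """Compare predicted and observed benefit within quantile groups of predicted benefit.
        df: randomized-trial data; `arm` is 1 for treated, 0 for control."""
        d = df.copy()
        d["pred_benefit"] = d[pred_control] - d[pred_treat]       # predicted reduction in outcome
        d["group"] = pd.qcut(d["pred_benefit"], n_groups, labels=False)
        rows = []
        for g, sub in d.groupby("group"):
            observed = (sub.loc[sub[arm] == 0, outcome].mean()
                        - sub.loc[sub[arm] == 1, outcome].mean())  # observed arm difference
            rows.append({"group": g,
                         "mean_predicted_benefit": sub["pred_benefit"].mean(),
                         "observed_benefit": observed,
                         "n": len(sub)})
        return pd.DataFrame(rows)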

    Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study

    An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose to avoid rules of thumb by tailoring calculations to the model and setting at hand.
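
    To give a flavour of the precision-based reasoning involved (illustrative only; the article's own criteria and formulae should be consulted directly), one commonly cited target for a binary outcome is the precision of the observed/expected (O/E) ratio, using the approximation var(ln(O/E)) ≈ (1 − φ)/(nφ), where φ is the anticipated outcome proportion. A minimal sketch with an assumed target precision:

    import math

    def n_for_oe_precision(phi, target_se_ln_oe):
        """Sample size so that SE(ln(O/E)) <= target_se_ln_oe, assuming
        var(ln(O/E)) ~ (1 - phi) / (n * phi) for anticipated outcome proportion phi."""
        return math.ceil((1 - phi) / (phi * target_se_ln_oe ** 2))

    # Example: outcome proportion 0.1 and SE(ln O/E) = 0.102, i.e. roughly a
    # 95% CI of 0.82 to 1.22 around O/E = 1, requires about 866 participants.
    print(n_for_oe_precision(phi=0.1, target_se_ln_oe=0.102))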

    Extreme sensitivity of the spin-splitting and 0.7 anomaly to confining potential in one-dimensional nanoelectronic devices

    Quantum point contacts (QPCs) have shown promise as nanoscale spin-selective components for spintronic applications and are of fundamental interest in the study of electron many-body effects such as the 0.7 x 2e^2/h anomaly. We report on the dependence of the 1D Landé g-factor g* and 0.7 anomaly on electron density and confinement in QPCs with two different top-gate architectures. We obtain g* values up to 2.8 for the lowest 1D subband, significantly exceeding previous in-plane g-factor values in AlGaAs/GaAs QPCs, and approaching that in InGaAs/InP QPCs. We show that g* is highly sensitive to confinement potential, particularly for the lowest 1D subband. This suggests careful management of the QPC's confinement potential may enable the high g* desirable for spintronic applications without resorting to narrow-gap materials such as InAs or InSb. The 0.7 anomaly and zero-bias peak are also highly sensitive to confining potential, explaining the conflicting density dependencies of the 0.7 anomaly in the literature. Comment: 23 pages, 7 figures.
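
    For scale, a back-of-envelope calculation (not taken from the paper): the Zeeman splitting implied by a g-factor is E_Z = |g*| μ_B B, so an enhancement from the bulk GaAs value |g| = 0.44 to g* = 2.8 increases the splitting by a factor of roughly 6 at any given field.

    MU_B_UEV_PER_T = 57.88  # Bohr magneton, micro-eV per tesla

    def zeeman_splitting_uev(g_factor, b_tesla):
        """Zeeman splitting E_Z = |g| * mu_B * B in micro-eV."""
        return abs(g_factor) * MU_B_UEV_PER_T * b_tesla

    for g in (0.44, 2.8):  # bulk GaAs value vs the largest 1D value reported here
        print(f"|g| = {g}: E_Z = {zeeman_splitting_uev(g, 5.0):.0f} micro-eV at B = 5 T")
    # -> roughly 127 micro-eV versus 810 micro-eV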

    How Well Can We Assess the Validity of Non-Randomised Studies of Medications? A Systematic Review of Assessment Tools

    Objective To determine whether assessment tools for non-randomised studies (NRS) address critical elements that influence the validity of NRS findings for comparative safety and effectiveness of medications. Design Systematic review and Delphi survey. Data sources We searched PubMed, Embase, Google, bibliographies of reviews and websites of influential organisations from inception to November 2019. In parallel, we conducted a Delphi survey among the International Society for Pharmacoepidemiology Comparative Effectiveness Research Special Interest Group to identify key methodological challenges for NRS of medications. We created a framework consisting of the reported methodological challenges to evaluate the selected NRS tools. Study selection Checklists or scales assessing NRS. Data extraction Two reviewers extracted general information and content data related to the prespecified framework. Results Of 44 tools reviewed, 48% (n=21) assess multiple NRS designs, while other tools specifically addressed case–control (n=12, 27%) or cohort studies (n=11, 25%) only. Response rate to the Delphi survey was 73% (35 out of 48 content experts), and a consensus was reached in only two rounds. Most tools evaluated methods for selecting study participants (n=43, 98%), although only one addressed selection bias due to depletion of susceptibles (2%). Many tools addressed the measurement of exposure and outcome (n=40, 91%), and measurement and control for confounders (n=40, 91%). Most tools have at least one item/question on design-specific sources of bias (n=40, 91%), but only a few investigate reverse causation (n=8, 18%), detection bias (n=4, 9%), time-related bias (n=3, 7%), lack of new-user design (n=2, 5%) or active comparator design (n=0). Few tools address the appropriateness of statistical analyses (n=15, 34%), methods for assessing internal (n=15, 34%) or external validity (n=11, 25%) and statistical uncertainty in the findings (n=21, 48%). None of the reviewed tools investigated all the methodological domains and subdomains. Conclusions The acknowledgement of major design-specific sources of bias (eg, lack of new-user design, lack of active comparator design, time-related bias, depletion of susceptibles, reverse causation) and statistical assessment of internal and external validity is currently not sufficiently addressed in most of the existing tools. These critical elements should be integrated to systematically investigate the validity of NRS on comparative safety and effectiveness of medications.

    Spin Degeneracy and Conductance Fluctuations in Open Quantum Dots

    The dependence of mesoscopic conductance fluctuations on parallel magnetic field is used as a probe of spin degeneracy in open GaAs quantum dots. The variance of fluctuations at high parallel field is reduced from the low-field variance (with broken time-reversal symmetry) by factors ranging from roughly two in a 1 square-micron dot at low temperature, to four or greater in 8 square-micron dots. The factor of two is expected for simple Zeeman splitting of spin-degenerate channels. A possible explanation for the unexpected larger factors in terms of field-dependent spin-orbit scattering is proposed. Comment: Includes new reference to related theoretical work, cond-mat/0010064. Other minor changes. Related papers at http://marcuslab.harvard.ed
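
    The expected factor of two can be sketched as follows (a standard argument, not specific to this paper). With spin-degenerate channels the two spin species fluctuate together, whereas a large Zeeman splitting makes them statistically independent:

    var(g_degenerate) = var(2 g_up) = 4 var(g_up)
    var(g_split)      = var(g_up + g_down) = 2 var(g_up)

    so lifting spin degeneracy halves the conductance variance; reductions beyond this factor of two are what motivate the spin-orbit interpretation above.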

    Summarising and validating test accuracy results across multiple studies for use in clinical practice

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
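
    The tailoring of post-test probabilities to a new population's prevalence rests on the standard Bayes relations for PPV and NPV; a minimal sketch with purely illustrative accuracy values (not the article's data):

    def ppv_npv(sensitivity, specificity, prevalence):
        """Post-test probabilities from summary accuracy and the population prevalence."""
        tp = sensitivity * prevalence
        fp = (1 - specificity) * (1 - prevalence)
        tn = specificity * (1 - prevalence)
        fn = (1 - sensitivity) * prevalence
        return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

    # The same summary sensitivity/specificity implies quite different
    # post-test probabilities once the prevalence changes:
    for prev in (0.05, 0.20):
        ppv, npv = ppv_npv(sensitivity=0.85, specificity=0.90, prevalence=prev)
        print(f"prevalence {prev:.2f}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
    # -> PPV 0.31 vs 0.68; NPV 0.99 vs 0.96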

    Framework for the Synthesis of Non-Randomised Studies and Randomised Controlled Trials: A Guidance on Conducting a Systematic Review and Meta-Analysis for Healthcare Decision Making

    Introduction: High-quality randomised controlled trials (RCTs) provide the most reliable evidence on the comparative efficacy of new medicines. However, non-randomised studies (NRS) are increasingly recognised as a source of insights into the real-world performance of novel therapeutic products, particularly when traditional RCTs are impractical or lack generalisability. This means there is a growing need for synthesising evidence from RCTs and NRS in healthcare decision making, particularly given recent developments such as innovative study designs, digital technologies and linked databases across countries. Crucially, however, no formal framework exists to guide the integration of these data types. Objectives and Methods: To address this gap, we used a mixed methods approach (review of existing guidance, methodological papers, Delphi survey) to develop guidance for researchers and healthcare decision-makers on when and how to best combine evidence from NRS and RCTs to improve transparency and build confidence in the resulting summary effect estimates. Results: Our framework comprises seven steps for guiding the integration and interpretation of evidence from NRS and RCTs, and we offer recommendations on the most appropriate statistical approaches based on three main analytical scenarios in healthcare decision making (specifically, ‘high-bar evidence’ when RCTs are the preferred source of evidence, ‘medium’, and ‘low’ when NRS is the main source of inference). Conclusion: Our framework augments existing guidance on assessing the quality of NRS and their compatibility with RCTs for evidence synthesis, while also highlighting potential challenges in implementing it. This manuscript received endorsement from the International Society for Pharmacoepidemiology.

    The development of CHAMP: a checklist for the appraisal of moderators and predictors

    BACKGROUND: Personalized healthcare relies on the identification of factors explaining why individuals respond differently to the same intervention. Analyses identifying such factors, so-called predictors and moderators, have their own set of assumptions and limitations which, when violated, can result in misleading claims and incorrect actions. The aim of this study was to develop a checklist for critically appraising the results of predictor and moderator analyses by combining recommendations from published guidelines and experts in the field. METHODS: Candidate criteria for the checklist were retrieved through systematic searches of the literature. These criteria were evaluated for appropriateness using a Delphi procedure. Two Delphi rounds yielded a pilot checklist, which was tested on a set of papers included in a systematic review on reinforced home-based palliative care. The results of the pilot informed a third Delphi round, which served to finalize the checklist. RESULTS: Forty-nine appraisal criteria were identified in the literature. Feedback was obtained from fourteen experts from (bio)statistics, epidemiology and other associated fields, elicited via three Delphi rounds. Additional feedback from other researchers was collected in a pilot test. The final version of our checklist included seventeen criteria, covering the design (e.g. a priori plausibility), analysis (e.g. use of interaction tests) and results (e.g. complete reporting) of moderator and predictor analysis, together with the transferability of the results (e.g. clinical importance). There are criteria both for individual papers and for bodies of evidence. CONCLUSIONS: The proposed checklist can be used for critical appraisal of reported moderator and predictor effects, as assessed in randomized or non-randomized studies using individual participant or aggregate data. This checklist is accompanied by a user's guide to facilitate implementation. Its future use across a wide variety of research domains and study types will provide insights about its usability and feasibility.
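
    As an illustration of the "use of interaction tests" criterion (a generic sketch with hypothetical variable names, not part of CHAMP itself), a moderator claim in a randomized comparison is usually supported by a treatment-by-covariate interaction term rather than by separate subgroup analyses:

    import statsmodels.formula.api as smf

    def moderator_interaction_test(df):
        """df has columns: outcome (continuous), treatment (0/1), moderator (baseline covariate)."""
        fit = smf.ols("outcome ~ treatment * moderator", data=df).fit()
        # The 'treatment:moderator' coefficient quantifies how the treatment
        # effect changes with the candidate moderator; its test is the
        # interaction test referred to in the checklist.
        return fit.params["treatment:moderator"], fit.pvalues["treatment:moderator"]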

    Risk of bias assessments in individual participant data meta-analyses of test accuracy and prediction models: a review shows improvements are needed

    OBJECTIVES: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies and provide recommendations for improvement. STUDY DESIGN AND SETTING: We searched PubMed (January 2018-May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs. RESULTS: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), but no tool was used consistently among prediction model IPDMAs. Of IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD. CONCLUSION: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.