22 research outputs found

    Setting clinical performance specifications to develop and evaluate biomarkers for clinical use

    Background: Biomarker discovery studies often claim ‘promising’ findings, motivating further studies and marketing as medical tests. Unfortunately, the patient benefits promised are often inadequately explained to guide further evaluation, and few biomarkers have translated to improved patient care. We present a practical guide for setting minimum clinical performance specifications to strengthen clinical performance study design and interpretation. Methods: We developed a step-by-step approach using test evaluation and decision-analytic frameworks and present it with illustrative examples. Results: We define clinical performance specifications as a set of criteria that quantify the clinical performance a new test must attain to allow better health outcomes than current practice. We classify the proposed patient benefits of a new test into three broad groups and describe how to set minimum clinical performance at the level where the potential harm of false-positive and false-negative results does not outweigh the benefits. (1) For add-on tests proposed to improve disease outcomes by improving detection, define an acceptable trade-off for false-positive versus true-positive results; (2) for triage tests proposed to reduce unnecessary tests and treatment by ruling out disease, define an acceptable risk of false-negatives as a safety threshold; (3) for replacement tests proposed to provide other benefits, or reduce costs, without compromising accuracy, use existing tests to benchmark minimum accuracy levels. Conclusions: Researchers can follow these guidelines to focus their study objectives and to define statistical hypotheses and sample size requirements. This way, clinical performance studies will allow conclusions about whether test performance is sufficient for the intended use.
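The trade-off in (1) can be translated into a concrete accuracy requirement. A minimal sketch, assuming a single binary add-on test with known disease prevalence; the function name, parameters, and example numbers are illustrative and not taken from the paper:

```python
def min_specificity(prevalence: float, sensitivity: float,
                    max_fp_per_tp: float) -> float:
    """Minimum specificity so that false positives per true positive
    stay at or below max_fp_per_tp.

    Per person tested:
        TP rate = prevalence * sensitivity
        FP rate = (1 - prevalence) * (1 - specificity)
    Requiring FP/TP <= max_fp_per_tp and solving for specificity gives:
        specificity >= 1 - max_fp_per_tp * prevalence * sensitivity / (1 - prevalence)
    """
    spec = 1.0 - max_fp_per_tp * prevalence * sensitivity / (1.0 - prevalence)
    return max(0.0, spec)  # clip: specificity cannot be negative

# Illustrative example: 10% prevalence, 90% sensitivity, and a tolerance of
# at most 2 false positives per additional true positive detected.
print(round(min_specificity(0.10, 0.90, 2.0), 3))  # 0.8
```

A clinical performance study could then test the one-sided hypothesis that the new test's specificity exceeds this floor, which in turn fixes the sample size.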

    Southern Ocean cloud and aerosol data: a compilation of measurements from the 2018 Southern Ocean Ross Sea Marine Ecosystems and Environment voyage

    Due to its remote location and extreme weather conditions, atmospheric in situ measurements are rare in the Southern Ocean. As a result, aerosol–cloud interactions in this region are poorly understood and remain a major source of uncertainty in climate models. This, in turn, contributes substantially to persistent biases in climate model simulations, such as the well-known positive shortwave radiation bias at the surface, as well as biases in numerical weather prediction models and reanalyses. Previous studies have shown that in situ and ground-based remote sensing measurements across the Southern Ocean are critical for complementing satellite data sets because of the importance of boundary layer and low-level cloud processes. These processes are poorly sampled by satellite-based measurements and are often obscured by multiple overlying cloud layers. Satellite measurements also do not constrain aerosol–cloud processes well, owing to imprecise estimation of cloud condensation nuclei. In this work, we present a comprehensive set of ship-based aerosol and meteorological observations collected on the 6-week Southern Ocean Ross Sea Marine Ecosystems and Environment voyage (TAN1802) of the RV Tangaroa across the Southern Ocean, from Wellington, New Zealand, to the Ross Sea, Antarctica. The voyage was carried out from 8 February to 21 March 2018. Many distinct, but contemporaneous, data sets were collected throughout the voyage. The compiled data sets include measurements from a range of instruments, such as (i) meteorological conditions at the sea surface and profile measurements; (ii) the size and concentration of particles; (iii) trace gases dissolved in the ocean surface, such as dimethyl sulfide and carbonyl sulfide; and (iv) remotely sensed observations of low clouds. Here, we describe the voyage, the instruments, and the data processing, and provide a brief overview of some of the data products available.
We encourage the scientific community to use these measurements for further analysis and model evaluation studies, in particular for studies of Southern Ocean clouds, aerosols, and their interactions. The data sets presented in this study are publicly available at https://doi.org/10.5281/zenodo.4060237 (Kremser et al., 2020).

    Extent and Causes of Chesapeake Bay Warming

    Coastal environments such as the Chesapeake Bay have long been impacted by eutrophication stressors resulting from human activities, and these impacts are now being compounded by global warming trends. However, there are few studies documenting long-term estuarine temperature change and the relative contributions of rivers, the atmosphere, and the ocean. In this study, Chesapeake Bay warming since 1985 is quantified using a combination of cruise observations and model outputs, and the relative contributions to that warming are estimated via numerical sensitivity experiments with a watershed–estuarine modeling system. Throughout the Bay’s main stem, similar warming rates are found at the surface and bottom between the late 1980s and late 2010s (0.02 ± 0.02 °C/year, mean ± 1 standard error), with elevated summer rates (0.04 ± 0.01 °C/year) and lower rates of winter warming (0.01 ± 0.01 °C/year). Most (~85%) of this estuarine warming is driven by atmospheric effects. The secondary influence of ocean warming increases with proximity to the Bay mouth, where it accounts for more than half of summer warming in bottom waters. Sea level rise has slightly reduced summer warming, and the influence of riverine warming has been limited to the heads of tidal tributaries. Future rates of warming in Chesapeake Bay will depend not only on global atmospheric trends, but also on regional circulation patterns in mid-Atlantic waters, which are currently warming faster than the atmosphere. Supporting model data available at: https://doi.org/10.25773/c774-a36

    Natural halogens buffer tropospheric ozone in a changing climate

    Reactive atmospheric halogens destroy tropospheric ozone (O3), an air pollutant and greenhouse gas. The primary sources of natural halogens are emissions from marine phytoplankton and algae, along with abiotic sources from ocean and tropospheric chemistry, but how their fluxes will change under climate warming, and the resulting impacts on O3, are not well known. Here, we use an Earth system model to estimate that natural halogens deplete approximately 13% of tropospheric O3 in the present-day climate. Despite increased levels of natural halogens through the twenty-first century, this fraction remains stable due to compensation from hemispheric, regional, and vertical heterogeneity in tropospheric O3 loss. Notably, this halogen-driven O3 buffering is projected to be greatest over polluted and populated regions, due mainly to iodine chemistry, with important implications for air quality.

    Setting analytical performance specifications based on outcome studies - is it possible?

    The 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine proposed a simplified hierarchy for setting analytical performance specifications (APS). The top two levels of the 1999 Stockholm hierarchy, i.e., evaluation of the effect of analytical performance on clinical outcomes and on clinical decisions, have been proposed to be replaced by a single outcome-based model. This model can be supported by (1a) direct outcome studies and (1b) indirect outcome studies investigating the impact of the analytical performance of the test on clinical classifications or decisions, and thereby on the probability of patient-relevant clinical outcomes. This paper reviews the need for outcome-based specifications, the most relevant types of outcomes to be considered, and the challenges and limitations faced when setting outcome-based APS. The methods of Models 1a and 1b are discussed, and examples are provided for how outcome data can be translated to APS using linked-evidence, simulation, or decision-analytic techniques. Outcome-based APS should primarily reflect the clinical needs of patients; should be tailored to the purpose, role, and significance of the test in a well-defined clinical pathway; and should be defined at a level that achieves net health benefit for patients at reasonable cost. Whilst it is acknowledged that direct evaluations are difficult and may not be possible for all measurands, all other forms of setting APS should be weighed against that standard and regarded as approximations. Better definition of the relationship between the analytical performance of tests and health outcomes can be used to set analytical performance criteria that aim to improve the clinical and cost-effectiveness of laboratory tests.
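The indirect (Model 1b) route can be illustrated with a small Monte Carlo simulation: analytical imprecision is superimposed on "true" patient values, and the resulting misclassification rate around a clinical decision limit is examined. The measurand, decision limit, and population parameters below are purely illustrative assumptions, not values from the paper:

```python
import random

def misclassification_rate(decision_limit: float, pop_mean: float, pop_sd: float,
                           analytical_cv: float, n: int = 100_000,
                           seed: int = 42) -> float:
    """Fraction of simulated patients whose classification against the
    decision limit flips once analytical error (Gaussian, proportional
    to the true value via the CV) is added to their true value."""
    rng = random.Random(seed)
    flipped = 0
    for _ in range(n):
        true_value = rng.gauss(pop_mean, pop_sd)  # biological variation
        measured = true_value + rng.gauss(0.0, analytical_cv * abs(true_value))
        if (true_value >= decision_limit) != (measured >= decision_limit):
            flipped += 1
    return flipped / n

# Illustrative: decision limit 6.5 "units", population mean 6.0, SD 0.8.
# Misclassification grows with analytical imprecision; an outcome-based APS
# would cap the CV where this rate is still clinically acceptable.
for cv in (0.02, 0.05, 0.10):
    print(cv, misclassification_rate(6.5, 6.0, 0.8, cv))
```

Coupling such misclassification rates to downstream treatment decisions and outcome probabilities is what turns this sketch into the decision-analytic modelling the paper describes.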

    Biomarker development targeting unmet clinical needs

    The introduction of new biomarkers can lead to inappropriate utilization of tests if they do not fill existing gaps in clinical care. We aimed to define a strategy and checklist for identifying unmet needs for biomarkers. A multidisciplinary working group used a 4-step process: 1/ scoping literature review; 2/ face-to-face meetings to discuss scope, strategy, and checklist items; 3/ iterative process of feedback and consensus to develop the checklist; 4/ testing and refinement of checklist items using case scenarios. We used clinical pathway mapping to identify clinical management decisions linking biomarker testing to health outcomes and developed a 14-item checklist organized into 4 domains: 1/ identifying and 2/ verifying the unmet need; 3/ validating the intended use; and 4/ assessing the feasibility of the new biomarker to influence clinical practice and health outcomes. We present an outcome-focused approach that can be used by multiple stakeholders for any medical test, irrespective of the purpose and role of testing. The checklist intends to achieve more efficient biomarker development and translation into practice. We propose that the checklist be field-tested by stakeholders, and advocate the role of the clinical laboratory professional in fostering trans-sector collaboration in this regard.

    Practical guide for identifying unmet clinical needs for biomarkers

    The development and evaluation of novel biomarkers and testing strategies requires a close examination of existing clinical pathways, including mapping of current pathways and identifying areas of unmet need. This approach enables early recognition of analytical and clinical performance criteria to guide evaluation studies in a cyclical and iterative manner, all the while keeping the clinical pathway and patient health outcomes as the key drivers in the process. The Test Evaluation Working Group of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM TE-WG, https://www.eflm.eu/site/page/a/1158) has published a conceptual framework of the test evaluation cycle that is driven by the clinical pathway, inherent to which are the test purpose and role within the pathway, which are defined by clinical need. To supplement this framework, the EFLM TE-WG has also published an interactive checklist for identifying unmet clinical needs for new biomarkers: a practical tool that laboratories, clinicians, researchers, and industry can equally use in a consistent manner when new tests are developed and before they are released to the market. It is hoped that these practical tools will provide consistent and appropriate terminology in this diverse field and offer a platform that facilitates greater consultation and collaboration between all stakeholders. The checklist should assist the work of all colleagues involved in the discovery of novel biomarkers and the implementation of new medical tests. The tool is aligned with the IOM recommendations and the requirements of the FDA and CE regulatory bodies.