
    Predicting reference points and associated uncertainty from life histories for risk and status assessment

    To assess the status of fish populations and the risks of overexploitation, management bodies compare fishing mortality rates and abundance estimates with reference points (RP). Generic, “data-poor” methods for estimating RP are garnering attention because they are faster and cheaper to implement than those based on extensive life history data, yet data-poor RP are subject to many unquantified uncertainties. Here, we predict fishing mortality RP based on five levels of increasingly comprehensive data to quantify the effects of parameter and structural uncertainty on RP. Level I RP (least data) are estimated solely from species' maximum size and generic life history relationships, while level V RP (most data) are estimated from population-specific growth and maturity data. By estimating RP at all five data levels for each of 12 North Sea populations, we demonstrate marked changes in the median RP values, and to a lesser extent in their uncertainty, when growth parameters come from data rather than from life history relationships. As a simple rule, halving the median level I RP gives an almost 90% probability that a level V median RP is not exceeded. RP and their uncertainty were substantially affected by assumed gear selectivity; plausible changes in selectivity had a greater effect on RP than adding level V data. Calculations of RP using data for successive individual years from 1984 to 2014 showed that the median RP based on data for any given year would often fall outside the range of uncertainty for RP based on data from earlier or later years, highlighting the benefits of frequent RP updates when suitable data are available. Our approach provides a quantitative method to inform risk-based management and decisions about acceptable targets for data collection and quality. Ultimately, however, the utility and extent of adoption of data-poor methods for estimating RP will depend on the risk aversion of managers.
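
    A minimal sketch of the level-I idea: deriving a fishing mortality RP from generic life-history relationships alone, followed by the paper's halving rule. The specific estimators used here (the Then et al. 2015 growth-based natural mortality estimator and the classic F_lim ~ M proxy) and the growth parameter values are stand-in assumptions for illustration, not the relationships or data used in the paper.

```python
# Illustrative only: a "level I"-style limit reference point from generic
# life-history relationships, plus the paper's precautionary halving rule.

def level1_f_rp(l_inf_cm: float, k: float) -> float:
    """Rough level-I fishing mortality RP from growth parameters alone."""
    # Then et al. (2015) growth-based estimator of natural mortality M
    # (an assumed stand-in for the paper's life history relationships).
    m = 4.118 * k**0.73 * l_inf_cm**-0.33
    # Classic Gulland-style proxy: F_lim ~ M (again, an assumption here).
    return m

f_rp_level1 = level1_f_rp(l_inf_cm=100.0, k=0.15)   # made-up growth values
# Simple rule from the paper: halving the level-I median RP gives ~90%
# probability of not exceeding the level-V median RP.
f_precautionary = 0.5 * f_rp_level1
print(f"level I RP ~ {f_rp_level1:.3f}/yr, precautionary ~ {f_precautionary:.3f}/yr")
```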

    Probabilistic methods for seasonal forecasting in a changing climate: Cox-type regression models

    For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to comparing probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values, which makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than measures of central tendency. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data; here we extend this original concept to other positive variables of interest beyond the time domain. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change, a feature that makes CRMs attractive candidates for investigating the feasibility of rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than the merits or limitations of the ENSO-based predictor.
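
    A minimal sketch of the core trick, not the authors' implementation: a Cox model usually handles time-to-event data, but any positive variable (here, total wet-season rainfall) can play the role of the "duration", so the fitted survival function S(r | covariates) yields the conditional CDF as 1 - S. The `lifelines` library, the synthetic data, and the ENSO covariate coding are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed available; not named in the paper

rng = np.random.default_rng(42)
n = 200
enso = rng.normal(0.0, 1.0, n)                     # synthetic ENSO index
rain = rng.gamma(4.0, 100.0 * np.exp(0.2 * enso))  # synthetic rainfall (mm)
df = pd.DataFrame({"rain": rain, "enso": enso, "observed": 1})

# Treat rainfall as the "duration": the Cox model then estimates
# S(r | enso) = P(rain > r | enso), so the conditional CDF is 1 - S.
cph = CoxPHFitter()
cph.fit(df, duration_col="rain", event_col="observed")

new = pd.DataFrame({"enso": [-1.0, 0.0, 1.0]})     # cool / neutral / warm phase
cdf = 1.0 - cph.predict_survival_function(new)     # rows indexed by rainfall
print(cdf.tail())
```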

    Vietnam’s Low-Cost COVID-19 Strategy

    Tightened border controls, agile health departments, tech platforms, and a hand-washing song that went viral have added up to a frugal but highly effective response to the threat of COVID-19. The country's success provides a model that other developing and emerging economies should follow.

    Representing classifier confidence in the safety critical domain: an illustration from mortality prediction in trauma cases

    This work proposes a novel approach to assessing confidence measures for software classification systems in demanding applications such as those in the safety critical domain. Our focus is the Bayesian framework for developing a model-averaged probabilistic classifier, implemented using Markov chain Monte Carlo (MCMC) and, where appropriate, its reversible jump variant (RJ-MCMC). Within this context we suggest a new technique, building on the reject region idea, to identify areas in feature space that are associated with "unsure" classification predictions. We term such areas "uncertainty envelopes"; they are defined in terms of the full characteristics of the posterior predictive density in different regions of the feature space. We argue this is more informative than a traditional reject region, which considers only point estimates of predictive probabilities. Results from the proposed method are illustrated on synthetic data and also usefully applied to real-life safety critical systems involving medical trauma data.
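
    A sketch of the core idea under simplifying assumptions: a Bayesian logistic classifier sampled with plain random-walk Metropolis (a basic MCMC scheme, not the paper's RJ-MCMC), where an "uncertainty envelope" is read off the spread of the posterior predictive probability rather than from a single point estimate. Data, priors, proposal scale, and the 90% envelope are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=100) > 0).astype(float)

def log_post(w):
    z = X @ w
    loglik = np.sum(z * y - np.logaddexp(0.0, z))  # Bernoulli log-likelihood
    return loglik - 0.5 * w @ w                    # standard normal prior

# Random-walk Metropolis over the weight vector.
w, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for it in range(5000):
    w_new = w + 0.2 * rng.normal(size=2)
    lp_new = log_post(w_new)
    if np.log(rng.uniform()) < lp_new - lp:
        w, lp = w_new, lp_new
    if it > 1000:                                  # discard burn-in
        samples.append(w)
samples = np.array(samples)

# Posterior predictive probability for one test point, with its envelope:
# if the envelope straddles 0.5, the prediction is flagged "unsure".
x_test = np.array([0.1, -0.2])
p = 1.0 / (1.0 + np.exp(-(samples @ x_test)))
lo, hi = np.percentile(p, [5, 95])
label = "unsure" if lo < 0.5 < hi else "sure"
print(f"p in [{lo:.2f}, {hi:.2f}] -> {label}")
```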

    Asteroseismic fundamental properties of solar-type stars observed by the NASA Kepler Mission

    We use asteroseismic data obtained by the NASA Kepler Mission to estimate the fundamental properties of more than 500 main-sequence and sub-giant stars. Data obtained during the first 10 months of Kepler science operations were used for this work, when these solar-type targets were observed for one month each in a survey mode. Stellar properties have been estimated using two global asteroseismic parameters and complementary photometric and spectroscopic data. Homogeneous sets of effective temperatures were available for the entire ensemble from complementary photometry; spectroscopic estimates of T_eff and [Fe/H] were available from a homogeneous analysis of ground-based data on a subset of 87 stars. [Abbreviated version... see paper for full abstract.]
    Comment: Accepted for publication in ApJS; 90 pages, 22 figures, 6 tables. Units on rho in tables now listed correctly as rho(Sun).
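
    For context, a minimal sketch of the standard "direct method" scaling relations that combine the two global asteroseismic parameters (nu_max and the large frequency separation Delta_nu) with T_eff. The solar reference values below are typical literature choices, assumed here rather than quoted from the paper, which uses more sophisticated grid-based pipelines.

```python
# Standard asteroseismic scaling relations (solar references are assumptions).
NU_MAX_SUN = 3090.0   # muHz, frequency of maximum oscillation power
DNU_SUN = 135.1       # muHz, large frequency separation
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, dnu, teff):
    """Stellar mass and radius in solar units from global seismic parameters."""
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

m, r = scaling_mass_radius(nu_max=2000.0, dnu=95.0, teff=6100.0)  # made-up star
print(f"M ~ {m:.2f} M_sun, R ~ {r:.2f} R_sun")
```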

    Constraints on WIMP Dark Matter from the High Energy PAMELA $\bar{p}/p$ data

    A new calculation of the $\bar{p}/p$ ratio in cosmic rays is compared to the recent PAMELA data. The good match up to 100 GeV allows us to set constraints on exotic contributions from thermal WIMP dark matter candidates. We derive stringent limits on possible enhancements of the WIMP $\bar{p}$ flux: a $m_{\rm WIMP}$ = 100 GeV (1 TeV) signal cannot be increased by more than a factor of 6 (40) without overrunning the PAMELA data. Annihilation through the $W^+W^-$ channel is also inspected and cross-checked against $e^+/(e^-+e^+)$ data. This scenario is strongly disfavored as it fails to simultaneously reproduce the positron and antiproton measurements.
    Comment: 5 pages, 5 figures; the bibliography has been updated and minor modifications have been made in the text.
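
    A generic illustration of how such an enhancement limit can be extracted, not the paper's analysis pipeline: given a secondary background prediction and a WIMP signal template for the $\bar{p}/p$ ratio, scan the signal boost factor and keep the largest value compatible with the data under a simple chi-square criterion. All arrays and thresholds below are made-up placeholders.

```python
import numpy as np

energy = np.array([1.0, 5.0, 10.0, 30.0, 60.0, 100.0])          # GeV, placeholder
data = np.array([1.0e-4, 8.0e-5, 9.0e-5, 1.2e-4, 1.5e-4, 1.8e-4])
sigma = 0.15 * data                                             # placeholder errors
background = data.copy()          # assume the secondary prediction fits well
signal = 2.0e-6 * np.exp(-energy / 50.0)                        # toy WIMP template

def chi2(boost):
    model = background + boost * signal
    return np.sum(((data - model) / sigma) ** 2)

boosts = np.linspace(0.0, 100.0, 2001)
chi2s = np.array([chi2(b) for b in boosts])
# One-sided 95% exclusion: delta chi-square of 2.71 above the minimum.
limit = boosts[chi2s <= chi2s.min() + 2.71].max()
print(f"max allowed boost ~ {limit:.1f}")
```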

    Computing with confidence: a Bayesian approach

    Bayes’ rule is introduced as a coherent strategy for multiple recomputations of classifier system output, and thus as a basis for assessing the uncertainty associated with a particular system's results, i.e. a basis for confidence in the accuracy of each computed result. We use a Markov chain Monte Carlo method for efficient selection of recomputations to approximate the computationally intractable elements of the Bayesian approach. The estimate of the confidence to be placed in any classification result provides a sound basis for rejecting some classification results. We present uncertainty envelopes as one way to derive these confidence estimates from the population of recomputed results. We show that a coarse SURE or UNSURE confidence rating based on a threshold of agreed classifications works well, not only in pinpointing those results that are reliable but also in indicating input data problems, such as corrupted or incomplete data, or the application of an inadequate classifier model.
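
    A sketch of the coarse SURE/UNSURE rating described above, under the assumption that each recomputation yields one class label for a given input: the rating is SURE only if a large enough fraction of the recomputed labels agree. The 0.9 threshold and the voting scheme are illustrative choices, not values taken from the paper.

```python
import numpy as np

def sure_unsure(labels: np.ndarray, threshold: float = 0.9) -> str:
    """labels: class predictions from repeated recomputations of one input."""
    values, counts = np.unique(labels, return_counts=True)
    agreement = counts.max() / labels.size   # share voting for the modal class
    return "SURE" if agreement >= threshold else "UNSURE"

print(sure_unsure(np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])))  # SURE (0.9 agree)
print(sure_unsure(np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1])))  # UNSURE
```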

    Predicting the long-term impact of antiretroviral therapy scale-up on population incidence of tuberculosis.

    OBJECTIVE: To investigate the impact of antiretroviral therapy (ART) on long-term population-level tuberculosis (TB) incidence in sub-Saharan Africa. METHODS: We used a mathematical model to consider the effect of different assumptions about life expectancy and TB risk during long-term ART under alternative scenarios for trends in population HIV incidence and ART coverage. RESULTS: All the scenarios we explored predicted that the widespread introduction of ART would initially reduce population-level TB incidence. However, many modelled scenarios projected a rebound in population-level TB incidence after around 20 years. This rebound was predicted to exceed the TB incidence present before ART scale-up if decreases in HIV incidence during the same period were not sufficiently rapid or if the protective effect of ART on TB was not sustained. Nevertheless, most scenarios predicted a reduction in cumulative TB incidence when accompanied by a relative decline in HIV incidence of more than 10% each year. CONCLUSIONS: Despite the short-term benefits of ART scale-up for population TB incidence in sub-Saharan Africa, longer-term projections raise the possibility of a rebound in TB incidence. This highlights the importance of sustaining good adherence and immunologic response to ART and, crucially, the need for effective HIV preventive interventions, including early widespread implementation of ART.
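
    Purely illustrative toy dynamics, not the authors' model: a discrete-time sketch of the rebound mechanism, in which ART lowers the per-capita TB risk of people with HIV but extends survival, so TB incidence can rise again if HIV incidence does not fall fast enough or if ART's protective effect wanes. Every rate below is invented for illustration.

```python
# Toy sketch of the TB rebound mechanism; all parameters are assumptions.
years = 40
hiv_decline = 0.05     # assumed 5% relative annual fall in HIV incidence
tb_risk_art = 0.4      # assumed TB risk on ART relative to untreated HIV
art_waning = 0.02      # assumed annual erosion of ART's protective effect

hiv_untreated, hiv_on_art = 0.05, 0.0   # population fractions (assumed)
for t in range(years):
    new_hiv = 0.004 * (1 - hiv_decline) ** t        # declining HIV incidence
    started_art = 0.15 * hiv_untreated              # assumed ART uptake rate
    hiv_untreated += new_hiv - started_art - 0.08 * hiv_untreated  # mortality
    hiv_on_art += started_art - 0.02 * hiv_on_art   # lower mortality on ART
    rel_risk_art = min(1.0, tb_risk_art + art_waning * t)          # waning
    tb_incidence = 10.0 * hiv_untreated + 10.0 * rel_risk_art * hiv_on_art
    if t % 10 == 0:
        print(f"year {t:2d}: TB incidence index {tb_incidence:.3f}")
```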