    Rethinking the Effective Sample Size

    The effective sample size (ESS) is widely used in sample-based simulation methods for assessing the quality of a Monte Carlo approximation of a given distribution and of related integrals. In this paper, we revisit and complete the approximation of the ESS in the specific context of importance sampling (IS). The derivation of this approximation, which we denote $\widehat{\text{ESS}}$, is only partially available in Kong [1992]. This approximation has been widely used over the last 25 years, owing to its simplicity, as a practical rule of thumb in a wide variety of importance sampling methods. However, we show that the multiple assumptions and approximations in the derivation of $\widehat{\text{ESS}}$ make it difficult to regard it even as a reasonable approximation of the ESS. We extend the discussion of the ESS to the multiple importance sampling (MIS) setting, and we display numerical examples. This paper does not cover the use of the ESS for MCMC algorithms.
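    The approximation discussed above is, in its usual form, the inverse of the sum of the squared normalized importance weights, $\widehat{\text{ESS}} = 1/\sum_{i=1}^{N} \bar{w}_i^2$. A minimal Python sketch of how it is computed in practice (the target, proposal, and sample size below are illustrative choices, not taken from the paper):

        import numpy as np

        def ess_hat(log_weights):
            # normalized importance weights (stabilized by subtracting the max)
            w = np.exp(log_weights - log_weights.max())
            w /= w.sum()
            # the rule-of-thumb approximation: 1 / sum of squared normalized weights
            return 1.0 / np.sum(w ** 2)

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.normal(0.0, 3.0, size=n)                 # draws from the proposal q = N(0, 3^2)
        # log p(x) - log q(x) up to additive constants, for target p = N(0, 1)
        logw = -0.5 * x**2 + 0.5 * (x / 3.0)**2 + np.log(3.0)
        print(ess_hat(logw))                             # well below n when q matches p poorly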

    Improving detection probabilities for pests in stored grain

    BACKGROUND: The presence of insects in stored grains is a significant problem for grain farmers, bulk grain handlers and distributors worldwide. Inspection of bulk grain commodities is essential to detect pests and therefore to reduce the risk of their presence in exported goods. It has been well documented that insect pests cluster in response to factors such as microclimatic conditions within bulk grain. Statistical sampling methodologies for grains, however, have typically considered pests and pathogens to be homogeneously distributed throughout grain commodities. In this paper we demonstrate a sampling methodology that accounts for the heterogeneous distribution of insects in bulk grains. RESULTS: We show that failure to account for the heterogeneous distribution of pests may lead to overestimates of the capacity of a sampling program to detect insects in bulk grains. Our results indicate the importance of the proportion of grain that is infested, in addition to the density of pests within the infested grain. We also demonstrate that the probability of detecting pests in bulk grains increases as the number of sub-samples increases, even when the total volume or mass of grain sampled remains constant. CONCLUSION: This study demonstrates the importance of considering an appropriate biological model when developing sampling methodologies for insect pests. Accounting for a heterogeneous distribution of pests leads to a considerable improvement in the detection of pests over traditional sampling models.
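    Both effects reported in the abstract can be reproduced with a toy clustered-infestation model: each sub-sample lands in infested grain with some probability, and only then contains a Poisson number of pests. The parameter values and the Poisson-within-cluster assumption below are illustrative, not the paper's fitted model:

        import numpy as np

        def detection_prob_clustered(k, total_mass, infested_prop, density):
            # k equal sub-samples from a fixed total mass; a sub-sample falls in
            # infested grain with probability infested_prop and then contains
            # Poisson(density * sub-sample mass) pests
            m = total_mass / k
            p_hit = infested_prop * (1.0 - np.exp(-density * m))
            return 1.0 - (1.0 - p_hit) ** k

        def detection_prob_homogeneous(total_mass, infested_prop, density):
            # same total mass, pests assumed spread evenly through the bulk
            return 1.0 - np.exp(-infested_prop * density * total_mass)

        # 3 kg sampled in total: more, smaller sub-samples raise the detection probability,
        # and the homogeneous assumption overstates it for any finite number of sub-samples
        for k in (1, 3, 10, 30):
            print(k, round(detection_prob_clustered(k, 3.0, 0.05, 2.0), 3))
        print("homogeneous:", round(detection_prob_homogeneous(3.0, 0.05, 2.0), 3))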

    Problem-driven scenario generation: an analytical approach for stochastic programs with tail risk measure

    Scenario generation is the construction of a discrete random vector to represent the uncertain parameters of a stochastic program. Most approaches to scenario generation are distribution-driven; that is, they attempt to construct a random vector that captures the uncertainty well in a probabilistic sense. A problem-driven approach, on the other hand, may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. In this paper we propose an analytical approach to problem-driven scenario generation. This approach applies to stochastic programs where a tail risk measure, such as conditional value-at-risk, is applied to a loss function. Since tail risk measures depend only on the upper tail of a distribution, standard methods of scenario generation, which typically spread their scenarios evenly across the support of the random vector, struggle to represent tail risk adequately. Our scenario generation approach works by targeting the construction of scenarios in areas of the distribution corresponding to the tails of the loss distributions. We provide conditions under which our approach is consistent with sampling, and as proof of concept demonstrate how our approach could be applied to two classes of problem, namely network design and portfolio selection. Numerical tests on the portfolio selection problem demonstrate that our approach yields better and more stable solutions than standard Monte Carlo sampling.
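    To see why evenly spread scenarios struggle here: a discrete conditional value-at-risk at level alpha is the probability-weighted average of the worst (1 - alpha) fraction of the losses, so with equal-weight Monte Carlo scenarios only roughly (1 - alpha) * n of them influence the objective. A small sketch under these assumptions (the loss distribution and sample size are illustrative):

        import numpy as np

        def cvar(losses, probs, alpha=0.95):
            # probability-weighted mean of the worst (1 - alpha) share of the losses
            losses = np.asarray(losses, float)
            probs = np.asarray(probs, float)
            order = np.argsort(losses)[::-1]            # worst losses first
            tail, acc, total = 1.0 - alpha, 0.0, 0.0
            for i in order:
                take = min(probs[i], tail - acc)
                total += take * losses[i]
                acc += take
                if acc >= tail - 1e-12:
                    break
            return total / tail

        rng = np.random.default_rng(1)
        losses = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # 200 equal-weight scenarios
        print(cvar(losses, np.full(200, 1 / 200), alpha=0.95))  # driven by only ~10 scenarios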

    Effective Sample Size for Importance Sampling based on discrepancy measures

    The effective sample size (ESS) is an important measure of efficiency of Monte Carlo methods such as Markov chain Monte Carlo (MCMC) and importance sampling (IS) techniques. In the IS context, an approximation $\widehat{\text{ESS}}$ of the theoretical ESS definition is widely applied, involving the inverse of the sum of the squares of the normalized importance weights. This formula, $\widehat{\text{ESS}}$, has become an essential piece within sequential Monte Carlo (SMC) methods, used to assess when a resampling step is advisable. From another perspective, the expression $\widehat{\text{ESS}}$ is related to the Euclidean distance between the probability mass described by the normalized weights and the discrete uniform probability mass function (pmf). In this work, we derive other possible ESS functions based on different discrepancy measures between these two pmfs. Several examples are provided, involving, for instance, the geometric mean of the weights, the discrete entropy (including the perplexity measure, already proposed in the literature) and the Gini coefficient, among others. We list five theoretical requirements which a generic ESS function should satisfy, allowing us to classify different ESS measures. We also compare the most promising ones by means of numerical simulations.
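    Two of the measures named above are easy to state side by side: the standard inverse-sum-of-squares rule and the entropy-based perplexity of the normalized weights. A short Python sketch comparing them on a hand-picked weight vector (the example weights are illustrative):

        import numpy as np

        def ess_inverse_squares(w):
            # standard rule of thumb: 1 / sum of squared normalized weights
            w = w / w.sum()
            return 1.0 / np.sum(w ** 2)

        def ess_perplexity(w):
            # entropy-based alternative: perplexity exp(H(w)) of the normalized weights
            w = w / w.sum()
            w = w[w > 0]
            return np.exp(-np.sum(w * np.log(w)))

        # both equal N for uniform weights and 1 when a single weight dominates,
        # but they rank intermediate weight patterns differently
        w = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
        print(ess_inverse_squares(w), ess_perplexity(w))   # ~3.17 vs ~4.17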

    Bayesian computation via empirical likelihood

    Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the ABC parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The BCel algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
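    As a rough illustration of the core idea of weighting prior draws by an empirical likelihood and monitoring the associated effective sample size, here is a sketch for the simplest case of inferring a population mean. This is a generic empirical-likelihood weighting scheme under invented data and a vague prior, not the BCel algorithm as specified in the paper:

        import numpy as np
        from scipy.optimize import brentq

        def log_el_mean(x, theta):
            # log empirical likelihood ratio for a candidate mean theta:
            # maximize sum(log(n * p_i)) s.t. sum(p_i) = 1 and sum(p_i * (x_i - theta)) = 0
            z = x - theta
            if z.min() >= 0 or z.max() <= 0:
                return -np.inf               # theta outside the convex hull of the data
            # optimal p_i = 1 / (n * (1 + lam * z_i)); lam solves sum z_i / (1 + lam * z_i) = 0
            g = lambda lam: np.sum(z / (1.0 + lam * z))
            lam = brentq(g, -(1.0 - 1e-9) / z.max(), -(1.0 - 1e-9) / z.min())
            return -np.sum(np.log1p(lam * z))

        rng = np.random.default_rng(2)
        data = rng.normal(1.0, 1.0, size=100)        # observed sample (illustrative)
        thetas = rng.normal(0.0, 3.0, size=2000)     # draws from a vague prior on the mean
        logw = np.array([log_el_mean(data, t) for t in thetas])
        w = np.exp(logw - logw.max()); w /= w.sum()  # normalized empirical-likelihood weights
        print("posterior mean estimate:", np.sum(w * thetas))
        print("effective sample size:", 1.0 / np.sum(w ** 2))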

    Approximating Probability Densities by Iterated Laplace Approximations

    The Laplace approximation is an old but frequently used method for approximating integrals in Bayesian calculations. In this paper we develop an extension of the Laplace approximation, applying it iteratively to the residual, i.e., the difference between the current approximation and the true function. The final approximation is thus a linear combination of multivariate normal densities, where the coefficients are chosen to achieve a good fit to the target distribution. We illustrate on real and artificial examples that the proposed procedure is a computationally efficient alternative to current approaches for the approximation of multivariate probability densities. The R package iterLap implementing the methods described in this article is available from the CRAN servers.
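    A heavily simplified one-dimensional sketch of the idea (not the iterLap package itself): repeatedly locate the mode of the residual between the target density and the current mixture, add a normal component there using the local curvature of the log-residual, and refit all mixture coefficients by non-negative least squares on a grid.

        import numpy as np
        from scipy.optimize import nnls
        from scipy.stats import norm

        def iterated_laplace_1d(logf, grid, n_components=4):
            f = np.exp(logf(grid))
            h = grid[1] - grid[0]
            mus, sigmas = [], []
            approx = np.zeros_like(f)
            for _ in range(n_components):
                resid = np.clip(f - approx, 1e-12, None)
                i = int(np.argmax(resid))                  # mode of the residual
                j = min(max(i, 1), len(grid) - 2)
                lr = np.log(resid)
                d2 = (lr[j - 1] - 2 * lr[j] + lr[j + 1]) / h ** 2   # curvature of log-residual
                mus.append(grid[i])
                sigmas.append(1.0 / np.sqrt(max(-d2, 1e-6)))
                # refit all coefficients by non-negative least squares on the grid
                X = np.column_stack([norm.pdf(grid, m, s) for m, s in zip(mus, sigmas)])
                coefs, _ = nnls(X, f)
                approx = X @ coefs
            return mus, sigmas, coefs

        # usage: approximate a bimodal density by a small normal mixture
        logf = lambda x: np.logaddexp(norm.logpdf(x, -2.0, 0.7),
                                      norm.logpdf(x, 1.5, 1.2)) - np.log(2.0)
        grid = np.linspace(-6.0, 6.0, 801)
        print(iterated_laplace_1d(logf, grid))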

    A systematic review of the role of bisphosphonates in metastatic disease

    Objectives: To identify evidence for the role of bisphosphonates in malignancy for the treatment of hypercalcaemia, prevention of skeletal morbidity and use in the adjuvant setting. To perform an economic review of the current literature and model the cost-effectiveness of bisphosphonates in the treatment of hypercalcaemia and prevention of skeletal morbidity. Data sources: Electronic databases (1966-June 2001). Cochrane register. Pharmaceutical companies. Experts in the field. Handsearching of abstracts and leading oncology journals (1999-2001). Review methods: Two independent reviewers assessed studies for inclusion, according to predetermined criteria, and extracted relevant data. Overall event rates were pooled in a meta-analysis; odds ratios (OR) were given with 95% confidence intervals (CI). Where data could not be combined, studies were reported individually and proportions compared using chi-squared analysis. Cost and cost-effectiveness were assessed by a decision analytic model comparing different bisphosphonate regimens for the treatment of hypercalcaemia; Markov models were employed to evaluate the use of bisphosphonates to prevent skeletal-related events (SREs) in patients with breast cancer and multiple myeloma. Results: For acute hypercalcaemia of malignancy, bisphosphonates normalised serum calcium in >70% of patients within 2-6 days. Pamidronate was more effective than control, etidronate, mithramycin and low-dose clodronate, but equal to high-dose clodronate, in achieving normocalcaemia. Pamidronate prolongs (doubles) the median time to relapse compared with clodronate or etidronate. For prevention of skeletal morbidity, bisphosphonates, compared with placebo, significantly reduced the OR for fractures (OR [95% CI]: vertebral, 0.69 [0.57-0.84]; non-vertebral, 0.65 [0.54-0.79]; combined, 0.65 [0.55-0.78]), radiotherapy (0.67 [0.57-0.79]) and hypercalcaemia (0.54 [0.36-0.81]), but not orthopaedic surgery (0.70 [0.46-1.05]) or spinal cord compression (0.71 [0.47-1.08]). However, the reduction in orthopaedic surgery was significant in studies that lasted over a year (0.59 [0.39-0.88]). Bisphosphonates significantly increased the time to first SRE but did not affect survival. Subanalyses were performed for disease groups, drugs and routes of administration. Most evidence supports the use of intravenous aminobisphosphonates. For adjuvant use of bisphosphonates, clodronate, given to patients with primary operable breast cancer and no metastatic disease, significantly reduced the number of patients developing bone metastases. This benefit was not maintained once regular administration had been discontinued. Two trials reported significant survival advantages in the treated groups. Bisphosphonates reduce the number of bone metastases in patients with both early and advanced breast cancer. Bisphosphonates are well tolerated, with a low incidence of side-effects. Economic modelling showed that for acute hypercalcaemia, drugs with the longest cumulative duration of normocalcaemia were the most cost-effective. Zoledronate 4 mg was the most costly, but most cost-effective, treatment. For skeletal morbidity, Markov models estimated that the overall cost of bisphosphonate therapy to prevent an SRE was £250 and £1500 per event for patients with breast cancer and multiple myeloma, respectively. Bisphosphonate treatment is sometimes cost-saving in breast cancer patients where fractures are prevented.
    Conclusions: High-dose aminobisphosphonates are most effective for the treatment of acute hypercalcaemia and delay time to relapse. Bisphosphonates significantly reduce SREs and delay the time to first SRE in patients with bony metastatic disease but do not affect survival. Benefit is demonstrated after administration for at least 6-12 months. The greatest body of evidence supports the use of intravenous aminobisphosphonates. Further evidence is required to support use in the adjuvant setting.

    Standardized or simple effect size: what should be reported?

    It is regarded as best practice for psychologists to report effect size when disseminating quantitative research findings. Reporting of effect size in the psychological literature is patchy – though this may be changing – and when effect sizes are reported it is far from clear that appropriate effect size statistics are being employed. This paper considers the practice of reporting point estimates of standardized effect size and explores factors, such as reliability, range restriction and differences in design, that distort standardized effect size unless suitable corrections are employed. For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding which effect size metric to use and how to report it are outlined. Foremost among these are: (i) a preference for simple effect size over standardized effect size, and (ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers.
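    The two guidelines highlighted above are straightforward to put into practice. A small sketch contrasting a simple effect size (the raw mean difference, reported with a confidence interval) against a standardized one (pooled-SD Cohen's d); the two groups of scores are invented for illustration:

        import numpy as np
        from scipy import stats

        def mean_difference_with_ci(a, b, conf=0.95):
            # simple (unstandardized) effect size: difference in means, with a
            # Welch-type confidence interval in the original units of measurement
            a, b = np.asarray(a, float), np.asarray(b, float)
            va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
            diff, se = a.mean() - b.mean(), np.sqrt(va + vb)
            df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
            t = stats.t.ppf(0.5 + conf / 2, df)
            return diff, (diff - t * se, diff + t * se)

        def cohens_d(a, b):
            # standardized effect size (pooled-SD Cohen's d), shown for contrast
            a, b = np.asarray(a, float), np.asarray(b, float)
            pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                          / (len(a) + len(b) - 2))
            return (a.mean() - b.mean()) / np.sqrt(pooled_var)

        group_a = [34, 41, 38, 45, 36, 40, 43, 39]   # e.g. test scores, invented data
        group_b = [31, 35, 33, 38, 30, 36, 34, 32]
        print(mean_difference_with_ci(group_a, group_b))
        print(cohens_d(group_a, group_b))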