
    Two-sample Bayesian Nonparametric Hypothesis Testing

    In this article we describe Bayesian nonparametric procedures for two-sample hypothesis testing. Namely, given two sets of samples $\mathbf{y}^{(1)} \stackrel{iid}{\sim} F^{(1)}$ and $\mathbf{y}^{(2)} \stackrel{iid}{\sim} F^{(2)}$, with $F^{(1)}, F^{(2)}$ unknown, we wish to evaluate the evidence for the null hypothesis $H_0: F^{(1)} \equiv F^{(2)}$ versus the alternative $H_1: F^{(1)} \neq F^{(2)}$. Our method is based upon a nonparametric Pólya tree prior centered either subjectively or using an empirical procedure. We show that the Pólya tree prior leads to an analytic expression for the marginal likelihood under the two hypotheses and hence an explicit measure of the probability of the null, $\mathrm{Pr}(H_0 \mid \{\mathbf{y}^{(1)}, \mathbf{y}^{(2)}\})$. Comment: Published at http://dx.doi.org/10.1214/14-BA914 in Bayesian Analysis (http://projecteuclid.org/euclid.ba) by the International Society of Bayesian Analysis (http://bayesian.org/).
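    As a rough illustration of the mechanics (a minimal sketch, not the authors' implementation), the Python snippet below computes a finite Pólya tree log marginal likelihood over dyadic partitions of (0, 1) with symmetric Beta(c·j², c·j²) branch priors, then compares the pooled-sample marginal (H0) against the product of per-sample marginals (H1). The N(0,1) centering CDF, the truncation depth `levels`, and the concentration `c` are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.special import betaln
    from scipy.stats import norm

    def pt_log_marginal(u, levels=8, c=1.0):
        """Log marginal likelihood of u in (0, 1) under a finite Polya tree
        with canonical dyadic partitions and Beta(c*j^2, c*j^2) branch priors."""
        logml = 0.0
        for j in range(1, levels + 1):
            a = c * j ** 2  # symmetric Beta parameter at depth j
            # counts of observations in each of the 2^j dyadic bins at depth j
            bins = np.minimum((u * 2 ** j).astype(int), 2 ** j - 1)
            counts = np.bincount(bins, minlength=2 ** j)
            # each parent node contributes a Beta-Binomial factor for its split
            n0, n1 = counts[0::2], counts[1::2]
            logml += np.sum(betaln(a + n0, a + n1) - betaln(a, a))
        return logml

    def prob_null(y1, y2, levels=8, c=1.0, prior_h0=0.5):
        """Posterior probability of H0: F1 = F2 (defaults are illustrative)."""
        u1, u2 = norm.cdf(y1), norm.cdf(y2)  # center the tree at a N(0,1) base
        log_h0 = pt_log_marginal(np.concatenate([u1, u2]), levels, c)
        log_h1 = pt_log_marginal(u1, levels, c) + pt_log_marginal(u2, levels, c)
        bf10 = np.exp(log_h1 - log_h0)  # Bayes factor in favour of H1
        return prior_h0 / (prior_h0 + (1 - prior_h0) * bf10)
    ```

    The closed-form marginal likelihood is what makes the null probability available directly, with no MCMC.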

    Accurate and Efficiently Vectorized Sums and Dot Products in Julia

    Version submitted to the Correctness 2019 workshop. This paper presents an efficient, vectorized implementation of various summation and dot product algorithms in the Julia programming language. These implementations are available under an open-source license in the AccurateArithmetic.jl Julia package. Besides naive algorithms, compensated algorithms are implemented: the Kahan-Babuška-Neumaier summation algorithm, and the Ogita-Rump-Oishi simply compensated summation and dot product algorithms. These algorithms effectively double the working precision, producing much more accurate results while incurring little to no overhead, especially for large input vectors. This paper also builds upon this example to make a case for a more widespread use of Julia in the HPC community. Although the vectorization of compensated algorithms is not a particularly simple task, Julia makes it relatively easy and straightforward. It also allows structuring the code in small, composable building blocks, closely matching textbook algorithms yet efficiently compiled.
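    To make the compensation idea concrete, here is a scalar Python rendering of the textbook Kahan-Babuška-Neumaier algorithm (a sketch only; the vectorized Julia implementations live in AccurateArithmetic.jl):

    ```python
    def kbn_sum(xs):
        """Kahan-Babuska-Neumaier summation: carry a running compensation term
        that recovers the low-order bits lost by each floating-point add."""
        s = 0.0  # running sum
        c = 0.0  # running compensation
        for x in xs:
            t = s + x
            if abs(s) >= abs(x):
                c += (s - t) + x  # low-order bits of x were lost in s + x
            else:
                c += (x - t) + s  # low-order bits of s were lost in s + x
            s = t
        return s + c  # apply the accumulated correction once at the end

    print(sum([1.0, 1e100, 1.0, -1e100]))      # naive summation: 0.0
    print(kbn_sum([1.0, 1e100, 1.0, -1e100]))  # compensated: 2.0
    ```

    The handful of extra additions per element is why the overhead is small, and the loop body is branch-light enough that, as the paper argues, it maps well onto SIMD lanes.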

    Less is More: Exploiting the Standard Compiler Optimization Levels for Better Performance and Energy Consumption

    This paper presents the interesting observation that by performing fewer of the optimizations available in a standard compiler optimization level such as -O2, while preserving their original ordering, significant savings can be achieved in both execution time and energy consumption. This observation has been validated on two embedded processors, namely the ARM Cortex-M0 and the ARM Cortex-M3, using two different versions of the LLVM compilation framework: v3.8 and v5.0. Experimental evaluation with 71 embedded benchmarks demonstrated performance gains for at least half of the benchmarks for both processors. An average execution time reduction of 2.4% and 5.3% was achieved across all the benchmarks for the Cortex-M0 and Cortex-M3 processors, respectively, with execution time improvements ranging from 1% up to 90% over -O2. The savings that can be achieved are in the same range as those achieved by state-of-the-art compilation approaches that use iterative compilation or machine learning to select flags or to determine phase orderings that result in more efficient code. In contrast to these time-consuming and expensive-to-apply techniques, our approach only needs to test a limited number of optimization configurations, fewer than 64, to obtain similar or even better savings. Furthermore, our approach can support multi-criteria optimization, as it targets execution time, energy consumption and code size at the same time. Comment: 15 pages, 3 figures, 71 benchmarks used for evaluation
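    A minimal sketch of this kind of experiment, assuming the legacy LLVM pass-manager interface of the v3.8/v5.0 era (the -debug-pass=Arguments listing) and a prefix-truncation reading of "fewer passes, original order preserved"; the paper's actual harness and measurement setup are not shown:

    ```python
    import subprocess

    def o2_passes():
        """List the passes -O2 runs, in order (legacy pass manager assumed)."""
        out = subprocess.run(
            "llvm-as < /dev/null | opt -O2 -disable-output -debug-pass=Arguments",
            shell=True, capture_output=True, text=True).stderr
        flags = []
        for line in out.splitlines():
            if line.startswith("Pass Arguments:"):
                flags += line.split(":", 1)[1].split()
        return flags

    def compile_prefix(bitcode, n):
        """Run only the first n of the -O2 passes, preserving their order."""
        subprocess.run(["opt"] + o2_passes()[:n] + [bitcode, "-o", "out.bc"],
                       check=True)

    # One configuration per prefix length keeps the search small (fewer than
    # 64 compilations here); each out.bc would then be linked and measured
    # for execution time and energy on the target board.
    ```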

    Meta-learning Control Variates: Variance Reduction with Limited Data

    Control variates can be a powerful tool to reduce the variance of Monte Carlo estimators, but constructing effective control variates can be challenging when the number of samples is small. In this paper, we show that when a large number of related integrals need to be computed, it is possible to leverage the similarity between these integration tasks to improve performance even when the number of samples per task is very small. Our approach, called meta-learning CVs (Meta-CVs), can be used for up to hundreds or thousands of tasks. Our empirical assessment indicates that Meta-CVs can lead to significant variance reduction in such settings, and our theoretical analysis establishes general conditions under which Meta-CVs can be successfully trained.
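    For context on the base technique (the classical single-task control variate estimator that Meta-CVs builds on, not the meta-learning procedure itself), a small sketch:

    ```python
    import numpy as np

    def cv_estimate(fx, gx, g_mean):
        """Control variate estimate of E[f(X)] from samples fx = f(X) and
        gx = g(X), where g has known expectation g_mean = E[g(X)]."""
        C = np.cov(fx, gx)                 # 2x2 sample covariance matrix
        beta = C[0, 1] / C[1, 1]           # estimated optimal coefficient
        return np.mean(fx - beta * (gx - g_mean))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100)           # deliberately few samples
    f, g = np.exp(x), x                    # estimate E[e^X] = e^0.5; E[g(X)] = 0
    print(np.mean(f), cv_estimate(f, g, 0.0))
    ```

    With so few samples, estimating beta (or a more flexible control variate) is itself noisy; sharing that estimation across many related tasks is the gap Meta-CVs targets.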