    On the Numerical Accuracy of Spreadsheets

    This paper discusses the numerical precision of five spreadsheets (Calc, Excel, Gnumeric, NeoOffice and Oleo) running on two hardware platforms (i386 and amd64) and three operating systems (Windows Vista, Ubuntu Intrepid and Mac OS Leopard). The methodology consists of checking the number of correct significant digits returned by each spreadsheet when computing the sample mean, standard deviation, first-order autocorrelation, the F statistic in ANOVA tests, linear and nonlinear regression, and distribution functions. The pseudorandom number generators provided by these platforms are also discussed. We conclude that there is no safe choice among the spreadsheets assessed here: all of them fail in nonlinear regression, and none is suited for Monte Carlo experiments.
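    The accuracy metric used in studies of this kind is the number of correct significant digits, usually computed as the base-10 log relative error (LRE) against a certified value. A minimal sketch of that metric in Python (the language choice and the cap at 15 digits are illustrative assumptions, not taken from the paper):

    ```python
    import math

    def correct_digits(computed: float, certified: float) -> float:
        """Correct significant digits via the log relative error:
        LRE = -log10(|computed - certified| / |certified|)."""
        if computed == certified:
            return 15.0  # cap at the double-precision limit (assumption)
        if certified == 0.0:
            return -math.log10(abs(computed))  # fall back to absolute error
        lre = -math.log10(abs(computed - certified) / abs(certified))
        return max(0.0, min(lre, 15.0))

    # A value that agrees with the certified 1.0 to roughly 12 digits
    print(correct_digits(1.0000000000012, 1.0))  # ~11.9
    ```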

    How good are MatLab, Octave and Scilab for Computational Modelling?

    In this article we test the accuracy of three platforms used in computational modelling: MatLab, Octave and Scilab, running on the i386 architecture and three operating systems (Windows, Ubuntu and Mac OS). We submitted them to numerical tests using standard data sets and the functions provided by each platform. A Monte Carlo study was conducted on some of the data sets to verify the stability of the results with respect to small departures from the original input. We propose a set of operations with known results, including the computation of matrix determinants and eigenvalues. We also used data provided by NIST (National Institute of Standards and Technology), a protocol which includes the computation of basic univariate statistics (mean, standard deviation and first-lag correlation), linear regression and extremes of probability distributions. The assessment was made by comparing the results computed by the platforms with certified values, that is, known results, and counting the number of correct significant digits. Comment: Accepted for publication in the Computational and Applied Mathematics journal.
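    The Monte Carlo stability check described above can be sketched as follows (a Python stand-in, not the authors' MatLab/Octave/Scilab code; the perturbation size, statistic and data are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def perturb_and_summarize(x, n_trials=1000, rel_noise=1e-7):
        """Recompute a statistic under small random perturbations of the
        input and report the spread of the results as a stability check."""
        means = np.empty(n_trials)
        for i in range(n_trials):
            noisy = x * (1.0 + rel_noise * rng.standard_normal(x.shape))
            means[i] = noisy.mean()
        return means.mean(), means.std()

    # Illustrative data (not one of the NIST StRD data sets)
    x = np.array([10000001.0, 10000003.0, 10000002.0])
    center, spread = perturb_and_summarize(x)
    print(f"mean under perturbation: {center:.10f} +/- {spread:.2e}")
    ```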

    Database Analysis to Support Nutrient Criteria Development (Phase II)

    The intent of this publication of the Arkansas Water Resources Center is to provide a location where final reports on water research submitted to funding agencies can be archived. The Texas Commission on Environmental Quality (TCEQ) contracted with University of Arkansas researchers for a multi-year project titled “Database Analysis to Support Nutrient Criteria Development”. This publication covers the second of three phases of that project and maintains the original format of the report as submitted to TCEQ. This report can be cited either as an AWRC publication (see below) or directly as the final report to TCEQ.

    Building-in quality rather than assessing quality afterwards: a technological solution to ensuring computational accuracy in learning materials

    Quality encompasses a very broad range of ideas in learning materials, yet the accuracy of the content is often overlooked as a measure of quality. Various aspects of accuracy are briefly considered, and the issue of computational accuracy is then examined further. When learning materials are produced containing the results of mathematical computations, accuracy is essential: how can the results of these computations be known to be correct? A solution is to embed the instructions for performing the calculations in the materials themselves, and let the computer calculate each result and place it in the text. In this way, quality is built into the learning materials by design, not evaluated after the event. This is accomplished using the ideas of literate programming, applied to the learning-materials context. A small example demonstrates how remarkably easy the ideas are to apply in practice using the appropriate technology. Given that the technology is available and easy to use, it would appear imperative that the approach discussed be adopted to improve quality in learning materials containing computational results.
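    A minimal stand-in for the idea in Python (the paper demonstrates it with literate programming tools; this sketch only shows the principle of computing a value and injecting it into the prose rather than typing it by hand):

    ```python
    from statistics import mean, stdev

    # Source data for a worked example in the learning materials
    scores = [12.1, 14.3, 13.8, 12.9, 15.0]

    # The computations live alongside the text, so the rendered document
    # always shows freshly computed values, never hand-typed ones.
    template = (
        "The sample of {n} scores has mean {m:.2f} "
        "and standard deviation {s:.2f}."
    )
    print(template.format(n=len(scores), m=mean(scores), s=stdev(scores)))
    ```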

    Statistical Tests, Tests of Significance, and Tests of a Hypothesis Using Excel

    Microsoft’s spreadsheet program Excel has many statistical functions and routines. Over the years there have been criticisms of the inaccuracies of these functions and routines (see McCullough 1998, 1999). This article reviews some of the statistical methods used to test for differences between two samples. In practice, the analysis is done by a software program, often without the user knowing which method is actually applied: the user has to select the method and its variations without full knowledge of just what calculations are performed, and there is usually no convenient trace back to textbook explanations. This article describes the Excel algorithm and gives textbook-related explanations to supplement Microsoft’s Help documentation.
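    As an illustration of the kind of textbook cross-check the article advocates, here is a Python sketch of the pooled-variance two-sample t test (the data are invented; scipy serves only as an independent reference value, and nothing here is a claim about Excel's internals):

    ```python
    import math
    from scipy import stats

    a = [23.1, 21.4, 25.0, 22.8, 24.3]
    b = [20.2, 19.8, 22.5, 21.1, 20.9]

    # Textbook pooled-variance t statistic for two independent samples
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sa2 = sum((x - ma) ** 2 for x in a) / (na - 1)
    sb2 = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * sa2 + (nb - 1) * sb2) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

    # Library value for cross-checking the hand calculation
    t_ref, p_ref = stats.ttest_ind(a, b, equal_var=True)
    print(f"hand t = {t:.6f}, scipy t = {t_ref:.6f}, p = {p_ref:.4f}")
    ```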

    Statistical power analysis with Microsoft Excel: normal tests for one or two means as a prelude to using non-central distributions to calculate power

    This article presents statistical power analysis (SPA) based on the normal distribution using Excel, adopting both textbook and SPA approaches. The objective is to present the latter in a comparative way, within a framework familiar to textbook-level readers, as a first step towards understanding SPA with other distributions. The analysis focuses on the case of the equality of the means of two populations with equal variances, for independent samples of the same size. This is the situation adopted as case 0 by Cohen (1988), a pioneer in the subject, to develop his set of tables, so the present article can be seen as an introduction to Cohen's methodology applied to tests based on samples from normal populations. We also discuss how to extend the calculation to cases with other characteristics (cases 1 to 4), similarly to what Cohen proposes, and briefly review the advantages and shortcomings of Excel. We teach mainly in the area of business and economics, which determines the scope of our analysis.
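    For Cohen's case 0 (two independent samples of equal size n, equal known variances, two-sided test), the normal-distribution power calculation the article builds in Excel can be sketched in Python as follows (an illustrative stand-in, not the authors' spreadsheet):

    ```python
    from scipy.stats import norm

    def power_two_means(d: float, n: float, alpha: float = 0.05) -> float:
        """Power of a two-sided normal (z) test for the equality of two
        means, independent samples of equal size n, equal known variances;
        d is Cohen's standardized effect size (mu1 - mu2) / sigma."""
        z_crit = norm.ppf(1 - alpha / 2)
        delta = d * (n / 2) ** 0.5  # noncentrality of the test statistic
        return norm.cdf(delta - z_crit) + norm.cdf(-delta - z_crit)

    # Cohen's "medium" effect d = 0.5 with n = 64 per group
    print(f"power = {power_two_means(0.5, 64):.3f}")  # ~0.80
    ```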