
    Rao-Lovric and the Triwizard Point Null Hypothesis Tournament

    The debate over whether the point null hypothesis is ever literally true cannot be resolved, because three competing statistical systems claim ownership of the construct. The local resolution depends on one's acclimatization to a Fisherian, frequentist, or Bayesian orientation (or to an unexpected fourth champion, if decision theory is allowed to compete). Implications of Rao and Lovric's proposed Hodges-Lehmann paradigm are discussed in the Appendix.

    Misconceptions Leading to Choosing the t Test Over the Wilcoxon Mann-Whitney Test for Shift in Location Parameter

    Many misconceptions lead researchers to choose the t test over the Wilcoxon rank-sum test when testing for a shift in location. Examples are given in three groups: (1) false statements; (2) true premises followed by false conclusions; and (3) true statements that are irrelevant to the choice between the t test and the Wilcoxon rank-sum test.
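A minimal simulation sketch (my illustration, not from the abstract) of why the choice matters: under a heavy-tailed distribution with a pure location shift, the Wilcoxon rank-sum (Mann-Whitney) test typically retains more power than the t test. The distribution, shift size, and sample size below are illustrative assumptions.

```python
# Compare rejection rates of the t test and the Wilcoxon rank-sum test
# for a location shift under a heavy-tailed (Student t, df=3) distribution.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
alpha, shift, reps, n = 0.05, 0.5, 2000, 30
t_rej = w_rej = 0
for _ in range(reps):
    x = rng.standard_t(df=3, size=n)          # heavy-tailed control group
    y = rng.standard_t(df=3, size=n) + shift  # same shape, shifted location
    t_rej += ttest_ind(x, y).pvalue < alpha
    w_rej += mannwhitneyu(x, y).pvalue < alpha

print(f"t test power:       {t_rej / reps:.3f}")
print(f"Wilcoxon/M-W power: {w_rej / reps:.3f}")
```

With normal data the two tests perform comparably; the rank test's advantage appears as tails grow heavier.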

    Fermat, Schubert, Einstein, and Behrens-Fisher: The Probable Difference Between Two Means When σ_1^2≠σ_2^2

    The history of the Behrens-Fisher problem and some approximate solutions are reviewed. In outlining relevant statistical hypotheses on the probable difference between two means, the importance of the Behrens-Fisher problem from a theoretical perspective is acknowledged, but it is concluded that the problem is irrelevant for applied research in psychology, education, and related disciplines. The focus is better placed on “shift in location” and, more importantly, “shift in location and change in scale” treatment alternatives.

    The Impact Of Nested Testing On Experiment-Wise Type I Error Rate

    When conducting a statistical test, the initial risk that must be considered is a Type I error, also known as a false positive. The Type I error rate is set by nominal alpha, assuming all underlying conditions of the statistic are met. Experiment-wise Type I error inflation occurs when multiple tests are conducted within a single experiment. There is a growing trend in the social and behavioral sciences toward utilizing nested designs. A Monte Carlo study was conducted using a two-layer design. Five theoretical distributions and four real datasets taken from Micceri (1989) were used, each with five different sample sizes, and with nominal alpha set to 0.05 and 0.01. The tests were conducted both unconditionally and conditionally, and all permutations were executed for 1,000,000 repetitions. It was found that when testing was conducted unconditionally, the experiment-wise Type I error rate inflated from a nominal 0.05 to 0.10 and from 0.01 to 0.02. Conditionally, it is extremely unlikely to ever find results for the factor, because a statistically significant nest is required as a precursor, which leads to greatly reduced power. Hence, caution should be used when interpreting nested designs.
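A minimal sketch (my illustration, not the study's code or its nested design) of the inflation mechanism the abstract reports: with two independent null tests per "experiment" at nominal alpha = 0.05, the chance of at least one false positive approaches 1 − (1 − 0.05)² ≈ 0.0975, matching the roughly 0.05 → 0.10 inflation described above.

```python
# Monte Carlo estimate of experiment-wise Type I error when two
# independent t tests are run per experiment, both under the null.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, reps, n = 0.05, 10000, 30
false_pos = 0
for _ in range(reps):
    # Two independent null comparisons within one "experiment"
    p1 = ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
    p2 = ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
    false_pos += (p1 < alpha) or (p2 < alpha)

print(f"experiment-wise Type I error: {false_pos / reps:.4f}")
```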

    Combining Quantum Mechanical Calculations And A χ^2 Fit In A Potential Energy Function For The CO_2 + O^+ Reaction

    In order to compute a highly accurate statistical rate constant for the CO2 + O+ reaction, it is necessary first to calculate the potential energy of the system at many different geometric configurations. Quantum mechanical calculations are very time-consuming, making it difficult to obtain enough of them to allow accurate interpolation. The number of quantum mechanical calculations required can be significantly reduced by using known relations from classical physics to calculate the energy for configurations where the oxygen is relatively far from the CO2. A chi-squared fit to quantum mechanical points is obtained for these configurations, and the resulting parameters are used to generate an equation for the potential energy. This equation, combined with an interpolated set of quantum mechanical points giving the potential energy for configurations where the molecules are closer together, allows all configurations to be calculated accurately and efficiently.
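An illustrative sketch of the fitting step only (not the paper's code; the functional form, parameter value, and synthetic data are all my assumptions): a long-range potential, here taken to be an ion-induced-dipole form E(r) = −C/r⁴, is fit by chi-squared (least-squares) minimization to a handful of "quantum mechanical" energy points, after which the fitted equation replaces further expensive calculations at large separations.

```python
# Chi-squared fit of an assumed long-range functional form to
# synthetic energy points at large separations.
import numpy as np
from scipy.optimize import curve_fit

def long_range_energy(r, c):
    """Hypothetical ion-induced-dipole form for large separation r."""
    return -c / r**4

# Synthetic stand-ins for quantum mechanical energies (C = 12, small noise)
r = np.linspace(6.0, 20.0, 15)
rng = np.random.default_rng(1)
e = long_range_energy(r, 12.0) + rng.normal(scale=1e-4, size=r.size)

(c_fit,), cov = curve_fit(long_range_energy, r, e, p0=[1.0])
print(f"fitted C = {c_fit:.3f}")
```

The fitted equation can then be evaluated at any far-separation geometry, reserving quantum mechanical calculations for close-approach configurations where classical relations break down.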

    Mathematics in Volume I of Scripta Universitatis

    Immanuel Velikovsky’s journal Scripta Universitatis, edited by Albert Einstein and first published in 1923, played a significant role in the establishment of the library, and hence of the Hebrew University, in Jerusalem. The inaugural issue contained an article by the French mathematician Jacques Hadamard. Excerpts from Velikovsky’s diary pertaining to the rationale for the creation of the journal, and to the interest in Jewish scholars such as Hadamard, are translated here.

    Adversarial attacks hidden in plain sight

    Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high confidence. Several defensive approaches increase robustness against adversarial attacks by demanding attacks of greater magnitude, which leads to visible artifacts. By considering human visual perception, we compose a technique that allows such adversarial attacks to be hidden in regions of high complexity, so that they are imperceptible even to an astute observer. We carry out a user study on classifying adversarially modified images to validate the perceptual quality of our approach, and we find significant evidence for its concealment with regard to human visual perception.