Misinterpreting p-values in research

Abstract

The overuse of p-values to dichotomize the results of research studies as either significant or non-significant has distracted some investigators from the main task of determining the size of the difference between groups and the precision with which it is measured. Presenting the results of research as statements such as “p < 0.05”, “NS”, or as exact p-values oversimplifies study findings: further information on the size of the difference between groups is required. Presenting confidence intervals for the difference in effect of, say, two treatments, in addition to p-values, has the distinct advantage of expressing imprecision on the scale of the original measurement. A statistically significant test also does not imply that the observed difference is clinically important or meaningful, and these two meanings are often confused.
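The abstract's recommendation can be illustrated with a minimal sketch: rather than reporting only a p-value, compute the difference in means between two groups and its confidence interval, so the result is expressed on the original measurement scale. The data below are hypothetical, and the interval uses a large-sample normal approximation (z = 1.96) rather than the t-distribution, purely to keep the example self-contained.

```python
import math
import statistics

def diff_with_ci(a, b, z=1.96):
    """Mean difference between two groups with an approximate 95% CI.

    Uses the normal approximation (z = 1.96); with small samples a
    t-based critical value would give a slightly wider interval.
    """
    d = statistics.mean(a) - statistics.mean(b)
    # Standard error of the difference, allowing unequal variances
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return d, (d - z * se, d + z * se)

# Hypothetical blood-pressure reductions (mmHg) under two treatments
treatment = [12.1, 9.8, 11.4, 10.9, 12.7, 11.0, 10.2, 11.8]
control   = [9.5, 10.1, 8.7, 9.9, 10.4, 9.2, 10.0, 9.6]

d, (lo, hi) = diff_with_ci(treatment, control)
print(f"difference = {d:.2f} mmHg, approx. 95% CI ({lo:.2f}, {hi:.2f})")
```

A report of "difference 1.6 mmHg, 95% CI 0.9 to 2.2" conveys both the size of the effect and its precision; whether that range is clinically important is a separate judgement that "p < 0.05" alone cannot settle.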

    This paper was published in White Rose Research Online.