Misinterpreting P-Values in Research

Abstract

The overuse of p-values to dichotomize the results of research studies as either significant or non-significant has drawn some investigators away from the main task of determining the size of the difference between groups and the precision with which it is measured. Presenting the results of research as statements such as “p < 0.05”, “NS”, or as precise p-values has the effect of oversimplifying study findings. Further information regarding the size of the difference between groups is required. Presenting confidence intervals for the difference in effect of, say, two treatments, in addition to p-values, has the distinct advantage of expressing imprecision on the scale of the original measurement. A statistically significant test also does not imply that the observed difference is clinically important or meaningful.
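As an illustration of the point about reporting both the p-value and the size and precision of the effect, here is a minimal sketch (not from the article) that compares two simulated treatment groups and reports the p-value alongside a 95% confidence interval for the difference in means on the original measurement scale. The group names, sample sizes, and measurement units are hypothetical.

    # Minimal sketch: comparing two hypothetical treatment groups.
    # All data below are simulated for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    treatment_a = rng.normal(loc=120.0, scale=15.0, size=40)  # e.g. systolic BP, mmHg
    treatment_b = rng.normal(loc=112.0, scale=15.0, size=40)

    # A p-value alone dichotomizes the result without conveying effect size.
    t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b)

    # 95% confidence interval for the difference in means,
    # expressed on the original measurement scale (here, mmHg).
    diff = treatment_a.mean() - treatment_b.mean()
    n1, n2 = len(treatment_a), len(treatment_b)
    pooled_var = ((n1 - 1) * treatment_a.var(ddof=1) +
                  (n2 - 1) * treatment_b.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
    ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

    print(f"p-value: {p_value:.4f}")
    print(f"difference in means: {diff:.1f} mmHg, 95% CI ({ci_low:.1f}, {ci_high:.1f})")

Reporting the interval in the measurement's own units lets a reader judge whether the plausible range of differences is clinically important, which the p-value by itself cannot convey.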
