How can we decide whether some new tool or approach is valuable? Do published results of empirical research help? This paper challenges strongly entrenched beliefs and practices in educational research and evaluation. It urges practitioners and researchers to question both results and underlying paradigms. Much published research about education and the impact of technology is pseudo-scientific; it draws unwarranted conclusions based on conceptual blunders, inadequate design, so-called measuring instruments that do not measure, and/or the use of inappropriate statistical tests. An unacceptably high proportion of empirical papers makes at least two of these errors, thus invalidating the reported conclusions.