526 research outputs found

    Wetenswaardige wetenschapsjournalistiek [Science journalism worth knowing]


    Één onderzoek is géén onderzoek: het belang van replicaties voor de psychologische wetenschap [One study is no study: the importance of replications for psychological science]

    Recent criticism of the way psychologists analyze their data, as well as cases of scientific fraud, has led both researchers and the general public to question the reliability of psychological research. At the same time, researchers have an excellent tool at their disposal to guarantee the robustness of scientific findings: replication studies. Why do researchers rarely perform replication studies? We explain why p-values for single studies fail to provide any indication of whether observed effects are real or not. Only cumulative science, where important effects are demonstrated repeatedly, can meet the challenge of guaranteeing the reliability of psychological findings. We highlight some novel initiatives, such as the Open Science Framework, that aim to underline the importance of replication studies.
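    The point that a single study's p-value says little about whether an effect is real can be illustrated with a small simulation (a sketch for this listing, not taken from the paper; the effect size, sample sizes, and number of studies are assumed): even when a true effect exists, p-values from individual studies scatter widely, which is exactly why cumulative replication matters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, n_per_study, n_studies = 0.3, 30, 20  # assumed values for the sketch

p_values = []
for _ in range(n_studies):
    # Simulate one study: treatment group has a genuine (small) advantage.
    treatment = rng.normal(true_effect, 1.0, n_per_study)
    control = rng.normal(0.0, 1.0, n_per_study)
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

print("p-values across simulated replications:", np.round(p_values, 3))
print("share significant at alpha = .05:", np.mean(np.array(p_values) < 0.05))
```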

    Are Small Effects the Indispensable Foundation for a Cumulative Psychological Science? A Reply to Götz et al. (2022)

    In the January 2022 issue of Perspectives, Götz et al. argued that small effects are “the indispensable foundation for a cumulative psychological science.” They supported their argument by claiming that (a) psychology, like genetics, consists of complex phenomena explained by additive small effects; (b) psychological-research culture rewards large effects, which means small effects are being ignored; and (c) small effects become meaningful at scale and over time. We rebut these claims with three objections: First, the analogy between genetics and psychology is misleading; second, p values are the main currency for publication in psychology, meaning that any biases in the literature are (currently) caused by pressure to publish statistically significant results and not large effects; and third, claims regarding small effects as important and consequential must be supported by empirical evidence or, at least, a falsifiable line of reasoning. If accepted uncritically, we believe the arguments of Götz et al. could be used as a blanket justification for the importance of any and all “small” effects, thereby undermining best practices in effect-size interpretation. We end with guidance on evaluating effect sizes in relative, not absolute, terms.
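    The closing recommendation, evaluating effect sizes in relative rather than absolute terms, can be sketched as follows (illustrative only; the observed correlation and the benchmark distribution of published effects are invented for this example):

```python
import numpy as np
from scipy import stats

observed_r = 0.10  # hypothetical observed correlation
# Hypothetical benchmark distribution of published effects in the same field.
field_effects = np.array([0.05, 0.08, 0.12, 0.15, 0.18, 0.21, 0.25, 0.30, 0.36, 0.44])

# Relative interpretation: where does the observed effect fall among comparable effects?
percentile = stats.percentileofscore(field_effects, observed_r)
print(f"r = {observed_r} sits at the {percentile:.0f}th percentile of the benchmark effects")
```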

    Diagnosing collaboration in practice-based learning: Equality and intra-individual variability of physical interactivity

    Collaborative problem solving (CPS), as a teaching and learning approach, is considered to have the potential to improve some of the most important skills that prepare students for their future. CPS often differs in its nature, practice, and learning outcomes from other kinds of peer learning approaches, including peer tutoring and cooperation, so it is important to establish what identifies collaboration in problem-solving situations. Identifying indicators of collaboration is a challenging task; however, students' physical interactivity can hold clues to such indicators. In this paper, we investigate two non-verbal indexes of student physical interactivity for interpreting collaboration in practice-based learning environments: equality and intra-individual variability. Our data were generated from twelve groups of three engineering students working on open-ended tasks using a learning analytics system. The results show that highly collaborative groups consist of students who display high and equal amounts of physical interactivity and low and equal amounts of intra-individual variability.
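    As a rough illustration of the two indexes (the formulas below are assumptions for this sketch, not the authors' implementation), equality can be read as how evenly interactivity is spread across group members, and intra-individual variability as how much each student's interactivity fluctuates across time windows:

```python
import numpy as np

# Rows = students in one group, columns = interaction counts per time window (assumed data).
interactivity = np.array([
    [4, 5, 6, 5],
    [5, 4, 5, 6],
    [4, 6, 5, 5],
])

totals = interactivity.sum(axis=1)

# Equality: 1 means perfectly even participation; lower means one student dominates.
equality = totals.min() / totals.max()

# Intra-individual variability: mean coefficient of variation of each student's
# interactivity over time (lower = steadier engagement).
variability = (interactivity.std(axis=1) / interactivity.mean(axis=1)).mean()

print(f"equality = {equality:.2f}, intra-individual variability = {variability:.2f}")
```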

    Machine and human observable differences in groups’ collaborative problem-solving behaviours

    This paper contributes to our understanding of how to design learning analytics to capture and analyse collaborative problem solving (CPS) in practice-based learning activities. Most research in learning analytics focuses on student interaction in digital learning environments, yet most learning and teaching in schools still occurs in physical environments. Investigating student interaction in physical environments can reveal observable differences among students, which can then inform the design and implementation of learning analytics. Here, we present several original methods for identifying such differences in groups' CPS behaviours. Our data set is based on human observation, hand-position (fiducial marker), and head-direction (face recognition) data from eighteen students working in six groups of three. The results show that highly competent CPS groups distribute their time equally across problem-solving and collaboration stages, whereas less competent CPS groups spend most of their time on identifying knowledge and skill deficiencies only. Moreover, the machine-observable data show that highly competent CPS groups make symmetrical contributions to the physical tasks and exhibit high synchrony and individual-accountability values. The findings have significant implications for the design and implementation of future learning analytics systems.
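    One plausible reading of the synchrony measure (the data and the metric below are assumptions, not the paper's pipeline) is the mean pairwise correlation between students' head-direction time series:

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
# Head direction (degrees), sampled once per second for three students (assumed data).
head_direction = rng.normal(0, 20, size=(3, 60))

# Group synchrony: average correlation over all student pairs.
pair_corrs = [
    np.corrcoef(head_direction[i], head_direction[j])[0, 1]
    for i, j in combinations(range(3), 2)
]
print(f"group synchrony (mean pairwise correlation): {np.mean(pair_corrs):.2f}")
```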

    Null hypothesis significance testing: a short tutorial

    Although thoroughly criticized, null hypothesis significance testing (NHST) remains the statistical method of choice for providing evidence for an effect in the biological, biomedical, and social sciences. In this short tutorial, I first summarize the concepts behind the method, distinguishing the test of significance (Fisher) from the test of acceptance (Neyman-Pearson), and point to common interpretation errors regarding the p-value. I then present the related concept of confidence intervals and again point to common interpretation errors. Finally, I discuss what should be reported in which context. The goal is to clarify these concepts in order to avoid interpretation errors and to propose reporting practices.
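    As a minimal sketch with simulated data (not drawn from the tutorial itself), the p-value and the confidence interval answer different questions about the same comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(0.5, 1.0, 40)   # hypothetical treatment scores
group_b = rng.normal(0.0, 1.0, 40)   # hypothetical control scores

# Significance test: how surprising are the data under the null hypothesis?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Confidence interval: which mean differences are compatible with the data?
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"95% CI for the mean difference: [{ci_low:.2f}, {ci_high:.2f}]")
```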