
    The effect of functional roles on group efficiency

    The usefulness of ‘roles’ as a pedagogical approach to support small-group performance is often asserted; however, their effect is rarely assessed empirically. Roles promote cohesion and responsibility and decrease so-called ‘process losses’ caused by coordination demands. In addition, roles can increase awareness of intra-group interaction. In this article, the effect of functional roles on group performance, efficiency and collaboration during computer-supported collaborative learning (CSCL) was investigated with questionnaires and quantitative content analysis of e-mail communication. A comparison of thirty-three questionnaire observations, distributed over ten groups in two research conditions, role (n = 5, N = 14) and non-role (n = 5, N = 19), revealed no main effect for performance (grade). A latent variable was identified and interpreted as ‘perceived group efficiency’ (PGE). Multilevel modelling (MLM) yielded a positive marginal effect of PGE: groups in the role condition appear to be more aware of their efficiency than groups in the non-role condition, regardless of whether the group performs well or poorly. Content analysis reveals that students in the role condition contribute more ‘task content’ focussed statements. This is, however, not, as hypothesised, because roles decrease coordination and thereby increase content-focussed statements; in fact, roles appear to stimulate coordination, and the amount of ‘task content’ focussed statements increases simultaneously.
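
    The multilevel modelling mentioned above treats students as nested within groups. As a rough illustration of that kind of analysis in Python with statsmodels (not the authors' actual model or data), the sketch below simulates a few students per group and fits a random-intercept model; the variable names (pge, condition, group) are placeholders.

```python
# Minimal sketch of a two-level (students-within-groups) model, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
groups = [f"g{i}" for i in range(10)]           # ten groups, mirroring the study design
conditions = ["role"] * 5 + ["non-role"] * 5    # five groups per condition
rows = []
for g, cond in zip(groups, conditions):
    group_dev = rng.normal(0, 0.4)              # shared group-level deviation
    for _ in range(3):                          # a few students per group
        pge = 3.5 + (0.3 if cond == "role" else 0.0) + group_dev + rng.normal(0, 0.5)
        rows.append({"group": g, "condition": cond, "pge": pge})
df = pd.DataFrame(rows)

# A random intercept per group captures the dependency among members of the same
# group; the fixed effect of 'condition' is the role vs non-role contrast on PGE.
result = smf.mixedlm("pge ~ condition", data=df, groups=df["group"]).fit()
print(result.summary())
```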

    The effect of functional roles on perceived group efficiency during computer-supported collaborative learning

    In this article, the effect of functional roles on group performance and collaboration during computer-supported collaborative learning (CSCL) is investigated. In particular, the need for triangulating multiple methods is emphasised: Likert-scale evaluation questions, quantitative content analysis of e-mail communication and qualitative analysis of open-ended questions were used. A comparison of forty-one questionnaire observations, distributed over thirteen groups in two research conditions – groups with prescribed functional roles (n = 7, N = 18) and nonrole groups (n = 6, N = 23) – revealed no main effect for performance (grade). Principal axis factoring of the Likert scales revealed a latent variable that was interpreted as perceived group efficiency (PGE). Multilevel modelling (MLM) yielded a positive marginal effect of PGE: most groups in the role condition report a higher degree of PGE than nonrole groups. Content analysis of e-mail communication of all groups in both conditions (role n = 7, N = 25; nonrole n = 6, N = 26) revealed that students in role groups contribute more ‘coordination’ focussed statements. Finally, results from cross-case matrices of student responses to open-ended questions support the observed marginal effect that most role groups report a higher degree of perceived group efficiency than nonrole groups.
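
    The principal axis factoring step can be sketched directly from an item correlation matrix. The implementation below follows the standard iterated procedure (a reduced correlation matrix with communalities on the diagonal); the four-item correlation matrix is invented for illustration and is not the authors' questionnaire data.

```python
# Illustrative iterated principal axis factoring on a correlation matrix; the
# four-item correlation matrix is invented, not the study's questionnaire data.
import numpy as np

def principal_axis_factoring(R, n_factors=1, n_iter=100, tol=1e-6):
    """Extract `n_factors` factors from correlation matrix R by iterated PAF."""
    R = np.asarray(R, dtype=float)
    # Initial communalities: squared multiple correlations, 1 - 1/diag(R^-1).
    communalities = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    loadings = None
    for _ in range(n_iter):
        reduced = R.copy()
        np.fill_diagonal(reduced, communalities)      # reduced correlation matrix
        eigvals, eigvecs = np.linalg.eigh(reduced)
        idx = np.argsort(eigvals)[::-1][:n_factors]   # largest eigenvalues first
        loadings = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
        new_comm = np.sum(loadings ** 2, axis=1)
        if np.max(np.abs(new_comm - communalities)) < tol:
            break
        communalities = new_comm
    return loadings

# Hypothetical correlations among four Likert items of a single-factor scale.
R = np.array([
    [1.00, 0.55, 0.60, 0.50],
    [0.55, 1.00, 0.58, 0.52],
    [0.60, 0.58, 1.00, 0.57],
    [0.50, 0.52, 0.57, 1.00],
])
print(principal_axis_factoring(R, n_factors=1))   # loadings on the single PGE-like factor
```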

    The future of student self-assessment: a review of known unknowns and potential directions

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10648-015-9350-2. This paper reviews current known issues in student self-assessment (SSA) and identifies five topics that need further research: (1) SSA typologies, (2) accuracy, (3) the role of expertise, (4) SSA and teacher/curricular expectations, and (5) effects of SSA for different students. Five SSA typologies were identified, showing that there are different conceptions of the SSA components but that the field still uses SSA quite uniformly. A significant amount of research has been devoted to SSA accuracy, and there is a great deal we know about it. Factors that influence accuracy and implications for teaching are examined, with the consideration that students’ expertise on the task at hand might be an important prerequisite for accurate self-assessment. Additionally, the idea that SSA should also consider students’ expectations about their learning is reflected upon. Finally, we explored how SSA works for different types of students and the challenges of helping lower performers. This paper sheds light on the SSA research needed to address the known unknowns in this field. First-author funding via the Ramón y Cajal programme of the Spanish Ministerio de Economía y Competitividad (Referencia: RYC-2013-13469) is acknowledged.

    Teacher AfL perceptions and feedback practices in mathematics education among secondary schools in Tanzania

    Feedback that monitors and scaffolds student learning has been shown to support learning. This study investigates the effect of mathematics teachers' perceptions of Formative Assessment (FA) and Assessment for Learning (AfL), and of their conceptions of assessment, on the quality of their feedback practices. The study was conducted in 48 secondary schools in Tanzania with 54 experienced mathematics teachers teaching Grade 11 (Form Three in the Tanzanian system). Validated questionnaires were combined with interviews to investigate mathematics teachers' perceptions, conceptions, and feedback practices. Data were analysed with structural equation modeling and content analysis techniques. Results from the structural equation model indicated that mathematics teachers' perceptions of FA and AfL and their conceptions of assessment purposes positively predicted the quality of their feedback practices. Interview results illustrated that mathematics teachers used their students' assessment information for both formative and summative purposes. Future interventions for improving the quality of mathematics teachers' feedback practices are proposed.
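
    A structural equation model of the kind reported here could be specified, for instance, with the semopy package for Python (assuming it is installed; any SEM tool with lavaan-style syntax would serve). The item names (afl1–afl3, fb1–fb3), the observed predictor 'conception', and the data file are hypothetical placeholders, not the study's instruments.

```python
# Hedged sketch of a measurement + structural model; every variable name is hypothetical.
import pandas as pd
import semopy

# Latent AfL perception and feedback quality measured by three items each;
# feedback quality regressed on perception and on an observed conception score.
model_desc = """
AfL_perception =~ afl1 + afl2 + afl3
Feedback_quality =~ fb1 + fb2 + fb3
Feedback_quality ~ AfL_perception + conception
"""

data = pd.read_csv("teacher_survey.csv")   # hypothetical file, one row per teacher
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                     # parameter estimates and standard errors
```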

    Content analysis: What are they talking about?

    Quantitative content analysis is increasingly used to move beyond surface-level analyses in Computer-Supported Collaborative Learning (e.g., counting messages), but critical reflection on accepted practice has generally not been reported. A review of CSCL conference proceedings revealed a general vagueness in definitions of units of analysis. In general, arguments for choosing a unit were lacking, and decisions made while developing the content analysis procedures were not made explicit. In this article, it is illustrated that the currently accepted practices concerning the ‘unit of meaning’ are not generally applicable to quantitative content analysis of electronic communication. Such analysis is affected by ‘unit boundary overlap’ and by contextual constraints having to do with the technology used. The analysis of e-mail communication required a different unit of analysis and segmentation procedure. This procedure proved to be reliable, and the subsequent coding of these units for quantitative analysis yielded satisfactory reliabilities. These findings lead to implications and recommendations for current content analysis practice in CSCL research.
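
    To make the segmentation and reliability issues concrete, a naive sketch is shown below: a message is split into sentence-level candidate units, and inter-coder agreement on the coding of those units is checked with Cohen's kappa. This is not the segmentation procedure developed in the article (which was designed specifically for e-mail communication); the message and the two coders' labels are invented.

```python
# Naive illustration only: sentence-level segmentation plus an inter-coder
# reliability check; the message and both coders' labels are invented.
import re
from sklearn.metrics import cohen_kappa_score

def segment(message: str) -> list[str]:
    """Split a message into candidate units at sentence boundaries."""
    units = re.split(r"(?<=[.!?])\s+", message.strip())
    return [u for u in units if u]

email = "I uploaded the draft. Can you check section 2? I think our planning slipped."
units = segment(email)
print(units)

# Codes assigned to the same three units by two coders
# ('task' = task content, 'coord' = coordination).
coder_a = ["task", "task", "coord"]
coder_b = ["task", "coord", "coord"]
print("Cohen's kappa:", cohen_kappa_score(coder_a, coder_b))
```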

    Designing electronic collaborative learning environments

    Electronic collaborative learning environments for learning and working are in vogue. Designers design them according to their own constructivist interpretations of what collaborative learning is and what it should achieve. Educators employ them with different educational approaches and in diverse situations to achieve different ends. Students use them, sometimes very enthusiastically, but often in a perfunctory way. Finally, researchers study them and—as is usually the case when apples and oranges are compared—find no conclusive evidence as to whether or not they work, where they do or do not work, when they do or do not work and, most importantly, why they do or do not work. This contribution presents an affordance framework for such collaborative learning environments; an interaction design procedure for designing, developing, and implementing them; and an educational affordance approach to the use of tasks in those environments. It also presents the results of three projects dealing with these three issues.

    Four patients with a history of acute exacerbations of COPD: implementing the CHEST/Canadian Thoracic Society guidelines for preventing exacerbations

    This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0

    Multilevel analysis in CSCL Research

    Janssen, J., Erkens, G., Kirschner, P. A., & Kanselaar, G. (2011). Multilevel analysis in CSCL research. In S. Puntambekar, G. Erkens, & C. Hmelo-Silver (Eds.), Analyzing interactions in CSCL: Methods, approaches and issues (pp. 187-205). New York: Springer. doi:10.1007/978-1-4419-7710-6_9. CSCL researchers are often interested in the processes that unfold between learners in online learning environments and the outcomes that stem from these interactions. However, studying collaborative learning processes is not an easy task. Researchers have to make quite a few methodological decisions, such as how to study the collaborative process itself (e.g., by developing a coding scheme or a questionnaire), what the appropriate unit of analysis is (e.g., the individual or the group), and which statistical technique to use (e.g., descriptive statistics, analysis of variance, correlation analysis). Recently, several researchers have turned to multilevel analysis (MLA) to answer their research questions (e.g., Cress, 2008; De Wever, Van Keer, Schellens, & Valcke, 2007; Dewiyanti, Brand-Gruwel, Jochems, & Broers, 2007; Schellens, Van Keer, & Valcke, 2005; Strijbos, Martens, Jochems, & Broers, 2004; Stylianou-Georgiou, Papanastasiou, & Puntambekar, chapter #). However, CSCL studies that apply MLA remain relatively scarce. Instead, many CSCL researchers continue to use ‘traditional’ statistical techniques (e.g., analysis of variance, regression analysis), although these techniques may not be appropriate for what is being studied. An important aim of this chapter is therefore to explain why MLA is often necessary to correctly answer the questions CSCL researchers address. Furthermore, we wish to highlight the consequences of failing to use MLA when it is called for, using data from our own studies.
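
    One way to see why MLA matters, in the spirit of the chapter's argument, is to check how much of the outcome variance sits at the group level. The sketch below uses simulated data and statsmodels (an illustrative choice, not the authors' analysis): it fits an intercept-only model with a random group intercept and derives the intraclass correlation (ICC). A non-trivial ICC signals that treating students as independent observations, as ordinary ANOVA or regression does, is not appropriate.

```python
# Simulated illustration: how much outcome variance is at the group level?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per_group = 20, 4
group_effects = rng.normal(0, 1.0, n_groups)    # between-group differences
df = pd.DataFrame({
    "group": np.repeat(np.arange(n_groups), n_per_group),
    "score": np.repeat(group_effects, n_per_group)
             + rng.normal(0, 1.5, n_groups * n_per_group),
})

# Intercept-only model with a random intercept per group.
null_model = smf.mixedlm("score ~ 1", data=df, groups=df["group"]).fit()
between = null_model.cov_re.iloc[0, 0]          # group-level variance
within = null_model.scale                       # residual (student-level) variance
print("Intraclass correlation:", between / (between + within))
```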