    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved online instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues, such as reliability, validity, and efficiency, that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important part of making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
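
    As a rough illustration of the feature-engineering point above, the sketch below (Python) combines generic n-gram features with a few hand-built linguistic pattern detectors in a standard classification pipeline. This is not TagHelper's actual implementation; the regular-expression patterns, feature names, and the cross-validation call are illustrative assumptions.

```python
# Minimal sketch (not TagHelper itself): classifying coded discourse
# segments with surface n-gram features plus hand-built "linguistic
# pattern detectors", in the spirit the article describes.
import re
import numpy as np
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import FunctionTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pattern_features(texts):
    """Hypothetical linguistic pattern detectors as dense count features."""
    patterns = [r"\bwhy\b|\bhow come\b",        # elicitation cue (assumed)
                r"\bI think\b|\bmaybe\b",       # hedged claim (assumed)
                r"\bagree\b|\byou're right\b"]  # agreement marker (assumed)
    return np.array([[len(re.findall(p, t, re.I)) for p in patterns]
                     for t in texts])

clf = Pipeline([
    ("features", FeatureUnion([
        ("ngrams", CountVectorizer(ngram_range=(1, 2))),
        ("patterns", FunctionTransformer(pattern_features)),
    ])),
    ("model", LogisticRegression(max_iter=1000)),
])

# texts: list of discussion segments; labels: human codes from a
# multidimensional CSCL coding scheme (both assumed available).
# scores = cross_val_score(clf, texts, labels, cv=10)  # compare to human reliability
```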

    Data Mining in Online Professional Development Program Evaluation: An Exploratory Case Study

    This case study explored the potential applications of data mining in the educational program evaluation of online professional development workshops for pre-K-12 teachers. Multiple data mining analyses were implemented in combination with traditional evaluation instruments and student outcomes to determine learner engagement and to clarify the relationship between logged activities and learner experiences. Data analysis focused on the following aspects: 1) shared learning characteristics, 2) frequent learning paths, 3) engagement prediction, 4) expectation prediction, 5) workshop satisfaction prediction, and 6) instructor quality prediction. Results indicated that interaction and engagement were important factors in learning outcomes for this workshop. In addition, participants who had online teaching experience could be expected to have a higher engagement level, but prior online learning experience did not show a similar relationship.
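
    A minimal sketch of what an engagement-prediction analysis over workshop logs might look like, assuming per-participant activity counts and a human-coded engagement label. The column names, toy data, and the choice of a decision tree are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: predict a coded engagement level from logged workshop
# activity, using an inspectable decision tree. All data are made up.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-participant log summary
logs = pd.DataFrame({
    "posts":             [12, 3, 25, 7, 18, 1],
    "replies":           [ 8, 1, 14, 4, 10, 0],
    "resources_viewed":  [30, 9, 41, 15, 28, 5],
    "has_taught_online": [ 1, 0,  1, 0,  1, 0],  # prior online teaching
    "engagement":        ["high", "low", "high", "low", "high", "low"],
})

X, y = logs.drop(columns="engagement"), logs["engagement"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
print("holdout accuracy:", tree.score(X_te, y_te))
```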

    Effects of process vs. outcome accountability, responsibility, and identifiability on solution quality

    This study investigated the effects of accountability, responsibility, and identifiability on the quality of solutions generated for an ill-defined problem. Accountable participants provided written justification for their output, covering either the solution generation process (process accountability) or the solution generation outcome (outcome accountability). Participants perceived themselves as either sharing responsibility for solution generation with others (shared responsibility) or being solely responsible for it (sole responsibility). Lastly, participants were either identifiable, such that their responses could be traced to them personally, or anonymous. Solution quality was measured by resolving power, the degree to which a solution resolves conflicting aspects of the problem. All participants were asked to read an ill-defined problem, generate as many solutions as possible, and choose the solution they felt was best. None of the predictions was supported, and a number of unexpected findings emerged. Unaccountable participants and outcome accountability participants each generated higher-quality best solutions than participants in the process accountability conditions. Participants who shared responsibility generated a higher number of resolving alternatives, and a greater proportion of resolving alternatives, than participants who were solely responsible for solution generation. Lastly, an interaction between identifiability and accountability was found for the proportion of resolving alternatives. Post-hoc comparisons revealed that highly identifiable but unaccountable participants generated a higher proportion of resolving solutions than highly identifiable participants in either the outcome or the process accountability condition. Implications for individual and group problem solving and suggestions for future research are discussed.

    Assessing collaborative learning: big data, analytics and university futures

    Traditionally, assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion, by individuals under controlled conditions, of set-piece academic exercises. Recent advances in learning analytics, drawing upon vast sets of digitally stored student activity data, open new practical and epistemic possibilities for assessment and carry the potential to transform higher education. It is becoming practicable to assess the individual and collective performance of team members working on complex projects that closely simulate the professional contexts that graduates will encounter. In addition to academic knowledge, this authentic assessment can include a diverse range of personal qualities and dispositions that are key to the computer-supported cooperative working of professionals in the knowledge economy. This paper explores the implications of such opportunities for the purpose and practices of assessment in higher education, as universities adapt their institutional missions to address 21st-century needs. The paper concludes with a strong recommendation for university leaders to deploy analytics to support and evaluate the collaborative learning of students working in realistic contexts.

    Predicting college student classroom performance with a simple metacomprehension scale

    The present study investigated the relationship between college student metacomprehension and the error of predicted classroom performance. College student metacomprehension was evaluated using the Metacomprehension Scale (MCS) designed by Moore, Zabrucky, and Commander (1997a). Prior to an examination covering course content and administered by the course instructor, students predicted the percentage score they expected to achieve. The predicted score was subtracted from the obtained score, generating an error score. It was hypothesized that the error of predicted classroom performance is a function of student metacomprehension, as measured by the MCS. Results indicated that the MCS was not a reliable predictor of students' prediction error. The factor structure of the MCS was examined to consider why it was not a significant predictor of college student error scores.
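
    The error-score logic is simple enough to state as code. The sketch below, with made-up numbers and assumed variable names, computes error as the obtained score minus the predicted score and runs a simple correlation of MCS scores against that error; it does not reproduce the study's actual analyses (including the factor analysis).

```python
# Minimal sketch of the error-score computation (all values invented).
import numpy as np
from scipy import stats

predicted = np.array([85, 70, 90, 60, 75])  # self-predicted exam % scores
obtained  = np.array([78, 72, 81, 55, 80])  # actual exam % scores
mcs       = np.array([52, 40, 61, 35, 47])  # total MCS scores (assumed scale)

error = obtained - predicted                 # negative = overprediction
r, p = stats.pearsonr(mcs, error)            # does MCS predict the error?
print(f"mean error = {error.mean():+.1f}, r = {r:.2f}, p = {p:.3f}")
```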