202 research outputs found

    Stability and sensitivity of Learning Analytics based prediction models

    Learning analytics seeks to enhance learning processes through systematic measurement of learning-related data and to provide informative feedback to learners and educators. Track data from Learning Management Systems (LMSs) constitute a main data source for learning analytics. This empirical contribution provides an application of Buckingham Shum and Deakin Crick's theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and LMSs. In two cohorts of a large introductory quantitative methods module, 2,049 students were enrolled in a module based on principles of blended learning, combining face-to-face problem-based learning sessions with e-tutorials. We investigated the predictive power of learning dispositions, outcomes of continuous formative assessments and other system-generated data in modelling student performance, and their potential to generate informative feedback. From a dynamic, longitudinal perspective, computer-assisted formative assessments appear to be the best predictor for detecting underperforming students and predicting academic performance, while basic LMS data did not substantially predict learning. If timely feedback is crucial, both use-intensity-related track data from e-tutorial systems and learning dispositions are valuable sources for feedback generation.

    The Potential for Student Performance Prediction in Small Cohorts with Minimal Available Attributes

    The measurement of student performance during their progress through university study provides academic leadership with critical information on each student's likelihood of success. Academics have traditionally used their interactions with individual students through class activities and interim assessments to identify those "at risk" of failure or withdrawal. However, modern university environments, offering easy online availability of course material, may see reduced lecture/tutorial attendance, making such identification more challenging. Modern data mining and machine learning techniques provide increasingly accurate predictions of student examination assessment marks, although these approaches have focussed on large student populations and wide ranges of data attributes per student. Many university modules, however, comprise relatively small student cohorts, with institutional protocols limiting the student attributes available for analysis, and very little research attention has been devoted to this area of analysis and prediction. We describe an experiment conducted on a final-year university module cohort of 23 students, where individual student data are limited to lecture/tutorial attendance, virtual learning environment accesses and intermediate assessments. We found potential for predicting individual student interim and final assessment marks in small student cohorts with very limited attributes, and that these predictions could be useful in supporting module leaders in identifying students potentially "at risk". Peer reviewed.

    Understanding academics’ resistance towards (online) student evaluation

    Many higher education institutions and academic staff are still sceptical about the validity and reliability of student evaluation questionnaires, in particular when these evaluations are completed online. One month after a university-wide transition from paper to online evaluation across 629 modules, (perceived) resistance and ambivalence amongst academic staff were examined. A mixed-methods study was conducted amongst 104 academics using survey methods and follow-up semi-structured interviews. Despite a successful "technical" transition (i.e. a response rate of 60% and scores similar to previous evaluations), more than half of respondents reported a negative experience with this transition. The results indicate that the multidimensional nature of ambivalence towards change and the dual nature of student evaluations can influence the effectiveness of organisational transition processes.

    Eliciting students' preferences for the use of their data for learning analytics. A crowdsourcing approach.

    Research on student perspectives of learning analytics suggests that students are generally unaware of the collection and use of their data by their learning institutions, and they are often not involved in decisions about whether and how their data are used. To determine the influence of risks and benefits awareness on students' data use preferences for learning analytics, we designed two interventions: one describing the possible privacy risks of data use for learning analytics and the other describing the possible benefits. These interventions were distributed amongst 447 participants recruited via a crowdsourcing platform. Participants were randomly assigned to one of three experimental groups – risks, benefits, and risks and benefits – and received the corresponding intervention(s). Participants in the control group received a learning analytics dashboard (as did participants in the experimental conditions). Participants indicated the motivation for their data use preferences. Chapter 11 will discuss the implications of our findings in relation to how to better support learning institutions in being more transparent with students about the practice of learning analytics.

    Analysing the correlation between social network analysis measures and performance of students in social network-based engineering education

    Social network-based engineering education (SNEE) is designed and implemented as a model of the Education 3.0 paradigm. SNEE represents a new learning methodology based on the concept of social networks and constitutes an extended model of project-led education. The concept of social networks was applied in a real-life experiment along two dimensions: (1) organizing the education process as a social network-based process; and (2) analyzing the students' interactions in the context of evaluating their learning performance. The objective of this paper is to present a new model for student evaluation based on students' behaviour during the course, and to validate it against the traditional model of student evaluation. The new evaluation model is validated through an analysis of the correlation between social network analysis measures (degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, and average tie strength) and the grades obtained by students (grades for quality of work, grades for volume of work, grades for diversity of work, and final grades) in a social network-based engineering education setting. The main finding is that the correlation results can be used to base student performance evaluation on an analysis of student interactions (behaviour), making the evaluation partially automatic, increasing the objectivity and productivity of teachers, and allowing a more scalable evaluation process. The results also contribute to the behavioural theory of learning performance evaluation.
    More specific findings from the correlation analysis are: (1) the more distinct interactions a student had (degree centrality), and the more frequently the student lay on the interaction paths between other students (betweenness centrality), the better the quality of the work; (2) all five social network measures had a positive and strong correlation with the grade for volume of work and with the final grade. The authors acknowledge the support of the Fundacao para a Ciencia e Tecnologia (FCT), Portugal, through the grants "Projeto Estrategico-UI 252-2011-2012" (reference PEst-OE/EME/UI0252/2011) and "Ph.D. Scholarship Grant" (reference SFRH/BD/85672/2012), and the support of Parallel Planes Lda.