Stability and sensitivity of Learning Analytics based prediction models
Learning analytics seeks to enhance learning processes through systematic measurement of learning-related data and to provide informative feedback to learners and educators. Track data from Learning Management Systems (LMSs) constitute a main data source for learning analytics. This empirical contribution provides an application of Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and LMSs. In two cohorts of a large introductory quantitative methods module, 2,049 students were enrolled in a module based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials. We investigated the predictive power of learning dispositions, outcomes of continuous formative assessments, and other system-generated data in modelling student performance, and their potential to generate informative feedback. Taking a dynamic, longitudinal perspective, computer-assisted formative assessments appear to be the best predictors for detecting underperforming students and for academic performance, while basic LMS data did not substantially predict learning. If timely feedback is crucial, both use-intensity-related track data from e-tutorial systems and learning dispositions are valuable sources for feedback generation.
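As a hedged illustration of the kind of prediction model this abstract describes, the sketch below fits a logistic regression on invented features (a formative-quiz score and an LMS-activity measure) to flag at-risk students. The feature names, synthetic data, and model choice are assumptions for this sketch, not the study's actual variables or pipeline.

```python
import math

# Hypothetical sketch: predict pass/fail from two invented features,
# [mean formative-quiz score (0-1), LMS logins per week (scaled 0-1)].
# Neither the features nor the data come from the study above.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on the cross-entropy loss."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * n, 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict_proba(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic cohort: quiz score separates the classes; LMS activity is noisy,
# loosely echoing the finding that formative assessment predicts best.
X = [[0.90, 0.8], [0.80, 0.3], [0.85, 0.6],   # passed
     [0.30, 0.7], [0.20, 0.2], [0.35, 0.5]]   # underperformed
y = [1, 1, 1, 0, 0, 0]

w, b = train_logistic(X, y)
strong = predict_proba(w, b, [0.90, 0.5])  # high quiz score
weak = predict_proba(w, b, [0.25, 0.5])    # low quiz score, same LMS activity
```

In a longitudinal setting of the kind the abstract describes, such a model would be refit at each point in the course, with features accumulating as new formative-assessment results arrive.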
Practitioner Track Proceedings of the 6th International Learning Analytics & Knowledge Conference (LAK16)
Practitioners spearhead a significant portion of learning analytics, relying on implementation and experimentation rather than on traditional academic research. Both approaches help to improve the state of the art. The LAK conference has created a practitioner track for submissions, which first ran in 2015 as an alternative to the researcher track.
The primary goal of the practitioner track is to share thoughts and findings that stem from learning analytics project implementations. While both large and small implementations are considered, all practitioner track submissions are required to relate to initiatives that are designed for large-scale and/or long-term use (as opposed to research-focused initiatives). Other guidelines include:
• Implementation track record: The project should have been used by an institution or have been deployed on a learning site. There are no hard guidelines about user numbers or how long the project has been running.
• Learning/education related: Submissions have to describe work that addresses learning/academic analytics, either at an educational institution or in an area (such as corporate training, health care or informal learning) where the goal is to improve the learning environment or learning outcomes.
• Institutional involvement: Neither submissions nor presentations have to include a named person from an academic institution. However, all submissions have to include information collected from people who have used the tool or initiative in a learning environment (such as faculty, students, administrators and trainees).
• No sales pitches: While submissions from commercial suppliers are welcome, reviewers do not accept overt (or covert) sales pitches. Reviewers look for evidence that a presentation will take into account challenges faced, problems that have arisen, and/or user feedback that needs to be addressed.
Submissions are limited to 1,200 words, including an abstract, a summary of deployment with end users, and a full description. Most papers in the proceedings are therefore short and often informal, although some authors chose to extend their papers once they had been accepted.
Papers accepted in 2016 fell into two categories.
• Practitioner Presentations: Presentation sessions are designed to focus on the deployment of a single learning analytics tool or initiative.
• Technology Showcase: The Technology Showcase event enables practitioners to demonstrate new and emerging learning analytics technologies that they are piloting or deploying.
Both types of paper are included in these proceedings.
Student profiling in a dispositional learning analytics application using formative assessment
How learning disposition data can help us translate learning feedback from a learning analytics application into actionable learning interventions is the main focus of this empirical study. It extends previous work where the focus was on deriving timely prediction models in a data-rich context, encompassing trace data from learning management systems, formative assessment data, and e-tutorial trace data, as well as learning dispositions. In this same educational context, the current study investigates how the application of cluster analysis based on e-tutorial trace data allows student profiling into different at-risk groups, and how these at-risk groups can be characterized with the help of learning disposition data. It is our conjecture that establishing a chain of antecedent-consequence relationships starting from learning disposition, through student activity in e-tutorials and formative assessment performance, to course performance, adds a crucial dimension to current learning analytics studies: that of profiling students with descriptors that easily lend themselves to the design of educational interventions.
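The profiling step this abstract describes can be sketched, under stated assumptions, as a cluster analysis over e-tutorial trace features. Below is a minimal k-means illustration with invented features (scaled exercise attempts and hint requests); the data, feature choice, and number of clusters are hypothetical and not taken from the study.

```python
import random

# Hedged sketch: k-means clustering of invented e-tutorial trace features
# into student profiles, analogous in spirit to the cluster analysis the
# abstract describes. All numbers here are fabricated for illustration.

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each student joins the nearest centroid
        # (squared Euclidean distance).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign, centroids

# Toy traces: [exercises attempted (scaled), hints requested (scaled)]
traces = [[0.90, 0.10], [0.85, 0.15], [0.80, 0.20],   # active, independent
          [0.20, 0.90], [0.15, 0.85], [0.25, 0.80]]   # low activity, hint-reliant
labels, centers = kmeans(traces, k=2)
```

In a dispositional learning analytics setting, the resulting profiles would then be characterized with learning disposition data, which is what makes the clusters actionable for intervention design.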
Towards a Domain-Specific Comparative Analysis of Data Mining Tools
Advancement in technology has brought widespread adoption and utilization of data mining tools. Successful implementation of data mining requires a careful assessment of the various data mining tools. Although several works have compared data mining tools based on usability, open-source status, integration with statistical analysis, big/small-scale suitability, and data visualization, none of them has suggested tools for particular industry sectors. This paper attempts to provide a comparative study of various data mining tools based on popularity and usage among industry sectors such as business, education, and healthcare. The factors used in the comparison are performance and scalability, data access, data preparation, data exploration and visualization, advanced modeling capabilities, programming language, operating system, interfaces, ease of use, and price/license. The following popular data mining tools are assessed: SAS Enterprise Miner, KNIME, and R for business; Moodle Learning Analytics, Blackboard Analytics, and Canvas for education; and RapidMiner, IBM Watson Health, and Tableau for healthcare. The paper also discusses the critical issues and challenges associated with the adoption of data mining tools. Furthermore, it suggests possible solutions to help various industries choose the data mining tool that best covers their respective data mining requirements.