Teachers’ data literacy for learning analytics: a central predictor for digital data use in upper secondary schools
Since schools increasingly use digital platforms that provide educational data in digital formats, teachers' data use and data literacy have become a focus of educational research. One main challenge is whether teachers use digital data for pedagogical purposes, such as informing their teaching. We conducted a survey study with N = 1059 teachers in upper secondary schools in Switzerland to investigate teachers' digital data use and related factors such as the technologies available in schools. Descriptive analysis of the survey responses indicated that although more than half of Swiss upper-secondary teachers agreed that they had data technologies at their disposal, only one-third showed a clear tendency to use these technologies, and only one-quarter felt confident about improving their teaching in this way. In-depth multilevel modeling showed that teachers' use of digital data could be predicted by differences between schools, teachers' positive beliefs towards digital technologies (will), self-assessed data literacy (skill), and access to data technologies (tool), as well as by general factors such as the frequency of students' digital device use in lessons. Teacher characteristics, such as age and teaching experience, were minor predictors. These results show that the provision of data technologies needs to be supplemented with efforts to strengthen teacher data literacy and data use in schools.
Analysing performance of first year engineering students
Many students in the engineering disciplines do not complete their higher education degree and drop out. This problem is especially serious for first-year university students. In this paper, we analyse how students earn the credits required for successful completion of the first study year. Using the example of a European technical university with traditional classroom-based education, we identify three groups of students: those who pass, those who earn only enough credits to stay in the programme, and those who fail. Important patterns can already be found at the end of the first semester. We present a simple algorithm that identifies students who may benefit from early additional support, which would increase their chances of progressing to the second year and improve retention for the university. The results are evaluated over four consecutive academic years. Data from 2013/14 and 2014/15 were used to develop and verify the prediction model. In 2015/16 and 2016/17 the model was applied to predict at-risk students; university tutors then intervened with additional support, and a significant improvement was achieved.
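The three-way grouping described above can be sketched as a simple rule over first-semester credits. A minimal illustration, assuming hypothetical ECTS-style thresholds (the paper's actual cut-offs are not given here):

```python
# Minimal sketch of the three-way grouping by first-semester credits.
# The thresholds are hypothetical, assuming an ECTS-style system:
# ~30 credits means on track, ~15 the minimum to stay in the programme.

def classify_student(credits_earned: int,
                     pass_threshold: int = 30,
                     stay_threshold: int = 15) -> str:
    """Assign a student to one of the three groups described in the study."""
    if credits_earned >= pass_threshold:
        return "pass"       # on track for the second year
    if credits_earned >= stay_threshold:
        return "at_risk"    # enough to stay enrolled; candidate for intervention
    return "fail"           # likely drop-out without support

# Hypothetical cohort: credits earned at the end of the first semester
cohort = {"s01": 32, "s02": 18, "s03": 7, "s04": 30, "s05": 15}
groups = {sid: classify_student(c) for sid, c in cohort.items()}
at_risk = [sid for sid, g in groups.items() if g == "at_risk"]
print(groups)
print("flag for early support:", at_risk)
```

In a real deployment the thresholds would come from institutional progression rules rather than being fixed constants.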
First-Year Engineering Students’ Strategies for Taking Exams
Student drop-out is one of the most critical issues that higher education institutions face nowadays. The problem is especially significant for first-year students, who are at particular risk of failing due to the transition from the different educational setting of high school. Thanks to the massive growth of Information and Communication Technologies, universities have started to collect vast amounts of study- and student-related data. Teachers can use the collected information to support students at risk of failing their studies. At the Faculty of Mechanical Engineering, Czech Technical University in Prague, the situation is no different, and first-year students are a vulnerable group, as at other institutions. The most critical part of the first year is the first exam period, and one of the essential skills students need to develop is planning for exams. The presented research aims to explore the exam-taking patterns of first-year students. Data from 361 first-year students have been analysed and used to construct "layered" Markov chain probabilistic graphs. The graphs have revealed interesting behavioural patterns within the groups of successful and unsuccessful students.
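The Markov chain graphs rest on estimating transition probabilities between consecutive exam outcomes. A minimal sketch, with hypothetical outcome labels (the paper's actual state definitions and layering may differ):

```python
from collections import defaultdict

def transition_probabilities(sequences):
    """Estimate Markov transition probabilities from observed outcome sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive pairs = transitions
            counts[a][b] += 1
    return {
        state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for state, nexts in counts.items()
    }

# Each sequence is one student's chronological exam outcomes (invented labels):
# P = passed on first attempt, R = passed on retake, F = failed.
students = [
    ["P", "P", "R", "P"],
    ["P", "F", "F"],
    ["R", "P", "P"],
    ["F", "F"],
]
probs = transition_probabilities(students)
print(probs["P"])   # where students tend to go after a first-attempt pass
```

The edge weights of such a graph are exactly these conditional probabilities; "layering" the chain by semester week or exam round adds a time axis to the same construction.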
Visualisation of key splitting milestones to support interventions
The paper presents an approach to help staff responsible for running courses by identifying key milestones in the educational process where the paths of successful and unsuccessful students start to split. By identifying these milestones in already-finished courses, this information can be used to plan interventions in the next runs. This is achieved by finding the earliest time at which differences in the behaviour or key performance metrics of unsuccessful students start to become significant. We demonstrate this approach in two case studies, the first focused on course-level analysis and the second on a whole academic year. This suggests its generic nature and possible applicability in various Learning Analytics scenarios.
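One plain way to find the earliest milestone at which two groups' metrics differ significantly is a per-milestone two-proportion z-test. This is an illustrative sketch under that assumption, not necessarily the statistical test the paper uses, and the weekly engagement rates are invented:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic and two-sided p-value."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def first_split_week(rates_pass, rates_fail, n_pass, n_fail, alpha=0.05):
    """Return the earliest week where the two groups' rates differ
    significantly -- a candidate intervention milestone."""
    for week, (a, b) in enumerate(zip(rates_pass, rates_fail), start=1):
        _, p_value = two_prop_z(a, n_pass, b, n_fail)
        if p_value < alpha:
            return week
    return None

# Hypothetical weekly VLE engagement rates, 200 students per group
passing = [0.90, 0.88, 0.85, 0.86, 0.84]
failing = [0.88, 0.83, 0.70, 0.55, 0.40]
print(first_split_week(passing, failing, 200, 200))   # earliest significant split
```

Anything before the returned week is too early to distinguish the groups reliably; anything after it wastes intervention time, which is the trade-off the milestone visualisation makes visible.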
Student risk identification learning model using machine learning approach
Several challenges are associated with online learning systems, the most important of which is the lack of student motivation across course materials and activities. Further, it is important to identify students who are at risk of failing to complete a course on time. Existing models apply machine learning approaches to this problem, but they are inefficient: they are trained on legacy data, fail to address class-imbalance issues in both training and testing, and do not classify new courses well. To overcome these research challenges, this work presents a novel design that trains the learning model to identify risk using current courses. Further, we present an XGBoost classification algorithm that can classify risk for new courses. Experiments are conducted to evaluate the performance of the proposed model. The results show that the proposed model attains significant improvements over state-of-the-art models in terms of ROC, F-measure, precision, and recall.
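Under class imbalance, the precision, recall, and F-measure in such an evaluation are computed for the minority ("at risk") class, since plain accuracy is misleading when most students complete on time. A stdlib sketch of these metrics (the toy labels are invented for illustration, and this is not the paper's XGBoost pipeline):

```python
def prf(y_true, y_pred, positive="at_risk"):
    """Precision, recall and F-measure for the positive ('at risk') class --
    the metrics that matter under class imbalance, where accuracy misleads."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Imbalanced toy cohort: 3 at-risk students among 10
truth = ["ok"] * 7 + ["at_risk"] * 3
pred  = ["ok"] * 6 + ["at_risk"] + ["at_risk", "at_risk", "ok"]
print(prf(truth, pred))
```

A classifier that predicted "ok" for everyone would score 70% accuracy on this cohort but zero recall, which is why imbalance-aware metrics are the ones reported.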
Teachers' trust in AI-powered educational technology and a professional development program to improve it
Evidence from various domains underlines the critical role that human factors, and especially trust, play in practitioners' adoption of technology. In the case of Artificial Intelligence (AI) powered tools, the issue is even more complex due to practitioners' AI-specific misconceptions, myths and fears (e.g., mass unemployment and privacy violations). In recent years, AI has been incorporated increasingly into K-12 education. However, little research has been conducted on the trust and attitudes of K-12 teachers towards the use and adoption of AI-powered Educational Technology (AI-EdTech). This paper sheds light on teachers' trust in AI-EdTech and presents effective professional development strategies to increase teachers' trust and willingness to apply AI-EdTech in their classrooms. Our experiments with K-12 science teachers were conducted around their interactions with a specific AI-powered assessment tool (termed AI-Grader) using both synthetic and real data. The results indicate that presenting teachers with explanations of (i) how AI makes decisions, particularly compared to human experts, and (ii) how AI can complement teachers and give them additional strengths, rather than replacing them, can reduce teachers' concerns and improve their trust in AI-EdTech. The contribution of this research is threefold. First, it emphasizes the importance of increasing teachers' theoretical and practical knowledge about AI in educational settings to gain their trust in AI-EdTech in K-12 education. Second, it presents a teacher professional development program (PDP), as well as a discourse analysis of the teachers who completed it. Third, based on the results observed, it presents clear suggestions for future PDPs aiming to improve teachers' trust in AI-EdTech.
Empowering Diverse Learners through Design-Based Data Literacy Education: the Data Design Cycle Framework
Data literacy is a vital life skill that is becoming increasingly necessary in today's society. As such, public data literacy education must become a substantive component of teaching, requiring an appropriate pedagogical model. This paper presents a new definition of data literacy, as well as a conceptual framework for public data literacy education, drawing on tenets of design-based education and principles of data literacy education synthesized from the literature. The resulting framework, termed the Data Design Cycle, emphasizes collaboration, communication, and the development of problem-solving skills. The model is highly adaptable to different educational environments and learning styles, making it easy to integrate into a variety of educational settings, including traditional classroom-based learning, online learning, and experiential learning programs. Furthermore, by promoting the principles of public teaching of data literacy and a learner-centred approach, the Data Design Cycle ensures that data literacy education is accessible to heterogeneous learners from diverse educational and socioeconomic backgrounds.
Data mining tool for academic data exploitation: literature review and first architecture proposal
Using data for making decisions is not new; companies use complex computations on customer data for business intelligence or analytics. Business intelligence techniques can discern historical patterns and trends from data and can create models that predict future trends and patterns. Analytics, broadly defined, comprises applied techniques from computer science, mathematics, and statistics for extracting usable information from very large datasets.
Data itself is not new. Data has always been generated and used to inform decision-making. However, most of this was structured and organised, through regular data collections, surveys, etc. What is new, with the invention and dominance of the Internet and the expansion of digital systems across all sectors, is the amount of unstructured data we are generating. This is what we call the digital footprint: the traces that individuals leave behind as they interact with their increasingly digital world. Data analytics is the process by which data is collected and analysed in order to identify patterns, make predictions, and inform business decisions. Our capacity to perform increasingly sophisticated analytics is changing the way we make predictions and decisions, with huge potential to improve competitive intelligence. These examples may suggest that actions derived from data mining and analytics are always automatic, but that is rarely the case.
Educational Data Mining (EDM) and Learning Analytics (LA) have the potential to make visible data that have heretofore gone unseen, unnoticed, and therefore unactionable. To help further the fields and gain value from their practical applications, the recommendations are that educators and administrators:
• Develop a culture of using data for making instructional decisions;
• Involve IT departments in planning for data collection and use;
• Be smart data consumers who ask critical questions about commercial offerings and create demand for the most useful features and uses;
• Start with focused areas where data will help, show success, and then expand to new areas;
• Communicate with students and parents about where data come from and how the data are used;
• Help align state policies with technical requirements for online learning systems.
This report documents the first steps conducted within the SPEET ERASMUS+ project. It describes the conceptualization of a practical tool for the application of EDM/LA techniques to currently available academic data. The document is also intended to contextualise the use of Big Data within the academic sector, with special emphasis on the role that student profiles and student clustering play in supporting tutoring actions.
The report describes the promise of educational data mining (seeking patterns in data across many student actions), learning analytics (applying predictive models that provide actionable information), and visual data analytics (interactive displays of analyzed data) and how they might serve the future of personalized learning and the development and continuous improvement of adaptive systems. How might they operate in an adaptive learning system? What inputs and outputs are to be expected? In the next sections, these
questions are addressed by giving a system-level view of how data mining and analytics could improve teaching and learning by creating feedback loops.
Finally, the key elements of a software application intended to support this academic data analysis are proposed.
Three key elements are presented: data, algorithms, and application architecture. First, a minimum set of data must be available; the corresponding relational database structure is presented. This basic data can always be complemented with other available data that may help to make decisions and/or to explain them. Classification algorithms are then reviewed, and it is shown how they can be applied to the student clustering problem. Finally, a convenient software architecture acts as an umbrella connecting the previous two parts.
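Student clustering of this kind can be illustrated with a minimal k-means over per-student feature vectors. This sketch uses invented (avg_grade, credits) profiles and is not the report's actual algorithm choice:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means on small 2-D feature vectors (e.g. avg grade, credits)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)            # initial centroids from the data
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[i].append(p)
        # recompute centroids as cluster means (keep old centroid if empty)
        new = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new == centroids:                     # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical (avg_grade, credits) profiles: a strong and a struggling group
students = [(8.5, 60), (9.0, 58), (8.2, 55), (4.0, 12), (3.5, 10), (5.0, 18)]
centroids, clusters = kmeans(students, k=2)
print(sorted(len(c) for c in clusters))          # cluster sizes
```

In practice the features would be standardised first, since the credits axis dominates raw Euclidean distance here, and the resulting cluster labels would feed the tutoring actions the report describes.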
The document is intended to serve as a first introduction to academic data analysis: what can be obtained and what needs to be done. This is the first of a series of reports that, taken together, will provide a complete and consistent view of the inclusion of data mining as a helping hand in the tutoring action. European Union, Erasmus+ Programme, Project Reference: 2016-1-ES01-KA203-025452.
The Implementation Process of Learning Analytics
With the rapid rise in popularity of the learning analytics field over the last decade, numerous research studies have emerged, and public opinion has echoed the trend. However, the impact the field has had in practice has been quite limited, and there has been little transfer to educational institutions. One possible cause is the high complexity of the field and the lack of clear implementation processes; therefore, in this work, we propose a pragmatic implementation process for learning analytics in five stages: 1) learning environments, 2) raw data capture, 3) data tidying and feature engineering, 4) analysis and modelling, and 5) educational application. In addition, we review a series of transverse factors that affect this implementation, such as technology, the learning sciences, privacy, institutions, and educational policies. The detailed process can be helpful for researchers, educational data analysts, teachers, and educational institutions looking to start working in this area. Achieving the true potential of learning analytics will require close collaboration and conversation between all the actors involved in its development, which might eventually lead to the desired systematic and productive implementation.
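The five stages can be pictured as a simple pipeline, one function per stage. All names, data, and thresholds below are illustrative, not from the paper:

```python
# Hedged sketch of the five-stage implementation process as a pipeline.
# Every function name and threshold is a placeholder for illustration.

def capture_raw_data(environment):
    """Stage 2: pull raw event logs from a learning environment (stage 1)."""
    return environment["events"]

def tidy_and_engineer(events):
    """Stage 3: tidy raw events into per-student features (here, activity counts)."""
    features = {}
    for e in events:
        features[e["student"]] = features.get(e["student"], 0) + 1
    return features

def analyse(features, threshold=3):
    """Stage 4: a trivial model flagging students below an activity threshold."""
    return {s: ("at_risk" if n < threshold else "ok") for s, n in features.items()}

def apply_in_classroom(model_output):
    """Stage 5: turn model output into an educational action (who to contact)."""
    return [s for s, label in model_output.items() if label == "at_risk"]

# Toy learning environment with a raw clickstream
vle = {"events": [{"student": "ana"}, {"student": "ana"}, {"student": "ben"},
                  {"student": "ana"}, {"student": "ana"}]}
flags = apply_in_classroom(analyse(tidy_and_engineer(capture_raw_data(vle))))
print(flags)   # students to contact
```

Each stage boundary is also where the transverse factors bite: privacy constraints shape stage 2, the learning sciences inform the features of stage 3, and institutional policy governs what stage 5 is allowed to do.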