41 research outputs found

    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning published between 2016 and 2021. One aim of the review is to identify the video characteristics that previous work has explored. Based on our analysis, we suggest a taxonomy that organizes the video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learner activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. We also identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Transcript text, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activity is heavily explored through log files in data analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings on the impact of video characteristics on learning effectiveness, reporting on tasks and technologies used to develop tools that support learning, and summarizing trends in design guidelines for producing learning videos.

    From Student Questions to Student Profiles in a Blended Learning Environment

    The analysis of student questions can be used to improve the learning experience for both students and teachers. We investigated questions (N = 6457) asked before class by first-year medicine/pharmacy students on an online platform used by professors to prepare for Q&A sessions. Our long-term objectives are to help professors categorize those questions and to provide students with feedback on the quality of their questions. To do so, we developed a coding scheme and then used it for automatic annotation of the whole corpus. We identified student characteristics from the typology of questions they asked using the k-means algorithm over four courses. Students were clustered based on question dimensions only. Then, we characterized the clusters by attributes not used for clustering, such as student grade, attendance, and the number and popularity of questions asked. Two similar clusters always appeared (lower-than-average students with popular questions, and higher-than-average students with unpopular questions). We replicated these analyses on the same courses across different years to show the possibility of predicting student profiles online. This work shows the usefulness and validity of our coding scheme and the relevance of this approach for identifying different student profiles.
    Notes for Practice
    • Questions provide important insights into students' level of knowledge, but coding schemes to study this phenomenon are lacking.
    • After providing a bottom-up coding scheme of student questions in a blended environment, we analyzed the relationship between the questions asked and the student profiles.
    • Profiling students based on their questions over a year allows us to predict the profiles of future students, helping the teacher understand who asks what.
    • These results provide both a coding scheme that can be reused in various contexts involving questions and a methodology that can be replicated in any context where students ask many questions, in particular to help the teacher prioritize them according to their own criteria.
    • Teachers should attend to the nature of the questions their students ask, because these questions can reveal information about student profiles (attendance, activity, etc.).
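The pipeline described in this abstract (cluster students on question features only, then characterise the clusters with held-out attributes) can be sketched in a few lines. This is a minimal stdlib-only k-means; the two question dimensions, the student vectors, and the grade table below are entirely hypothetical stand-ins for the paper's actual coding-scheme features and data.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns final centroids and the clustered points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [
            tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical features: (share of deep questions, share of logistics questions).
students = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
centroids, clusters = kmeans(students, k=2)

# Characterise clusters by an attribute NOT used for clustering (toy grades).
grades = {(0.9, 0.1): 85, (0.8, 0.2): 80, (0.1, 0.9): 55, (0.2, 0.8): 60}
for cluster in clusters:
    if cluster:
        print(sum(grades[s] for s in cluster) / len(cluster))
```

Clustering on question dimensions alone, then inspecting grade, attendance, or popularity per cluster, is what lets the paper claim the clusters are meaningful rather than circular.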

    The Big Five: Addressing Recurrent Multimodal Learning Data Challenges

    The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handling multimodal data. In this paper, we describe and outline solutions for five recurrent challenges in the analysis of multimodal data: data collection, storing, annotation, processing, and exploitation. For each of these challenges, we envision possible solutions. Prototypes for some of the proposed solutions will be discussed during the Multimodal Challenge of the fourth Learning Analytics & Knowledge Hackathon, a two-day hands-on workshop in which the authors will open up the prototypes for trials, validation, and feedback.

    Multimodal Challenge: Analytics Beyond User-computer Interaction Data

    This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims to shift the focus from learning situations that can be easily traced through user-computer interaction data towards user-world interaction events, typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The “multimodal” approach consists in combining learners’ motoric actions with physiological responses and data about the learning context. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will address three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storing, 2) data annotation, and 3) data processing and exploitation. Some research questions that will be considered in this Hackathon challenge are the following: How do we process the raw sensor data streams and extract relevant features? Which data mining and machine learning techniques can be applied? How can we compare two action recordings? How do we combine sensor data with the Experience API (xAPI)? What are meaningful visualisations for these data?
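Two of the questions above, feature extraction from raw sensor streams and combining sensor data with xAPI, can be illustrated together. The sketch below computes windowed summary statistics over a stream and wraps them in a minimally shaped xAPI statement; the stream values, actor mailbox, activity IRI, and extension key are all illustrative, not from the abstract.

```python
import json
import statistics

def window_features(stream, size):
    """Mean and population stdev per non-overlapping window of the stream."""
    feats = []
    for i in range(0, len(stream) - size + 1, size):
        w = stream[i:i + size]
        feats.append({"mean": statistics.fmean(w), "stdev": statistics.pstdev(w)})
    return feats

accel = [0.1, 0.2, 0.1, 1.4, 1.5, 1.6]   # hypothetical accelerometer axis samples
features = window_features(accel, size=3)

# Minimal xAPI statement shape: actor / verb / object, with the derived
# features carried in a result extension (extension IRI is hypothetical).
statement = {
    "actor": {"mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced"},
    "object": {"id": "http://example.org/activity/practice-task"},
    "result": {"extensions": {"http://example.org/x/accel-features": features}},
}
print(json.dumps(statement, indent=2))
```

Reducing raw streams to window-level features before serialisation keeps statements small and makes recordings comparable across sessions, one plausible answer to the "compare two action recordings" question.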

    The design of the sample policy for utilizing educational data

    Data on educational and learning activities are now recorded electronically in many forms, and analysing such records to support education is a natural next step; however, no guidelines exist for handling these data appropriately. Against this background, we have been studying a policy for educational data usage in research, including learning analytics, and have proposed a template, “the sample policy for utilizing educational data,” in cooperation with the administrative board of the Academic eXchange for Information Environment and Strategy (AXIES), requesting its deliberation. In this talk, we summarise the opinions received during that deliberation and report on an update to our proposal that accounts for the creation of anonymously processed information as defined in the Act on the Protection of Personal Information.

    Analytics of student interactions: towards theory-driven, actionable insights

    The field of learning analytics arose as a response to the vast quantities of data that are increasingly generated about students, their engagement with learning resources, and their learning and future career outcomes. While the field began as a collage, adopting methods and theories from a variety of disciplines, it has now become a major area of research, and has had a substantial impact on practice, policy, and decision-making. Although the field supports the collection and analysis of a wide array of data, existing work has predominantly focused on the digital traces generated through interactions with technology, learning content, and other students. Yet for any analyses to support students and teachers, the measures derived from these data must (1) offer practical and actionable insight into learning processes and outcomes, and (2) be theoretically grounded. As the field has matured, a number of challenges related to these criteria have become apparent. For instance, concerns have been raised that the literature prioritises predictive modeling over ensuring that these models are capable of informing constructive actions. Furthermore, the methodological validity of much of this work has been challenged, as a swathe of recent research has found many of these models fail to replicate to novel contexts. The work presented in this thesis addresses both of these concerns. In doing so, our research is pervaded by three key concerns: firstly, ensuring that any measures developed are both structurally valid and generalise across contexts; secondly, providing actionable insight with regards to student engagement; and finally, providing representations of student interactions that are predictive of student outcomes, namely, grades and students’ persistence in their studies. 
This research programme is heavily indebted to the work of Vincent Tinto, who conceptually distinguishes between the interactions students have with the academic and social domains present within their educational institution. This model has been subjected to extensive empirical validation, using a range of methods and data. For instance, while some studies have relied upon survey responses, others have used social network metrics, demographic variables, and students’ time spent in class together to evaluate Tinto’s claims. This model provides a foundation for the thesis, and the work presented may be categorised into two distinct veins aligning with the academic and social aspects of integration that Tinto proposes. These two domains, Tinto argues, continually modify a student’s goals and commitments, resulting in persistence or eventual disengagement and dropout. In the former, academic domain, we present a series of novel methodologies developed for modeling student engagement with academic resources. In doing so, we assessed how an individual student’s behaviour may be modeled using hidden Markov models (HMMs) to provide representations that enable actionable insight. However, in the face of considerable individual differences and cross-course variation, the validity of such methods may be called into question. Accordingly, ensuring that any measurements of student engagement are both structurally valid, and generalise across course contexts and disciplines became a central concern. To address this, we developed our model of student engagement using sticky-HMMs, emphasised the more interpretable insight such an approach provides compared to competing models, demonstrated its cross-course generality, and assessed its structural validity through the successful prediction of student dropout. In the social domain, a critical concern was to ensure any analyses conducted were valid. 
Accordingly, we assessed how the diversity of social tie definitions may undermine the validity of subsequent modeling practices. We then modeled students’ social integration using graph embedding techniques, and found that student embeddings are predictive not only of their final grades but also of their persistence in their educational institution. In keeping with Tinto’s model, our research has focused on academic and social interactions separately, but both avenues of investigation have led to the question of student disengagement and dropout, and how this may be represented and remedied through the provision of actionable insight.
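The HMM-based engagement modelling described in this thesis can be illustrated with a toy example. The sketch below is a plain two-state HMM decoded with the Viterbi algorithm, not the thesis's sticky-HMM, and all state names, transition probabilities, and observations (weekly activity levels) are invented for illustration.

```python
import math

states = ["engaged", "disengaged"]
start = {"engaged": 0.6, "disengaged": 0.4}
# Self-transitions dominate: engagement states tend to persist week to week.
trans = {"engaged": {"engaged": 0.8, "disengaged": 0.2},
         "disengaged": {"engaged": 0.3, "disengaged": 0.7}}
# Emissions: observed weekly activity level given the hidden state.
emit = {"engaged": {"high": 0.7, "low": 0.3},
        "disengaged": {"high": 0.2, "low": 0.8}}

def viterbi(obs):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o])
            ptr[s] = best
        V.append(row)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(["high", "high", "low", "low", "low"]))
# → ['engaged', 'engaged', 'disengaged', 'disengaged', 'disengaged']
```

Decoded state trajectories like this are what make HMM representations actionable: a decoded switch into the disengaged state is an interpretable trigger for intervention, in contrast to an opaque dropout probability.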

    Digital Disruption in Teaching and Testing

    This book provides a significant contribution to the growing conversation concerning the place of big data in education. Offering a multidisciplinary approach with a diversity of perspectives from international scholars and industry experts, chapter authors engage in both research- and industry-informed discussions and analyses of the place of big data in education, particularly as it pertains to large-scale and ongoing assessment practices moving into the digital space. This volume offers an innovative, practical, and international view of current and future opportunities and challenges in education and of the place of assessment in this context.