
    Peeking into the black box: visualising learning activities

    Learning analytics has emerged as the discipline that fosters the learning process based on monitored data. As learning is a complex process that is not limited to a single environment, it benefits from a holistic approach in which events in different contexts and settings are observed and combined. This work proposes an approach to increase this coverage. Detailed information is obtained by combining logs from an LMS with events recorded on a virtual machine given to the students. A set of visualisations is then derived from the collected events, showing previously hidden aspects of an experience that can be presented to the teaching staff for their consideration. The visualisations focus on different learning outcomes, such as self-learning, use of industrial tools, time management, information retrieval, and collaboration. Depending on the information to convey, different types of visualisations are considered, ranging from graphs to starbursts and from scatter plots to heatmaps.
    Work partially funded by the projects: Adaptation of learning scenarios in the .LRN platform based on Contextualized Attention Metadata (CAM) (DE2009-0051), Learn3 ("Plan Nacional de I+D+I" TIN2008-05163/TSI), EEE ("Plan Nacional de I+D+I" TIN 2011-28308-C03-01), and Emadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid (S2009/TIC-1650).
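    A minimal sketch of the event aggregation this abstract describes, under stated assumptions: LMS and virtual-machine events are merged into one stream and bucketed into a weekday-by-hour matrix that could back a heatmap visualisation. All field names and sample values are illustrative, not the paper's actual schema.

```python
# Merge events logged by an LMS with events captured on students'
# virtual machines, then aggregate into a day/hour activity matrix.
# Record layout (source, student, action, timestamp) is a hypothetical
# stand-in for whatever the real logs contain.
from collections import Counter
from datetime import datetime

lms_events = [
    ("lms", "s01", "forum_post", "2011-03-07T10:15:00"),
    ("lms", "s02", "quiz_attempt", "2011-03-07T22:40:00"),
]
vm_events = [
    ("vm", "s01", "gcc", "2011-03-07T10:20:00"),
    ("vm", "s02", "gdb", "2011-03-08T01:05:00"),
]

def to_cell(event):
    """Map an event to a (weekday, hour) heatmap cell."""
    ts = datetime.fromisoformat(event[3])
    return (ts.strftime("%a"), ts.hour)

# One combined stream gives the 'holistic' view: both environments
# contribute to the same day/hour matrix.
cells = Counter(to_cell(e) for e in lms_events + vm_events)
for (day, hour), count in sorted(cells.items()):
    print(f"{day} {hour:02d}:00  {'#' * count}")
```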

    Applying science of learning in education: Infusing psychological science into the curriculum

    The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Science of Instruction; Mayer, 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers at all academic levels may find the recommendations made by the chapter authors of service. The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the "scientific study of how people learn" (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the "scientific study of how to help people learn" (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the "scientific study of how to determine what people know" (p. 3). Most of the research and applications presented in this book are completed within a science of learning framework: researchers first conducted studies to understand how people learn in controlled contexts (i.e., in the laboratory), and then they, or others, began to consider how these findings could be applied in educational settings. Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example of how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) whose research base is in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.

    Automatic Discovery of Complementary Learning Resources

    Proceedings of: 6th European Conference on Technology Enhanced Learning, EC-TEL 2011, Palermo, Italy, September 20-23, 2011.
    Students in a learning experience can be seen as a community working simultaneously (and in some cases collaboratively) on a set of activities. During these working sessions, students carry out numerous actions that affect their learning, but actions happening outside a class or the Learning Management System cannot be easily observed. This paper presents a technique to widen the observability of these actions. The set of documents browsed by the students in a course was recorded over a period of eight weeks. These documents are then processed, and those with the highest similarity to the course notes are selected and recommended back to all the students. The main problem is that this user community visits thousands of documents, and only a small percentage of them are suitable for recommendation. Using a combination of lexical analysis and information retrieval techniques, a fully automatic procedure to analyze these documents, classify them and select the most relevant ones is presented. The approach has been validated with an empirical study in an undergraduate engineering course with more than one hundred students. The recommended resources were rated as "relevant to the course" by the seven instructors with teaching duties in the course.
    Work partially funded by the Learn3 project ("Plan Nacional de I+D+I" TIN2008-05163/TSI), the Acción Integrada Ref. DE2009-0051, the "Emadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid" project (S2009/TIC-1650) and the TELMA project (Plan Avanza TSI-020110-2009-85).
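    The abstract does not spell out the similarity machinery, so the following is only a sketch of the ranking step under common assumptions: TF-IDF vectors and cosine similarity between each browsed document and the course notes, with a threshold standing in for the paper's classification stage. The toy corpus and threshold value are invented for illustration.

```python
# Rank browsed documents by cosine similarity to the course notes and
# keep the most similar ones as recommendation candidates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_notes = "operating systems processes scheduling memory management"
browsed_docs = [
    "tutorial on process scheduling in operating systems",
    "celebrity gossip and entertainment news",
    "virtual memory and page replacement explained",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([course_notes] + browsed_docs)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Keep only documents above a similarity threshold (value is arbitrary;
# the paper's pipeline adds lexical filtering not shown here).
for doc, score in sorted(zip(browsed_docs, scores), key=lambda p: -p[1]):
    if score > 0.1:
        print(f"{score:.2f}  {doc}")
```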

    Lucene4IR: Developing information retrieval evaluation resources using Lucene

    The workshop and hackathon on developing Information Retrieval Evaluation Resources using Lucene (L4IR) was held on the 8th and 9th of September, 2016 at the University of Strathclyde in Glasgow, UK, and funded by the ESF Elias Network. The event featured three main elements: (i) a series of keynote and invited talks on industry, teaching and evaluation; (ii) planning, coding and hacking, in which a number of groups created modules and infrastructure for using Lucene to undertake TREC-based evaluations; and (iii) a number of breakout groups discussing challenges, opportunities and problems in bridging the divide between academia and industry, and how Lucene can be used for teaching and learning Information Retrieval (IR). The event brought together a mix of academics, experts and students wanting to learn, share and create evaluation resources for the community. The hacking was intense and the discussions lively, creating the basis of many useful tools but also raising numerous issues. It was clear that, by adopting and contributing to the most widely used and supported open-source IR toolkit, there were many benefits for academics, students, researchers, developers and practitioners: a basis for stronger evaluation practices, increased reproducibility, more efficient knowledge transfer, greater collaboration between academia and industry, and shared teaching and training resources.
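    Lucene itself is a Java library, so rather than guess at its API, the sketch below illustrates only the TREC evaluation conventions the hackathon groups targeted: the standard run-file and qrels formats, and a precision-at-10 computation over them. The inlined lines are toy data, not workshop output.

```python
# Parse a TREC run file and qrels, then compute P@10 per query.
from collections import defaultdict

# TREC run format: qid, iteration, docno, rank, score, run tag
run_lines = [
    "101 Q0 doc7 1 14.2 demo",
    "101 Q0 doc3 2 12.9 demo",
]
# TREC qrels format: qid, iteration, docno, relevance
qrel_lines = [
    "101 0 doc7 1",
    "101 0 doc3 0",
]

qrels = defaultdict(dict)
for line in qrel_lines:
    qid, _, docno, rel = line.split()
    qrels[qid][docno] = int(rel)

ranked = defaultdict(list)
for line in run_lines:
    qid, _, docno, rank, score, tag = line.split()
    ranked[qid].append(docno)

for qid, docs in ranked.items():
    hits = sum(qrels[qid].get(d, 0) > 0 for d in docs[:10])
    print(f"P@10 for query {qid}: {hits / 10:.2f}")
```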

    Combining Terrier with Apache Spark to Create Agile Experimental Information Retrieval Pipelines

    Experimentation using IR systems has traditionally been a procedural and laborious process. Queries must be run on an index, with any parameters of the retrieval models suitably tuned. With the advent of learning-to-rank, such experimental processes (including the appropriate folding of queries to achieve cross-fold validation) have resulted in complicated experimental designs and hence scripting. At the same time, machine learning platforms such as Scikit-Learn and Apache Spark have pioneered the notion of an experimental pipeline, which naturally allows a supervised classification experiment to be expressed as a series of stages that can be learned or transformed. In this demonstration, we detail Terrier-Spark, a recent adaptation to the Terrier Information Retrieval platform which permits it to be used within the experimental pipelines of Spark. We argue that this (1) provides an agile experimental platform for information retrieval, comparable to that enjoyed by other branches of data science; (2) aids research reproducibility in information retrieval by facilitating easily-distributable notebooks containing conducted experiments; and (3) facilitates the teaching of information retrieval experiments in educational environments.
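    Terrier-Spark's own interface is not shown in the abstract, so the sketch below only demonstrates the Spark ML pipeline abstraction the paper builds on, using stock pyspark.ml stages on toy text data; the stages and data are placeholders, not the Terrier-Spark API.

```python
# A supervised experiment expressed as a pipeline of stages, mirroring
# how retrieval features could feed a learning-to-rank stage.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()
train = spark.createDataFrame(
    [("information retrieval evaluation", 1.0),
     ("unrelated text about cooking", 0.0)],
    ["text", "label"],
)

# Each stage transforms the data frame; fitting the pipeline learns the
# final estimator over the transformed features.
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="features"),
    LogisticRegression(maxIter=5),
])
model = pipeline.fit(train)
model.transform(train).select("text", "prediction").show()
```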

    On Monitoring Language Change with the Support of Corpus Processing

    One of the fundamental characteristics of language is that it can change over time. One method to monitor this change is to observe its corpora: structured documentation of the language. Recent developments in technology, especially in the field of Natural Language Processing, allow robust linguistic processing, which supports the description of diverse historical changes in the corpora. The involvement of a human linguist remains inevitable, as the linguist determines the gold standard, but computer assistance provides considerable support by bringing a computational approach to exploring the corpora, especially historical corpora. This paper proposes a model for corpus development in which corpora are annotated to support further computational operations such as lexicogrammatical pattern matching and automatic retrieval and extraction. The corpus processing operations are performed by local-grammar-based corpus processing software on a contemporary Indonesian corpus. This paper concludes that data collection and data processing in a corpus are equally crucial for monitoring language change, and neither can be set aside.
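    As a rough illustration of the pattern-matching operations the proposed model envisages, the sketch below runs a local-grammar-like tag pattern over a tiny hand-annotated token list. The tagset and the Indonesian example are assumptions for illustration, not drawn from the paper's corpus.

```python
# Match a part-of-speech tag pattern over an annotated token sequence.
annotated = [
    ("buku", "NOUN"), ("baru", "ADJ"), ("itu", "DET"),
    ("sangat", "ADV"), ("menarik", "ADJ"),
]

def match_pattern(tokens, pattern):
    """Yield word spans whose tag sequence equals `pattern`."""
    n = len(pattern)
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n]
        if [tag for _, tag in window] == list(pattern):
            yield [word for word, _ in window]

# Retrieve NOUN + ADJ sequences (e.g. 'buku baru', 'new book'), a
# pattern whose frequency could be tracked across corpus eras.
for span in match_pattern(annotated, ("NOUN", "ADJ")):
    print(" ".join(span))
```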

    Implementation of best practices in online learning: A review and future directions

    Best practices for helping students learn and retain information have been well established by research in cognitive science (Brown, Roediger, & McDaniel, 2014; Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Specifically, repeated testing has been shown in numerous instances to enhance recall. In particular, we know that students retain information best when it has been recalled rather than re-studied (Butler, 2010) and rehearsed with delayed (spaced) rather than massed presentation (Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006), and when the items to be studied and later tested are similarly framed (McDaniel, Wildman, & Anderson, 2012). Although these effects were initially demonstrated in laboratory settings, a number of researchers have shown that they generalize to classroom environments (e.g., Vlach & Sandhofer, 2012), and some have demonstrated their utility in fully online courses as well (McDaniel et al., 2012). However, in multiple studies we have found that implementing some of these best practices using publisher-provided textbook technology supplements (TTS) does not meaningfully improve recall (Bell, Simone, & Whitfield, 2015, 2016), at least when these supplements are used "out of the box" in face-to-face courses. We conclude that when TTS are used in an online environment there is a mismatch between student and faculty goals: students are motivated by the short-term goal of getting a high score on a quiz even if the behaviors used to achieve that score do not enhance long-term recall or generalization of the learned material, which are typically the goals of faculty. We argue that TTS can be reconfigured to reinforce meaningful engagement with the material for all students, regardless of learning history or other individual differences (e.g., Gluckman, Vlach, & Sandhofer, 2014). Indeed, if we are to continue requiring students to purchase these TTS, we should determine whether their use benefits all types of students. A related empirical question is whether recall of factual information in an online environment is correlated with the later ability to use that information in a novel situation (generalizability). Whereas some researchers have found that factual information learned via repeated testing does help students draw inferences about the implications of those facts in later testing (Butler, 2010), others have failed to find a correlation between testing effects and generalizability of the learned material (Gluckman et al., 2014). The literature on this question is still somewhat small, however (see Carpenter, 2012, for a brief review), and this is particularly true of investigations involving online learning. In this paper we review the existing literature on the spacing benefit and online learning. We end with a call for new research specific to the online environment that manipulates delayed repeated testing and examines whether successful retention of factual information promotes long-term application of that material.
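    To make the proposed manipulation concrete, here is a minimal sketch of the scheduling difference between massed and spaced repeated testing; the interval values are arbitrary illustrations, not recommendations from the literature.

```python
# Generate review dates for the same number of quiz exposures under a
# massed schedule (all in one session) versus a spaced schedule
# (exposures separated by growing delays).
from datetime import date, timedelta

def massed_schedule(start, repetitions):
    """All repetitions on the same day."""
    return [start] * repetitions

def spaced_schedule(start, intervals):
    """Each repetition delayed by the next interval (in days)."""
    days, current = [], start
    for gap in intervals:
        current = current + timedelta(days=gap)
        days.append(current)
    return days

start = date(2024, 9, 2)
print("massed:", massed_schedule(start, 3))
print("spaced:", spaced_schedule(start, [1, 3, 7]))
```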