An Online Tutor for Astronomy: The GEAS Self-Review Library
We introduce an interactive online resource for use by students and college
instructors in introductory astronomy courses. The General Education Astronomy
Source (GEAS) online tutor guides students as they develop mastery of core
astronomical concepts and mathematical applications of general astronomy
material. It contains over 12,000 questions, with linked hints and solutions.
Students who master the material quickly can advance through the topics, while
under-prepared or hesitant students can focus on questions on a certain topic
for as long as needed, with minimal repetition. Students receive individual
accounts for study, and course instructors are provided with overview tracking
information, by time and by topic, for entire cohorts of students. Diagnostic
tools support self-evaluation and close collaboration between instructor and
student, even for distance learners. An initial usage study shows clear gains
in performance with increased study time, and indicates that distance
learners using these materials perform as well as or better than a comparison
cohort of on-campus astronomy students. We are actively seeking new
collaborators to use this resource in astronomy courses and other educational
venues.

Comment: 15 pages, 9 figures; Vogt, N. P., and A. S. Muise. 2015. An online
tutor for general astronomy: The GEAS self-review library. Cogent Education,
2 (1
Modelling human teaching tactics and strategies for tutoring systems
One of the promises of ITSs and ILEs is that they will teach and assist learning in an intelligent manner. Historically, this has tended to mean concentrating on the interface, on the representation of the domain, and on the representation of the student’s knowledge. So systems have attempted to provide students with reifications both of what is to be learned and of the learning process, as well as optimally sequencing and adjusting activities, problems, and feedback to best help them learn that domain. We now have embodied (and disembodied) teaching agents and computer-based peers, and the field demonstrates a much greater interest in metacognition and in collaborative activities and tools to support that collaboration. Nevertheless, the issue of the teaching competence of ITSs and ILEs is still important, as is the more specific question of whether systems can and should mimic human teachers. Indeed, increasing interest in embodied agents has thrown the spotlight back on how such agents should behave with respect to learners. In the mid-1980s, Ohlsson and others critiqued ITSs and ILEs for the limited range and adaptability of their teaching actions compared with the wealth of tactics and strategies employed by human expert teachers. So are we in any better position in modelling teaching than we were in the 80s? Are these criticisms still as valid today as they were then? This paper reviews progress in understanding certain aspects of human expert teaching and in developing tutoring systems that implement those human teaching strategies and tactics. It concentrates particularly on how systems have dealt with student answers and with motivational issues, referring particularly to work carried out at Sussex: for example, on responding effectively to the student’s motivational state, on contingent and Vygotskian-inspired teaching strategies, and on the plausibility problem. The latter is concerned with whether tactics that are applied effectively by human teachers can be as effective when embodied in machine teachers.
Adaptive formative assessment system based on computerized adaptive testing and the learning memory cycle for personalized learning
Computerized adaptive testing (CAT) can effectively facilitate student assessment by dynamically selecting questions on the basis of learner knowledge and item difficulty. However, most CAT models are designed for one-time evaluation rather than for improving learning through formative assessment. Since students cannot remember everything, encouraging them to repeatedly evaluate their knowledge state and identify their weaknesses is critical when developing an adaptive formative assessment system in real educational contexts. This study addresses that need by proposing an adaptive formative assessment system based on CAT and the learning memory cycle to enable the repeated evaluation of students' knowledge. The CAT model measures student knowledge and item difficulty, and the learning memory cycle component of the system accounts for students' retention of information learned from each item. The proposed system was compared with an adaptive assessment system based on CAT only and with a traditional nonadaptive assessment system in a 7-week experiment conducted among students in a university programming course. The experimental results indicated that the students who used the proposed assessment system outperformed the students who used the other two systems in terms of learning performance and engagement in practice tests and reading materials. The present study provides insights for researchers who wish to develop formative assessment systems that can adaptively generate practice tests.
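The item-selection core of a CAT model can be sketched in a few lines. This is a minimal illustration only, assuming a Rasch (1PL) model with maximum-information selection; the paper's actual model, item bank, and memory-cycle component are not specified in the abstract, and all names and difficulty values below are hypothetical:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability estimate theta: p(1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def select_next_item(theta, item_bank, asked):
    """Pick the not-yet-asked item that is most informative at theta."""
    candidates = [item for item in item_bank if item not in asked]
    return max(candidates, key=lambda item: item_information(theta, item_bank[item]))

# Hypothetical item bank mapping item id -> difficulty b.
item_bank = {"q1": -1.0, "q2": 0.0, "q3": 0.5, "q4": 2.0}
next_item = select_next_item(0.4, item_bank, asked={"q2"})  # favors b near theta
```

Because Fisher information peaks where item difficulty matches the current ability estimate, this selector naturally gives weaker students easier items and stronger students harder ones; a formative variant along the paper's lines would additionally reschedule items according to the learning memory cycle.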
Overcoming foreign language anxiety in an emotionally intelligent tutoring system
Learning a foreign language entails cognitive and emotional obstacles. It involves complicated mental processes that affect learning and emotions. Positive emotions such as motivation, encouragement, and satisfaction increase learning achievement, while negative emotions like anxiety, frustration, and confusion may reduce performance. Foreign Language Anxiety (FLA) is a specific type of anxiety accompanying learning a foreign language. It is considered a main impediment that hinders learning, reduces achievements, and diminishes interest in learning.
Detecting FLA is the first step toward reducing and eventually overcoming it. Researchers have previously detected FLA using physical measurements and self-reports. Physical measures are direct and difficult for the learner to consciously regulate, but they are uncomfortable and require the learner to be in the lab. Self-reports are scalable because they are easy to administer in the lab and online; however, they interrupt the learning flow, and people sometimes respond inaccurately. Sensor-free human behavioral metrics offer a scalable and practical measurement because they are feasible online or in class with minimal adjustments.
To overcome FLA, researchers have studied the use of robots, games, or intelligent tutoring systems (ITS). Within these technologies, they applied soothing music, difficulty reduction, or storytelling. These methods lessened FLA but had limitations such as distracting the learner, not improving performance, and producing cognitive overload. Using an animated agent that provides motivational supportive feedback could reduce FLA and increase learning.
It is necessary to measure FLA effectively with minimal interruption and then successfully reduce it. In the context of an e-learning system, I investigated ways to detect FLA using sensor-free human behavioral metrics. This scalable and practical method allows us to recognize FLA without being obtrusive. To reduce FLA, I studied applying emotionally adaptive feedback, in which an animated agent offers motivational supportive feedback.
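As a concrete, purely hypothetical illustration of the sensor-free approach, behavioral metrics logged by an e-learning system could be combined into a logistic FLA score that switches the agent to motivational supportive feedback. The feature names, weights, and threshold below are invented for the sketch and are not from the study:

```python
import math

# Hypothetical behavioral features and weights (illustrative, not from the study).
WEIGHTS = {"response_latency_z": 0.9, "deletions_per_word": 1.2, "hint_requests": 0.6}
BIAS = -1.5

def anxiety_score(features):
    """Logistic FLA score in [0, 1] from sensor-free behavioral metrics."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def feedback_for(score, threshold=0.5):
    """Have the animated agent switch styles when FLA seems likely."""
    return "motivational-supportive" if score >= threshold else "neutral"
```

In practice such weights would be fitted against a validated FLA instrument rather than set by hand; the point of the sketch is only that detection can run on logged interaction data, without sensors or interrupting self-reports.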
Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness
This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
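The notice-assess-guide cycle can be sketched as a small self-monitoring wrapper. This is an illustrative toy, not one of the implemented systems from the essay: the only alterable decision-making component here is a hypothetical learning-rate parameter, and the anomaly threshold is invented:

```python
class MonitoredAgent:
    """Toy agent with a metacognitive loop: it monitors its own prediction
    accuracy and, when anomalies accumulate, alters its decision-making
    (here, by bumping a learning rate). All names are hypothetical."""

    def __init__(self):
        self.anomalies = 0
        self.learning_rate = 0.1

    def expect(self, context):
        # Trivial self-model: the agent expects the outcome to match context.
        return context

    def note_assess_guide(self, context, outcome):
        expected = self.expect(context)
        if outcome != expected:          # Note: something is amiss.
            self.anomalies += 1          # Assess: record/diagnose the anomaly.
            if self.anomalies >= 3:      # Guide: alter own decision-making.
                self.learning_rate *= 2
                self.anomalies = 0
```

The essential point the sketch preserves is that the loop reasons about the agent's own components (its expectation model and learning rate) rather than about the external task, which is what distinguishes metacognitive monitoring from ordinary error handling.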
Applying science of learning in education: Infusing psychological science into the curriculum
The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Science of Instruction, Mayer, 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers of all academic levels may find the recommendations made by chapter authors of service. The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the “scientific study of how people learn” (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the “scientific study of how to help people learn” (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the “scientific study of how to determine what people know” (p. 3). Most of the research and applications presented in this book are completed within a science of learning framework. Researchers first conducted research to understand how people learn in certain controlled contexts (i.e., in the laboratory), and then they, or others, began to consider how these understandings could be applied in educational settings.
Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example of how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) that have their research base in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.
How to Design Learning Applications that Support Learners in their Moment of Need – Didactic Requirements of Micro Learning
The COVID-19 pandemic is showing the limits of our traditional education systems, which mainly build on classroom lectures with face-to-face interaction between teachers or trainers and learners. Now more than ever, there is a growing need for digital learning formats that make it possible to maintain teaching in universities, schools, and enterprises despite the spatial distance from learners. Under these new conditions of learning, short and small learning units are a promising approach when it comes to demand-oriented learning solutions. However, the question of how to design didactically appropriate micro content has not yet been answered by research. To close this research gap, we conducted a qualitative interview study with professionals in the fields of instructional design and technology-enhanced learning design. From this information, we derived 20 requirements for designing effective micro content.
I’ve (Urn)ed This: An Application and Criterion-based Evaluation of the Urnings Algorithm
There is increased interest in personalized learning and in making e-learning environments more adaptable. Some e-learning systems may use an Item Response Theory (IRT)-based assessment system. An important distinction between assessment and learning contexts is that learner proficiency is expected to remain constant across an assessment, while it is expected to change over time in a learning context. Constant learner proficiency during an assessment enables conventional approaches to estimating person and item parameters using IRT. These IRT-based systems could be abandoned for alternative approaches to modeling learners and system learning content, but assessments may provide more functions than adapting learning material to students. This raises the question: how can e-learning systems with IRT-based assessment components adapt their learning content more dynamically? Is there a solution that leverages IRT for adapting the learning content of the system? A promising solution is the Urnings algorithm. Like other candidate algorithms, it is computationally light, but it also has mechanisms for preventing variance inflation, provides a measure of uncertainty around its estimates, and is suitable for e-learning contexts. It has been studied both through simulations and through applications to e-learning systems. Results are promising; however, there has been no application of the Urnings algorithm to an e-learning context with conventionally estimated person parameters against which the algorithm's estimates can be compared. This study addresses this gap by applying the Urnings algorithm to a K–8 reading and mathematics learning platform, for which we have person parameter estimates across academic years from an in-system diagnostic assessment. Results from this study will help industry researchers understand the feasibility of the Urnings algorithm for large e-learning systems with IRT-based assessment components.
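A simplified urn-based rating update conveys the spirit of the approach. This sketch deliberately omits parts of the published Urnings algorithm (notably its Metropolis-Hastings acceptance step and exact tie handling), and the counts below are hypothetical: each player and item holds an urn of n balls, the green-ball proportion is the proficiency or difficulty estimate, and a response moves at most one ball, so estimates stay bounded and cannot inflate:

```python
import random

def urn_update(u_player, u_item, n, correct, rng=random):
    """One simplified urn-style update (a sketch in the spirit of the
    Urnings algorithm, not the published procedure).
    u_player, u_item: green-ball counts in urns of fixed size n;
    the proficiency/difficulty estimate is u/n."""
    # Simulate an outcome from the current urn proportions
    # (player green and item red => simulated correct, and vice versa).
    p_win = (u_player / n) * (1 - u_item / n)
    p_lose = (1 - u_player / n) * (u_item / n)
    total = p_win + p_lose
    simulated = 1 if total > 0 and rng.random() < p_win / total else 0
    observed = 1 if correct else 0
    if observed != simulated:
        # Move one ball between urns; bounds 0..n cap the step size.
        if observed == 1 and u_player < n and u_item > 0:
            u_player += 1
            u_item -= 1
        elif observed == 0 and u_player > 0 and u_item < n:
            u_player -= 1
            u_item += 1
    return u_player, u_item
```

The urn size n plays the role of the uncertainty measure mentioned in the abstract: a small urn tracks a changing learner quickly but noisily, while a large urn yields stable estimates, which is why the bounded-count design suits learning contexts where proficiency drifts over time.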