Applying science of learning in education: Infusing psychological science into the curriculum
The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Science of Instruction, Mayer, 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers at all academic levels may find the recommendations made by chapter authors of service.

The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the “scientific study of how people learn” (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the “scientific study of how to help people learn” (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the “scientific study of how to determine what people know” (p. 3). Most of the research and applications presented in this book were completed within a science of learning framework: researchers first conducted research to understand how people learn in certain controlled contexts (i.e., in the laboratory), and then they, or others, began to consider how these understandings could be applied in educational settings.
Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example of how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) whose research base lies in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.
The student-produced electronic portfolio in craft education
The authors studied primary school students’ experiences of using an electronic portfolio in their craft education over four years. A stimulated recall interview was applied to collect user experiences, and qualitative content analysis was used to analyse the collected data. The results indicate that the electronic portfolio was experienced as a multipurpose tool to support learning: it makes the learning process visible and thereby helps students focus on, and improve the quality of, their learning. © ISLS.
Improving School Improvement
PREFACE

In opening this volume, you might be thinking: Is another book on school improvement really needed? Clearly our answer is yes. Our analyses of prevailing school improvement legislation, planning, and literature indicate fundamental deficiencies, especially with respect to enhancing equity of opportunity and closing the achievement gap. Here is what our work uniquely brings to policy and planning tables:

(1) An expanded framework for school improvement – We highlight that moving from a two- to a three-component policy and practice framework is essential for closing the opportunity and achievement gaps. (That is, expanding from focusing primarily on instruction and management/governance concerns by establishing a third primary component to improve how schools address barriers to learning and teaching.)

(2) An emphasis on integrating a deep understanding of motivation – We underscore that concerns about engagement, management of behavior, school climate, equity of opportunity, and student outcomes require an up-to-date grasp of motivation, and especially intrinsic motivation.

(3) Clarification of the nature and scope of personalized teaching – We define personalization as the process of matching learner motivation and capabilities, and stress that it is the learner's perception that determines whether the match is a good one.

(4) A reframing of remediation and special education – We formulate these processes as personalized special assistance that is applied in and out of classrooms and practiced in a sequential and hierarchical manner.

(5) A prototype for transforming student and learning supports – We provide a framework for a unified, comprehensive, and equitable system designed to address barriers to learning and teaching and re-engage disconnected students and families.

(6) A reworking of the leadership structure for whole school improvement – We outline how the operational infrastructure can and must be realigned in keeping with a three-component school improvement framework.

(7) A systemic approach to enhancing school-community collaboration – We delineate a leadership role for schools in reaching out to communities in order to work on shared concerns through a formal collaborative operational infrastructure that enables weaving together resources to advance the work.

(8) An expanded framework for school accountability – We reframe school accountability to ensure a balanced approach that accounts for a shift to a three-component school improvement policy.

(9) Guidance for substantive, scalable, and sustainable systemic changes – We frame mechanisms and discuss lessons learned related to facilitating fundamental systemic changes and replicating and sustaining them across a district.

The frameworks and practices presented are based on our many years of work in schools and on efforts to enhance school-community collaboration. We incorporate insights from various theories, the large body of relevant research, and lessons learned and shared by many school leaders and staff who strive every day to do their best for children.

Our emphasis on new directions is in no way meant to demean current efforts. We know that the demands placed on those working in schools go well beyond what anyone should be asked to do. Given the current working conditions in many schools, our intent is to help make the hard work generate better results. To this end, we highlight new directions and systemic pathways for improving school outcomes.

Some of what we propose is difficult to accomplish. Hopefully, the fact that there are schools, districts, and state agencies already trailblazing the way will engender a sense of hope and encouragement in those committed to innovation.

It will be obvious that our work owes much to many. We are especially grateful to those who are pioneering major systemic changes across the country. These leaders and so many in the field have generously offered their insights and wisdom. And, of course, we are indebted to hundreds of scholars whose research and writing is a shared treasure. As always, we take this opportunity to thank Perry Nelson and the host of graduate and undergraduate students at UCLA who contribute so much to our work each day, and the many young people and their families who continue to teach us all.

Respectfully submitted for your consideration,
Howard Adelman & Linda Taylor
Classroom Assessment and Educational Measurement
Classroom Assessment and Educational Measurement explores the ways in which the theory and practice of both educational measurement and the assessment of student learning in classroom settings mutually inform one another. Chapters by assessment and measurement experts consider the nature of classroom assessment information, from student achievement to affective and socio-emotional attributes; how teachers interpret and work with assessment results; and emerging issues in assessment such as digital technologies and diversity/inclusion. This book uniquely considers the limitations of applying large-scale educational measurement theory to classroom assessment and the adaptations necessary to make this transfer useful. Researchers, graduate students, industry professionals, and policymakers will come away with a clear understanding of how the classroom assessment context is essential to broadening contemporary educational measurement perspectives.
Layered evaluation of interactive adaptive systems: framework and formative methods
A generic architecture for interactive intelligent tutoring systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 07/06/2001. This research is focused on developing a generic intelligent architecture for an interactive tutoring system. A review of the literature in the areas of instructional theories, cognitive and social views of learning, intelligent tutoring systems development methodologies, and knowledge representation methods was conducted. As a result, a generic ITS development architecture (GeNisa) has been proposed, which combines the features of knowledge base systems (KBS) with object-oriented methodology. The GeNisa architecture consists of the following components: a tutorial events communication module, which encapsulates the interactive processes and other independent computations between different components; a software design toolkit; and autonomous knowledge acquisition from a probabilistic knowledge base. A graphical application development environment includes tools to support application development and learning environments, which use case scenarios as a basis for instruction. The generic architecture is designed to support client-side execution in a Web browser environment, and testing showed that it can disseminate applications over the World Wide Web. Such an architecture can be adapted to different teaching styles and domains, and automatically reusing instructional materials can reduce the effort of the courseware developer (hence cost and time) in authoring new materials. GeNisa was implemented using Java scripts and subsequently evaluated at various commercial and academic organisations. Parameters chosen for the evaluation include quality of courseware, relevancy of case scenarios, portability to other platforms, ease of use, content, user-friendliness, screen display, clarity, topic interest, and overall satisfaction with GeNisa.
In general, the evaluation focused on the novel characteristics and performance of the GeNisa architecture in comparison with other ITSs, and the results obtained are discussed and analysed.
On the basis of the experience gained during the literature research and GeNisa development and evaluation, a generic methodology for ITS development is proposed, as well as requirements for the further development of ITS tools. Finally, conclusions are drawn and areas for further research are identified.
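The abstract above describes an event-driven tutoring architecture built from decoupled components: a communication module routing tutorial events, case-scenario-based instruction, and knowledge acquisition from a probabilistic knowledge base. A minimal sketch of how such pieces could fit together is shown below; all class and method names are hypothetical illustrations, not taken from GeNisa itself, and the mastery update is a deliberately crude stand-in for a probabilistic knowledge base.

```python
from dataclasses import dataclass

@dataclass
class TutorialEvent:
    """A message exchanged between components (e.g., a learner's answer)."""
    source: str
    payload: dict

class EventBus:
    """Stands in for the 'tutorial events communication module':
    it decouples components by routing events between them."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, source, handler):
        self._handlers.setdefault(source, []).append(handler)

    def publish(self, event):
        for handler in self._handlers.get(event.source, []):
            handler(event)

class CaseScenarioTutor:
    """Illustrates case-scenario-based instruction: selects the next case
    from a simple running belief about learner mastery."""
    def __init__(self, bus):
        self.mastery = 0.5  # prior belief that the learner has mastered the skill
        bus.subscribe("learner", self.on_answer)

    def on_answer(self, event):
        # Exponential update standing in for acquisition
        # from a probabilistic knowledge base.
        correct = 1.0 if event.payload["correct"] else 0.0
        self.mastery = 0.8 * self.mastery + 0.2 * correct

    def next_case(self):
        return "advanced case" if self.mastery > 0.7 else "remedial case"

bus = EventBus()
tutor = CaseScenarioTutor(bus)
for ok in [True, True, True, True, True]:
    bus.publish(TutorialEvent("learner", {"correct": ok}))
print(tutor.next_case())  # after five correct answers, mastery exceeds 0.7
```

The event bus is the key design choice the abstract hints at: because the tutor only ever sees `TutorialEvent` objects, the interface components and the instructional logic can evolve independently.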
Investigating the Impact of Online Human Collaboration in Explanation of AI Systems
An important subdomain in research on Human-Artificial Intelligence interaction is Explainable AI (XAI). XAI aims to improve human understanding of, and trust in, machine intelligence and automation by providing users with visualizations and other information that explain the AI’s decisions, actions, or plans, and thereby to establish justified trust and reliance. XAI systems have primarily used algorithmic approaches designed to generate explanations automatically that help users understand the information underlying decisions, but an alternative that may augment these systems is to take advantage of the fact that user understanding of AI systems often develops through self-explanation (Mueller et al., 2021). Users attempt to piece together different sources of information and develop a clearer understanding, but these self-explanations are often lost if not shared with others. This thesis research demonstrated how such self-explanation can be shared collaboratively via a system called collaborative XAI (CXAI), which is akin to a social Q&A platform (Oh, 2018) such as StackExchange. A web-based system was built and evaluated formatively and via user studies. The formative evaluation shows how explanations in an XAI system, especially collaborative explanations, can be assessed against ‘goodness criteria’ (Mueller et al., 2019). This thesis also investigated how users performed with the explanations from this type of XAI system. Lastly, the research investigated whether users of the CXAI system are satisfied with the human-generated explanations in the system and whether they can trust this type of explanation.
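The abstract describes CXAI as a StackExchange-style platform where users share self-explanations of an AI's decisions. A minimal sketch of such a data model follows; the class names and the use of community votes as a proxy for the 'goodness criteria' assessment are illustrative assumptions, not CXAI's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Explanation:
    """One user-contributed self-explanation of an AI decision."""
    author: str
    text: str
    votes: int = 0

@dataclass
class ExplanationThread:
    """A question about an AI decision plus collaborative explanations,
    mirroring a social Q&A thread."""
    question: str
    explanations: List[Explanation] = field(default_factory=list)

    def add(self, author: str, text: str) -> None:
        self.explanations.append(Explanation(author, text))

    def best(self) -> Optional[Explanation]:
        # Community voting stands in here for assessing explanations
        # against goodness criteria.
        return max(self.explanations, key=lambda e: e.votes, default=None)

thread = ExplanationThread("Why did the classifier reject this loan application?")
thread.add("alice", "The model weights debt-to-income ratio heavily.")
thread.add("bob", "Similar rejected cases all had short credit histories.")
thread.explanations[1].votes = 3
print(thread.best().author)  # the most-upvoted explanation surfaces first
```

Ranking by votes is only one possible surfacing policy; a real deployment would likely combine it with the explicit goodness criteria the abstract cites.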