7,268 research outputs found

    Applying science of learning in education: Infusing psychological science into the curriculum

    The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Mayer, 2011). Although much, but not all, of what is presented in this book focuses on learning in college and university settings, teachers at all academic levels may find the chapter authors’ recommendations useful. The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn; more formally, Mayer (2011) defined the science of learning as the “scientific study of how people learn” (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the “scientific study of how to help people learn” (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the “scientific study of how to determine what people know” (p. 3). Most of the research and applications presented in this book were completed within a science of learning framework: researchers first conducted studies to understand how people learn in controlled contexts (i.e., in the laboratory), and then they, or others, considered how these findings could be applied in educational settings. Work on the cognitive load theory of learning, discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example of how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) whose research base is in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.

    The student-produced electronic portfolio in craft education

    The authors studied primary school students’ experiences of using an electronic portfolio in their craft education over four years. Stimulated recall interviews were used to collect user experiences, and qualitative content analysis was used to analyse the collected data. The results indicate that the electronic portfolio was experienced as a multipurpose tool for supporting learning: it makes the learning process visible and thereby helps students focus on, and improve the quality of, their learning. © ISLS. Peer reviewed.

    Classroom Assessment and Educational Measurement

    Classroom Assessment and Educational Measurement explores the ways in which the theory and practice of both educational measurement and the assessment of student learning in classroom settings mutually inform one another. Chapters by assessment and measurement experts consider the nature of classroom assessment information, from student achievement to affective and socio-emotional attributes; how teachers interpret and work with assessment results; and emerging issues in assessment such as digital technologies and diversity/inclusion. This book uniquely considers the limitations of applying large-scale educational measurement theory to classroom assessment and the adaptations necessary to make this transfer useful. Researchers, graduate students, industry professionals, and policymakers will come away with an understanding of how the classroom assessment context is essential to broadening contemporary educational measurement perspectives.

    Layered evaluation of interactive adaptive systems : framework and formative methods

    Peer reviewed. Postprint.

    A generic architecture for interactive intelligent tutoring systems

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 07/06/2001. This research is focused on developing a generic intelligent architecture for an interactive tutoring system. A review of the literature was conducted in the areas of instructional theories, cognitive and social views of learning, intelligent tutoring system (ITS) development methodologies, and knowledge representation methods. As a result, a generic ITS development architecture (GeNisa) has been proposed, which combines the features of knowledge-based systems (KBS) with object-oriented methodology. The GeNisa architecture consists of the following components: a tutorial events communication module, which encapsulates the interactive processes and other independent computations between different components; a software design toolkit; and an autonomous knowledge acquisition module based on a probabilistic knowledge base. A graphical application development environment provides tools to support application development, including learning environments that use case scenarios as a basis for instruction. The generic architecture is designed to support client-side execution in a Web browser environment, and testing showed that it can disseminate applications over the World Wide Web. Such an architecture can be adapted to different teaching styles and domains, and automatically reusing instructional materials can reduce the courseware developer's effort (and hence cost and time) in authoring new materials. GeNisa was implemented using JavaScript and subsequently evaluated at various commercial and academic organisations. Parameters chosen for the evaluation include quality of courseware, relevancy of case scenarios, portability to other platforms, ease of use, content, user-friendliness, screen display, clarity, topic interest, and overall satisfaction with GeNisa. In general, the evaluation focused on the novel characteristics and performance of the GeNisa architecture in comparison with other ITSs, and the results obtained are discussed and analysed. On the basis of the experience gained during the literature research and the development and evaluation of GeNisa, a generic methodology for ITS development is proposed, as well as requirements for the further development of ITS tools. Finally, conclusions are drawn and areas for further research are identified.
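
    To make the component list above concrete, the sketch below shows one way such an architecture might be wired together. It is a minimal illustration only: the thesis does not publish GeNisa's source, so every name, event, and parameter here (TutorialEventBus, the "answer" event, the update rate, the 0.7 hint threshold) is a hypothetical assumption, written in TypeScript to match the abstract's client-side, browser-oriented setting.

    // Hypothetical sketch; names and parameters are assumptions, not GeNisa's API.

    // "Tutorial events communication module": routes interactions between components.
    interface TutorialEventBus {
      publish(event: string, payload: unknown): void;
      subscribe(event: string, handler: (payload: unknown) => void): void;
    }

    class SimpleEventBus implements TutorialEventBus {
      private handlers = new Map<string, Array<(payload: unknown) => void>>();

      publish(event: string, payload: unknown): void {
        for (const h of this.handlers.get(event) ?? []) h(payload);
      }

      subscribe(event: string, handler: (payload: unknown) => void): void {
        const list = this.handlers.get(event) ?? [];
        list.push(handler);
        this.handlers.set(event, list);
      }
    }

    // Stand-in for the probabilistic knowledge base: each rule carries a
    // confidence that is nudged toward observed learner outcomes.
    class ProbabilisticKnowledgeBase {
      private rules = new Map<string, number>(); // rule id -> P(mastered)

      observe(ruleId: string, succeeded: boolean, rate = 0.1): void {
        const p = this.rules.get(ruleId) ?? 0.5;
        this.rules.set(ruleId, p + rate * ((succeeded ? 1 : 0) - p));
      }

      confidence(ruleId: string): number {
        return this.rules.get(ruleId) ?? 0.5;
      }
    }

    // Tutor component: listens for answer events and decides when to hint.
    class Tutor {
      constructor(bus: TutorialEventBus, private kb: ProbabilisticKnowledgeBase) {
        bus.subscribe("answer", (payload) => {
          const { ruleId, correct } = payload as { ruleId: string; correct: boolean };
          this.kb.observe(ruleId, correct);
        });
      }

      hintNeeded(ruleId: string): boolean {
        return this.kb.confidence(ruleId) < 0.7; // illustrative threshold
      }
    }

    // Usage: wire the components and feed one observation through the bus.
    const bus = new SimpleEventBus();
    const tutor = new Tutor(bus, new ProbabilisticKnowledgeBase());
    bus.publish("answer", { ruleId: "fractions-add", correct: false });
    console.log(tutor.hintNeeded("fractions-add")); // true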

    INVESTIGATING THE IMPACT OF ONLINE HUMAN COLLABORATION IN EXPLANATION OF AI SYSTEMS

    An important subdomain in research on Human-Artificial Intelligence interaction is Explainable AI (XAI). XAI aims to improve human understanding of, and trust in, machine intelligence and automation by providing users with visualizations and other information that explain the AI’s decisions, actions, or plans, thereby establishing justified trust and reliance. XAI systems have primarily used algorithmic approaches designed to generate explanations automatically, but an alternative that may augment these systems is to take advantage of the fact that user understanding of AI systems often develops through self-explanation (Mueller et al., 2021). Users attempt to piece together different sources of information and develop a clearer understanding, but these self-explanations are often lost if not shared with others. This thesis research demonstrated how such self-explanations can be shared collaboratively via a system called Collaborative XAI (CXAI), which is akin to a social Q&A platform (Oh, 2018) such as StackExchange. A web-based system was built and evaluated formatively and via user studies. The formative evaluation shows how explanations in an XAI system, especially collaborative explanations, can be assessed against ‘goodness criteria’ (Mueller et al., 2019). This thesis also investigated how users performed with the explanations from this type of XAI system. Lastly, the research investigated whether users of the CXAI system are satisfied with the human-generated explanations it provides and whether they can trust this type of explanation.