
    Technology to Improve the Assessment of Learning

    Assessment methodology is one of the indicators of teaching quality, to the point that the saying "tell me how you assess and I will tell you how you teach" is widely accepted. To that we can add: and with what technology? Learning experiences should help us think and reflect on what we learn, learn about ourselves, and self-regulate our learning. Technologies offer many functions and possibilities to improve these processes: diagnostic, summative, and formative assessment; personalizing teaching; communicating and reflecting on what has been learned; making feedback more interactive and immediate; making activities more motivating; and making assessment data easier and faster to manage, which is essential in e-learning, b-learning, and m-learning models. At the same time, these digital resources may present problems that we must also attend to. In summary, technologies are essential resources for the assessment of learning. This work addresses two questions: With what technologies can we develop a formative assessment? What emerging technologies exist for assessment?

    A Literature Review on Intelligent Services Applied to Distance Learning

    Distance learning has assumed a relevant role in the educational scenario. The use of Virtual Learning Environments contributes to obtaining a substantial amount of educational data. In this sense, the analyzed data generate knowledge used by institutions to assist managers and professors in strategic planning and teaching. The discovery of students’ behaviors enables a wide variety of intelligent services for assisting in the learning process. This article presents a literature review in order to identify the intelligent services applied in distance learning. The research covers the period from January 2010 to May 2021. The initial search found 1316 articles, among which 51 were selected for further studies. Considering the selected articles, 33% (17/51) focus on learning systems, 35% (18/51) propose recommendation systems, 26% (13/51) approach predictive systems or models, and 6% (3/51) use assessment tools. This review allowed for the observation that the principal services offered are recommendation systems and learning systems. In these services, the analysis of student profiles stands out to identify patterns of behavior, detect low performance, and identify probabilities of dropouts from courses.

    CAD training for digital product quality: a formative approach with computer‑based adaptable resources for self‑assessment

    As the engineering and manufacturing sectors transform their processes into those of a digital enterprise, future designers and engineers must be trained to guarantee the quality of the digital models that are created and consumed throughout the product’s lifecycle. Formative training approaches, particularly those based on online rubrics, have been proven highly effective for improving CAD modeling practices and the quality of the corresponding outcomes. However, an effective use of formative rubrics to improve performance must consider two main factors: a proper understanding of the rubric and an accurate self-assessment. In this paper we develop these factors by proposing CAD training based on self-assessment through online formative rubrics enriched with adaptable resources. We analyzed self-assessment data, such as time spent, scoring differences between trainee and instructor, or use of the adaptable resources, from fourteen different CAD exams. Results show that resources are more effective when used without any incentives. The comparison of assessments by quality criterion can facilitate the identification of issues that may remain unclear to trainees during the learning process. These results can guide the definition of new strategies for self-training processes and tools, which can contribute to the higher-quality outcomes and CAD practices that are required in model-based engineering environments.
    Agost, M., Company, P., Contero, M., & Camba, J. D. (2022). CAD training for digital product quality: a formative approach with computer-based adaptable resources for self-assessment. International Journal of Technology and Design Education, 32(2), 1393–1411. https://doi.org/10.1007/s10798-020-09651-5

    Learning Outcomes Assessment: A Practitioner’s Handbook

    Ontario’s colleges and universities have made strides in developing learning outcomes, yet effective assessment remains a challenge. Learning Outcomes Assessment: A Practitioner’s Handbook is a step-by-step resource to help faculty, staff, academic leaders and educational developers design, review and assess program-level learning outcomes. The handbook explores the theory, principles, reasons for and methods behind developing program-level learning outcomes; emerging developments in assessment; and tips and techniques to build institutional culture, increase faculty involvement and examine curriculum-embedded assessment. It also includes definitions, examples, case studies and recommendations that can be tailored to specific institutional cultures.

    Performance Assessment Practice as Professional Learning

    While performance assessment (PA) is well aligned to project-based learning (PjBL), teachers find it challenging to design and implement PA that is faithful to the authentic context of their projects and viewed externally as rigorous. In contrast to standardizing PA tasks (thereby diminishing authenticity), we formed a research-practice partnership (Coburn, Penuel, & Geil, 2013) that developed and used a “shell” to guide teachers in planning, implementing, and engaging in rigorous dialogues that evaluate and elevate PA practice across four PjBL schools. Drawing from analysis of artifacts and audio-recorded professional development sessions, we highlight how the effort to standardize PA practice while maintaining fidelity to authentic context provided rich opportunities for teacher learning and fostered higher levels of teacher responsibility for assessment.

    February 2019

    A new Teaching and Learning Coordinator has been hired! Mr. Steve Ray has joined the ranks of the Center for Excellence in Teaching and Learning. Please contact Steve and give him a warm SWOSU welcome!

    A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy

    Effective learning depends on effective feedback, which in turn requires a set of skills, dispositions and practices on the part of both students and teachers that have been termed feedback literacy. A previously published teacher feedback literacy competency framework has identified what teachers need in order to implement feedback well. While this framework refers in broad terms to the potential uses of educational technologies, it does not examine in detail the new possibilities of automated feedback (AF) tools, especially those that are open, in the sense that they offer varying degrees of transparency and control to teachers. Using analytics and artificial intelligence, open AF tools permit automated processing and feedback with a speed, precision and scale that exceeds that of humans. This raises important questions about how human and machine feedback can be combined optimally and what is now required of teachers to use such tools skillfully. The paper addresses two research questions: Which teacher feedback competencies are necessary for the skilled use of open AF tools? And what does the skilled use of open AF tools add to our conceptions of teacher feedback competencies? We conduct an analysis of published evidence concerning teachers’ use of open AF tools through the lens of teacher feedback literacy, which produces summary matrices revealing relative strengths and weaknesses in the literature, as well as the relevance of the feedback literacy framework. We conclude, first, that when used effectively, open AF tools exercise a range of teacher feedback competencies; the paper thus offers a detailed account of the nature of teachers’ feedback literacy practices in this context. Second, the analysis reveals gaps in the literature, signalling opportunities for future work. Third, we propose several examples of automated feedback literacy, that is, distinctive teacher competencies linked to the skilled use of open AF tools.

    Role of LMS Assessment Tools

    To address the crisis of low retention and graduation rates at two-year colleges, public policy has focused on accountability and evidence of achievement based on outcomes. Further, online learning environments and their tools have caused a major paradigm shift in the policies, practices and learning processes of higher education. However, research on the use of technology for educational purposes and on the use of LMS tools for learning outcomes is still limited. This mixed-methods action research (MMAR) study explored the use of LMS assessment tools among faculty teaching English Composition at a community college. The aim was to increase their use for informed decision-making on student outcomes through faculty-led workshops. Based on the findings, the V-FLC was able to develop a plan for further and sustained use of the tools and make recommendations to the division for wide-scale adoption and use. This study’s findings also contribute to the existing literature addressing faculty needs and the role of grassroots leadership in LMS tool use to inform assessment practices.