
    Responsible research and innovation in science education: insights from evaluating the impact of using digital media and arts-based methods on RRI values

    The European Commission policy approach of Responsible Research and Innovation (RRI) is gaining momentum in European research planning and development as a strategy to align scientific and technological progress with socially desirable and acceptable ends. One of the RRI agendas is science education, which aims to foster future generations' acquisition of the skills and values needed to engage in society responsibly. To this end, it is argued that RRI-based science education can benefit from more interdisciplinary methods, such as those based on the arts and digital technologies. However, existing evidence on the impact of science education activities using digital media and arts-based methods on RRI values remains underexplored. This article comparatively reviews previous evidence on the evaluation of these activities, from primary to higher education, to examine whether and how RRI-related learning outcomes are evaluated and how these activities impact on students' learning. Forty academic publications were selected and their content analysed according to five RRI values: creative and critical thinking, engagement, inclusiveness, gender equality and integration of ethical issues. When evaluating the impact of digital and arts-based methods in science education activities, creative and critical thinking, engagement and, partly, inclusiveness are the RRI values mainly addressed. In contrast, gender equality and ethics integration are neglected. Digital-based methods seem to be more focused on students' questioning and inquiry skills, whereas those using the arts often examine imagination, curiosity and autonomy. Differences in the evaluation focus between studies on digital media and those on the arts partly explain differences in their impact on RRI values, but also result in undocumented outcomes and undermine their potential. Further developments in interdisciplinary approaches to science education following the RRI policy agenda should reinforce the design of the activities as well as procedural aspects of the evaluation research.

    Framing automatic grading techniques for open-ended questionnaires responses. A short survey

    The assessment of students' performance is one of the essential components of teaching, and it poses different challenges to teachers and instructors, especially when grading responses to open-ended questions (i.e., short answers or essays). Open-ended tasks allow a more in-depth assessment of students' learning levels, but their evaluation and grading are time-consuming and prone to subjective bias. For these reasons, automatic grading techniques have been studied for a long time, focusing mainly on short answers rather than long essays. Given the growing popularity of Massive Open Online Courses and the shift from physical to virtual classroom environments due to the Covid-19 pandemic, the adoption of questionnaires for evaluating learning performance has rapidly increased. Hence, it is of particular interest to analyze the recent efforts of researchers in developing techniques designed to grade students' responses to open-ended questions. In our work, we conduct a systematic literature review focusing on the automatic grading of open-ended written assignments. The study encompasses 488 articles published from 1984 to 2021 and aims at understanding the research trends and the techniques used to tackle automatic essay grading. Lastly, inferences and recommendations are given for future work in the Learning Analytics field.
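
    Surveys in this area typically span similarity-based baselines through supervised and neural models. As a rough illustration only (not a technique attributed to this article; the function name, reference answer, and scoring rule are assumptions), a minimal short-answer grader can score a response by its lexical similarity to a reference answer:

        # Minimal similarity-based short-answer grader (illustrative sketch).
        # Requires scikit-learn; real systems add supervised models and rubrics.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def grade_short_answer(reference: str, answer: str, max_points: int = 5) -> float:
            """Score an answer by TF-IDF cosine similarity to a reference answer."""
            vectors = TfidfVectorizer().fit_transform([reference, answer])
            similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
            return round(similarity * max_points, 1)

        reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
        print(grade_short_answer(reference, "Plants turn light into chemical energy as glucose."))

    Such lexical baselines are cheap but blind to paraphrase, which is one reason the surveyed literature has moved toward semantic and machine-learning approaches.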

    Automated generation and correction of diagram-based exercises for Moodle

    One of the most time-consuming tasks for teachers is creating and correcting exercises to evaluate students. This is normally done by hand, which incurs high time costs and is error-prone. A way to alleviate this problem is to provide an assistant tool that automates such tasks. In the case of exercises based on diagrams, these can be represented as models to enable their automated, model-based generation for any target environment, such as web or mobile applications, or learning platforms like MOODLE. In this paper, we propose an automated process for synthesizing five types of diagram-based exercises for the MOODLE platform. Being model-based, our solution is domain-agnostic (i.e., it can be applied to arbitrary domains like automata, electronics, or software design). We report on its use within a university course on automata theory, as well as evaluations of generality, effectiveness and efficiency, illustrating the benefits of our approach. Funding: Comunidad de Madrid, Grant/Award Number S2018/TCS-4314; Ministerio de Ciencia e Innovación, Grant/Award Numbers PID2021-122270OB-I00 and TED2021-129381B-C2.
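
    The abstract does not detail the synthesis pipeline, but the general shape of model-based exercise generation can be sketched: take a formal model (here a DFA, matching the automata-theory course mentioned above) and emit a question in Moodle's XML import format. This is a hedged illustration; the model encoding, question wording, and helper names are assumptions, not the paper's implementation:

        # Illustrative sketch: generate a Moodle XML multiple-choice question
        # from a small automaton model (not the paper's actual generator).
        from xml.sax.saxutils import escape

        # DFA over {0, 1} accepting strings that end in 1.
        dfa = {"start": "q0", "accepting": {"q1"},
               "delta": {("q0", "0"): "q0", ("q0", "1"): "q1",
                         ("q1", "0"): "q0", ("q1", "1"): "q1"}}

        def accepts(dfa, word):
            """Run the DFA on a word and report acceptance."""
            state = dfa["start"]
            for symbol in word:
                state = dfa["delta"][(state, symbol)]
            return state in dfa["accepting"]

        def moodle_multichoice(dfa, candidates):
            """Emit a Moodle XML multiple-choice question; correct options get fraction 100."""
            answers = "".join(
                '<answer fraction="%d"><text>%s</text></answer>'
                % (100 if accepts(dfa, w) else 0, escape(w))
                for w in candidates)
            return ('<quiz><question type="multichoice">'
                    "<name><text>DFA acceptance</text></name>"
                    '<questiontext format="html">'
                    "<text>Which word does the automaton accept?</text></questiontext>"
                    "%s</question></quiz>" % answers)

        print(moodle_multichoice(dfa, ["10", "01", "00"]))

    Because the grading metadata (the answer fractions) is computed from the model itself, the same pattern extends to any domain with an executable semantics, which is what makes such an approach domain-agnostic.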

    Towards grading automation of open questions using machine learning

    Deep Learning Approach for cognitive competency assessment in Computer Programming subject

    This research examines the competencies that are essential for a lecturer or instructor to evaluate students through automated assessments. Competencies are the skills, knowledge, abilities and behavior required to perform a given task, whether in a learning or a working environment. The significance of this research is that it will assist students who have difficulty learning a Computer Programming Language course to identify their weaknesses using a deep learning approach. Higher education institutions face a problem in assessing students by competency level because they still mark assessments manually. In order to measure intelligence, it is necessary to identify the cluster of abilities or skills through which intelligence expresses itself; this grouping of skills and abilities is referred to as "competency". An automated assessment, in turn, is a problem-solving activity in which the student and the computer interact with no other human intervention. This review focuses on collecting the different techniques that have been used. In addition, the review's findings show the main gaps that exist within the studied areas, which contribute to our key research topic of interest.

    Supporting mediated peer-evaluation to grade answers to open-ended questions

    We show an approach to the semi-automatic grading of answers given by students to open-ended questions (open answers), using both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, are represented in a Bayesian Network, in which the grades of the answers and the elements of the learner models are variables with values in a probability distribution. The initial state of the network is determined by the peer-assessment data; then, each teacher grading of an answer triggers evidence propagation in the network. The framework is implemented in a web-based system. We also present an experimental activity set up to verify the effectiveness of the approach, in terms of the correctness of system grading, the amount of teacher work required, and the correlation of system outputs with teachers' grades and students' final exam grades.
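
    The abstract names the machinery (a Bayesian network over answer grades and learner-model variables) without giving its structure. As a hedged sketch of the underlying idea only, the fragment below infers a latent grade for one answer from peer ratings weighted by each rater's Judgment; the two-level grade space, uniform prior, and noise model are illustrative assumptions, not the paper's parameters:

        # Hedged sketch: posterior over a latent answer grade given noisy peer
        # ratings, by direct enumeration (a stand-in for network propagation).
        GRADES = ("bad", "good")
        PRIOR = {"bad": 0.5, "good": 0.5}  # assumed uniform prior

        def likelihood(rating, true_grade, judgment):
            """A rater with Judgment j agrees with the true grade with probability j."""
            return judgment if rating == true_grade else 1.0 - judgment

        def posterior(ratings):
            """P(true grade | ratings); `ratings` is a list of (rating, judgment) pairs."""
            unnorm = {}
            for g in GRADES:
                p = PRIOR[g]
                for rating, judgment in ratings:
                    p *= likelihood(rating, g, judgment)
                unnorm[g] = p
            z = sum(unnorm.values())
            return {g: p / z for g, p in unnorm.items()}

        # Two reliable peers say "good"; one near-random peer says "bad".
        print(posterior([("good", 0.8), ("good", 0.8), ("bad", 0.55)]))

    In the full system a teacher's grade would enter as hard evidence and propagate to the remaining answers and to the Knowledge and Judgment variables, which is what would let a few teacher gradings calibrate many peer-assessed answers.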