    On the Use of Semantic-Based AIG to Automatically Generate Programming Exercises

    In introductory programming courses, proficiency is typically achieved through substantial practice in the form of relatively small assignments and quizzes. Unfortunately, creating programming assignments and quizzes is both time-consuming and error-prone. We use Automatic Item Generation (AIG) to address the problem of creating the numerous programming exercises needed for assignments and quizzes in introductory programming courses. AIG is based on test-item templates with embedded variables and formulas, which a computer program resolves with actual values to generate test items. Thus, hundreds or even thousands of test items can be generated from a single test-item template. We present a semantic-based AIG that uses linked open data (LOD) and automatically generates contextual programming exercises. The approach was incorporated into an existing self-assessment and practice tool for students learning computer programming. The tool has been used in several introductory programming courses to generate a set of practice exercises that is different for each student but of comparable difficulty and quality.
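
    A minimal sketch of the template mechanism described above, in Python: a test-item template contains embedded variables and a formula that a program resolves with sampled values to produce many concrete items. The template text, variable ranges, and function names are illustrative assumptions, not taken from the paper.

    import random

    # Test-item template with embedded variables; the "formula" computes the key.
    # Everything here is illustrative, not the paper's actual template format.
    TEMPLATE = (
        "A variable i starts at {start} and {step} is added to it on each of "
        "{n} loop iterations. What is the final value of i?"
    )

    def generate_items(num_items, seed=0):
        rng = random.Random(seed)
        items = []
        for _ in range(num_items):
            start = rng.randint(0, 5)
            step = rng.randint(1, 4)
            n = rng.randint(2, 6)
            stem = TEMPLATE.format(start=start, step=step, n=n)
            answer = start + step * n   # the embedded formula resolved with actual values
            items.append({"stem": stem, "answer": answer})
        return items

    if __name__ == "__main__":
        for item in generate_items(3):
            print(item["stem"], "->", item["answer"])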

    Automatic question generation about introductory programming code

    Many students who learn to program end up writing code they do not understand. Most available code-evaluation systems assess the submitted solution functionally rather than the knowledge of the person who submitted it. This dissertation proposes a system that generates questions about the code submitted by the student, analyses the student's answers, and returns the correct answers. In this way, students reflect on the code they have written, and teachers of programming courses can better pinpoint their difficulties. We carried out an experiment with undergraduate and master's students in Computer Science and related degrees in order to understand their difficulties and to test the prototype's robustness. We concluded that most students, although they understand simple details of the code they write, do not understand the behaviour of the program as a whole, especially with respect to program state. Improvements to the prototype and to the conduct of future experiments are also suggested.
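
    A hedged illustration of the kind of system described above, not the dissertation's actual implementation: parse the submitted program, pick a variable it assigns, execute the code, and ask the student for that variable's final value. The helper name and question wording are assumptions.

    import ast

    # Illustrative sketch: build a question about program state from a student's
    # submission by finding assigned variables, running the code, and asking for
    # a final value. Assumes the submission is trusted or executed in a sandbox.
    def question_about_state(source_code):
        tree = ast.parse(source_code)
        assigned = [
            node.targets[0].id
            for node in ast.walk(tree)
            if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name)
        ]
        if not assigned:
            return None
        namespace = {}
        exec(compile(tree, "<student submission>", "exec"), namespace)
        target = assigned[-1]
        return {
            "question": f"After the program finishes, what value does '{target}' hold?",
            "answer": namespace.get(target),
        }

    student_code = "total = 0\nfor k in range(4):\n    total = total + k\n"
    print(question_about_state(student_code))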

    Robosourcing Educational Resources -- Leveraging Large Language Models for Learnersourcing

    In this article, we introduce and evaluate the concept of robosourcing for creating educational content. Robosourcing lies at the intersection of crowdsourcing and large language models: instead of a crowd of humans, requests to large language models replace some of the work traditionally performed by the crowd. Robosourcing keeps a human in the loop to provide priming (input) and to evaluate and potentially adjust the generated artefacts; these evaluations could also be used to improve the large language models. We propose a system design that outlines the robosourcing process. We further study the feasibility of robosourcing in the context of education by evaluating robosourced programming exercises generated using OpenAI Codex. Our results suggest that robosourcing could significantly reduce the human effort required to create diverse educational content while maintaining quality similar to human-created content.
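
    A rough sketch of the human-in-the-loop pipeline the abstract outlines. The function names are assumptions for illustration; call_language_model is a placeholder for whatever LLM client is available (the paper used OpenAI Codex), not an API defined by the paper.

    # Robosourcing sketch: a human supplies priming, a large language model drafts
    # an exercise, and a human reviews the draft before it is published.
    def call_language_model(prompt):
        # Placeholder: plug in a real LLM client here.
        raise NotImplementedError("connect an LLM client to use this sketch")

    def draft_exercise(theme, concept):
        prompt = (
            f"Write a beginner programming exercise about {concept}, "
            f"set in the context of {theme}. Include a sample solution and two test cases."
        )
        return call_language_model(prompt)

    def robosource(theme, concept, review):
        draft = draft_exercise(theme, concept)      # priming -> generated artefact
        verdict = review(draft)                     # human-in-the-loop evaluation
        if verdict == "accept":
            return draft
        if verdict == "revise":
            return draft_exercise(theme, concept + " (with simpler wording)")
        return None                                 # rejected drafts are discarded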

    The potential of Open Data to automatically create learning resources for smart learning environments

    Smart Education requires bridging formal and informal learning experiences. However, how to create the contextualized learning resources that support this bridging remains an open problem. In this paper, we propose to exploit the open data available on the Web to automatically create contextualized learning resources. Our preliminary results are promising: our system creates thousands of learning resources related to formal-education concepts and physical locations in the student's local municipality. As future work, we will explore how to integrate these resources into a Smart Learning Environment. Funded by Ministerio de Ciencia e Innovación - Fondo Europeo de Desarrollo Regional (grant TIN2017-85179-C3-2-R) and Junta de Castilla y León - Fondo Europeo de Desarrollo Regional (grant VA257P18).
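
    An illustrative sketch of pulling location-linked facts from open data, in the spirit of the approach above. The endpoint is DBpedia's public SPARQL service; the query shape and the choice of Valladolid as the municipality are assumptions for the example, not details from the paper.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Ask DBpedia for resources located in a given municipality; their labels can
    # then seed contextualized learning resources. Illustrative query only.
    QUERY = """
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX dbr:  <http://dbpedia.org/resource/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?place ?label WHERE {
      ?place dbo:location dbr:Valladolid ;
             rdfs:label ?label .
      FILTER (lang(?label) = "en")
    } LIMIT 10
    """

    def local_resources():
        endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
        endpoint.setQuery(QUERY)
        endpoint.setReturnFormat(JSON)
        results = endpoint.query().convert()
        return [row["label"]["value"] for row in results["results"]["bindings"]]

    if __name__ == "__main__":
        for name in local_resources():
            print("Candidate context:", name)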

    Let's Ask Students About Their Programs, Automatically

    Students sometimes produce code that works but that its author does not comprehend. For example, a student may apply a poorly understood code template, stumble upon a working solution through trial and error, or plagiarize. Similarly, passing an automated functional assessment does not guarantee that the student understands their code. One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs. We propose an approach to automatically generate questions about student-written program code. We moreover propose a use case for such questions in the context of automatic assessment systems: after a student's program passes unit tests, the system poses questions to the student about the code. We suggest that these questions can enhance assessment systems, deepen student learning by acting as self-explanation prompts, and provide a window into students' program comprehension. This discussion paper sets an agenda for future technical development and empirical research on the topic.
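
    A hedged sketch of generating questions about a learner's own code from its structure alone, along the lines proposed above. The question phrasings and function name are illustrative assumptions, not the paper's.

    import ast

    # Walk the submitted program's AST and emit comprehension questions about the
    # constructs it actually contains. Purely illustrative question templates.
    def questions_about(source_code):
        tree = ast.parse(source_code)
        questions = []
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                questions.append(f"In your own words, what does the function '{node.name}' do?")
            elif isinstance(node, ast.For):
                questions.append(f"How many times does the loop body starting on line {node.lineno} execute?")
            elif isinstance(node, ast.While):
                questions.append(f"Under what condition does the loop on line {node.lineno} stop?")
        return questions

    submission = (
        "def total(xs):\n"
        "    s = 0\n"
        "    for x in xs:\n"
        "        s += x\n"
        "    return s\n"
    )
    for question in questions_about(submission):
        print(question)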

    An Algorithm for Generating Gap-Fill Multiple Choice Questions of an Expert System

    This research proposes an artificial-intelligence algorithm comprising an ontology-based design, text mining, and natural language processing for automatically generating gap-fill multiple choice questions (MCQs). A simulation demonstrates the algorithm generating gap-fill MCQs about software testing: using 103 online documents as input, the algorithm automatically produced more than 16 thousand valid gap-fill MCQs covering a variety of topics in the software testing domain. Finally, in the discussion section we suggest how the proposed algorithm can be applied to build a question pool of gap-fill MCQs for use by a knowledge-based expert system.
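
    A minimal sketch of the gap-fill step in the spirit of the algorithm above: blank out a known domain term in a sentence mined from a source document and draw distractors from the remaining terms. The term list, sentence, and function name are assumptions, not outputs of the paper's system.

    import random

    # Known domain terms (in a real system these would come from the ontology and
    # text-mining stages); used both as answer keys and as distractors.
    DOMAIN_TERMS = [
        "unit testing",
        "regression testing",
        "boundary value analysis",
        "equivalence partitioning",
    ]

    def gap_fill_mcq(sentence, seed=0):
        rng = random.Random(seed)
        key = next((term for term in DOMAIN_TERMS if term in sentence), None)
        if key is None:
            return None                      # no recognizable term to blank out
        stem = sentence.replace(key, "______", 1)
        options = rng.sample([t for t in DOMAIN_TERMS if t != key], 3) + [key]
        rng.shuffle(options)
        return {"stem": stem, "options": options, "answer": key}

    sentence = "In practice, unit testing exercises individual components in isolation."
    print(gap_fill_mcq(sentence))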

    Supporting contextualized learning with linked open data

    This paper proposes a template-based approach to semi-automatically create contextualized learning tasks from several sources in the Web of Data. Contextualizing learning tasks opens the possibility of bridging the formal learning that happens in a classroom and the informal learning that happens in other physical spaces, such as squares or historical buildings. The tasks created cover different cognitive levels and are contextualized by their location and the topics covered. We applied this approach to the domain of History of Art in the Spanish region of Castile and Leon: we gathered data from DBpedia, Wikidata and the Open Data published by the regional government, and we applied 32 templates to obtain 16K learning tasks. An evaluation with 8 teachers shows that they would be willing to have their students carry out the generated tasks. Teachers also considered that 85% of the generated tasks are aligned with the content taught in the classroom and relevant for learning in other, informal spaces. The tasks created are available at https://casuallearn.gsic.uva.es/sparql. Funded by Junta de Castilla y León (grant VA257P18) and Fondo Europeo de Desarrollo Regional - Agencia Nacional de Investigación (grant TIN2017-85179-C3-2-R).
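
    A hedged sketch of the template-instantiation step described above: task templates with placeholders that are filled from rows returned by queries over sources such as DBpedia or Wikidata. The two templates and the sample rows below are illustrative; they are not among the paper's 32 templates.

    # Fill learning-task templates from query results; templates whose
    # placeholders a row cannot satisfy are simply skipped for that row.
    TASK_TEMPLATES = [
        "Visit {label} and identify two features typical of the {style} style.",
        "Standing in front of {label}, explain why it was built where it is.",
    ]

    def instantiate_tasks(rows):
        tasks = []
        for row in rows:                     # each row would come from a SPARQL query
            for template in TASK_TEMPLATES:
                try:
                    tasks.append(template.format(**row))
                except KeyError:             # row lacks a placeholder value
                    continue
        return tasks

    rows = [
        {"label": "Iglesia de Santa María la Antigua", "style": "Romanesque"},
        {"label": "Catedral de Valladolid"},  # no style: only the second template applies
    ]
    for task in instantiate_tasks(rows):
        print(task)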

    A Framework for the Pre-Calibration of Automatically Generated Items

    This paper presents a new conceptual framework and corresponding psychometric model designed for the pre-calibration of automatically generated items. This model utilizes a multi-level framework and a combination of crossed fixed and random effects to capture key components of the generative process, and is intended to be broadly applicable across research efforts and contexts. Unique among models proposed within the AIG literature, this model incorporates specific mean and variance parameters to support the direct assessment of the quality of the item generation process. The utility of this framework is demonstrated through an empirical analysis of response data collected from the online administration of automatically generated items intended to assess young students' mathematics fluency. Limitations in the application of the proposed framework are explored through targeted simulation studies, and future directions for research are discussed.
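
    One plausible shape for such a family-level model, written out for illustration only (a generic item-family random-effects formulation, not the paper's exact specification): person p with ability theta_p answers generated item i with difficulty b_i, item difficulties are drawn around the mean of the template (family) f(i) that generated them, and the family mean and variance then characterize the quality of the generation process.

    % Illustrative item-family model; \mu_{f(i)} and \sigma^2_{f(i)} are the
    % family-level mean and variance parameters discussed in the abstract.
    \begin{align}
      \Pr(y_{pi} = 1 \mid \theta_p, b_i) &= \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},\\
      b_i &\sim \mathcal{N}\!\left(\mu_{f(i)}, \sigma^2_{f(i)}\right), \qquad
      \theta_p \sim \mathcal{N}\!\left(0, \sigma^2_\theta\right).
    \end{align}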