
    Investigating the Essential of Meaningful Automated Formative Feedback for Programming Assignments

    This study investigated the essentials of meaningful automated feedback for programming assignments. Three types of feedback were tested: (a) What's wrong - what the test cases were testing and which of them failed, (b) Gap - comparisons between expected and actual outputs, and (c) Hint - suggestions on how to fix the problem when test cases failed. 46 students taking a CS2 course participated in the study. They were divided into three groups with different feedback configurations: (1) Group One - What's wrong, (2) Group Two - What's wrong + Gap, (3) Group Three - What's wrong + Gap + Hint. The study found that simply knowing what failed did not help students sufficiently and might stimulate system-gaming behavior. Hints were not found to affect student performance or their usage of the automated feedback. Based on these findings, the study provides practical guidance on the design of automated feedback.
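
    The three feedback configurations lend themselves to a simple layering. The sketch below is a minimal illustration, not the study's actual autograder: a single hypothetical test case (all names invented here) is run against a student solution, and the What's wrong, Gap, and Hint messages are added depending on the configured level.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.IntUnaryOperator;

    public class FeedbackSketch {
        enum Level { WHATS_WRONG, GAP, HINT }

        // Hypothetical test case: description, input, expected output, and a fix hint.
        record TestCase(String description, int input, int expected, String hint) {}

        static List<String> grade(IntUnaryOperator studentSolution, TestCase tc, Level level) {
            List<String> feedback = new ArrayList<>();
            int actual = studentSolution.applyAsInt(tc.input());
            boolean passed = actual == tc.expected();
            // (a) What's wrong: what the test case checks and whether it failed.
            feedback.add("Test '" + tc.description() + "': " + (passed ? "PASSED" : "FAILED"));
            if (!passed && level.ordinal() >= Level.GAP.ordinal()) {
                // (b) Gap: expected vs. actual output.
                feedback.add("  expected " + tc.expected() + " but got " + actual);
            }
            if (!passed && level.ordinal() >= Level.HINT.ordinal()) {
                // (c) Hint: a pointer towards fixing the problem.
                feedback.add("  hint: " + tc.hint());
            }
            return feedback;
        }

        public static void main(String[] args) {
            TestCase tc = new TestCase("square of a negative number", -3, 9,
                    "check whether the sign of the input affects your result");
            IntUnaryOperator buggySolution = x -> x * Math.abs(x);  // fails for negative inputs
            for (Level level : Level.values()) {
                System.out.println(level + ": " + grade(buggySolution, tc, level));
            }
        }
    }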

    Automated Feedback for Learning Code Refactoring

    Get PDF

    Adaptive Scaffolding in Block-Based Programming via Synthesizing New Tasks as Pop Quizzes

    Block-based programming environments are increasingly used to introduce computing concepts to beginners. However, novice students often struggle in these environments, given the conceptual and open-ended nature of programming tasks. To effectively support a student struggling to solve a given task, it is important to provide adaptive scaffolding that guides the student towards a solution. We introduce a scaffolding framework based on pop quizzes presented as multi-choice programming tasks. To automatically generate these pop quizzes, we propose a novel algorithm, PQuizSyn. More formally, given a reference task with a solution code and the student's current attempt, PQuizSyn synthesizes new tasks for pop quizzes with the following features: (a) Adaptive (i.e., individualized to the student's current attempt), (b) Comprehensible (i.e., easy to comprehend and solve), and (c) Concealing (i.e., do not reveal the solution code). Our algorithm synthesizes these tasks using techniques based on symbolic reasoning and graph-based code representations. We show that our algorithm can generate hundreds of pop quizzes for different student attempts on reference tasks from Hour of Code: Maze Challenge and Karel. We assess the quality of these pop quizzes through expert ratings using an evaluation rubric. Further, we have built an online platform for practicing block-based programming tasks empowered via pop-quiz-based feedback, and report results from an initial user study. Comment: Preprint. Accepted as a paper at the AIED'22 conference.
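
    PQuizSyn itself relies on symbolic reasoning over graph-based code representations, which is not reproduced here. The sketch below, with invented names and example options, only illustrates the multiple-choice shape such a synthesized pop quiz could take and how a chosen option would be checked.

    import java.util.List;

    public class PopQuizSketch {
        // One synthesized task: a short prompt plus candidate code fragments,
        // exactly one of which solves the new, solution-concealing task.
        record PopQuiz(String prompt, List<String> options, int correctIndex) {}

        static boolean check(PopQuiz quiz, int chosen) {
            return chosen == quiz.correctIndex();
        }

        public static void main(String[] args) {
            PopQuiz quiz = new PopQuiz(
                "Which block sequence moves the avatar around the obstacle in the new maze?",
                List.of("repeat(2){ moveForward(); }",
                        "turnLeft(); moveForward(); turnRight(); moveForward();",
                        "turnRight(); turnRight();"),
                1);
            System.out.println(quiz.prompt());
            for (int i = 0; i < quiz.options().size(); i++) {
                System.out.println("  (" + (char) ('a' + i) + ") " + quiz.options().get(i));
            }
            System.out.println("answer (b) correct: " + check(quiz, 1));
        }
    }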

    A serious game for teaching Java cybersecurity in the industry with an intelligent coach

    Cybersecurity has been gaining more and more attention over the past years. Nowadays we continue to see a rise in the number of known vulnerabilities and successful cyber-attacks. Several studies show that one of the causes of these problems is the lack of awareness of software developers. If software developers are not aware of how to write secure code, they can unknowingly add vulnerabilities to software. This research focuses on raising Java developers' cybersecurity awareness through a serious-game approach. Our artifact, the Java Cybersecurity Challenges, consists of programming exercises that intend to give software developers hands-on experience with security-related vulnerabilities in the Java programming language. Our designed solution includes an intelligent coach that aims at helping players understand the vulnerabilities and solve the challenges. The present research was conducted using the Action Design Research methodology, which allowed us to reach a useful solution to the encountered problem by applying an iterative development approach. Our results show that the final artifact is a good answer to the defined problem and has been accepted and incorporated into an industry training program. This work contributes to researchers and practitioners through a detailed description of the implementation of an automatic code analysis and feedback process to evaluate the security level of the Java Cybersecurity Challenges.
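
    The abstract does not name the specific vulnerabilities covered, so the sketch below is only an assumed illustration of an automatic code analysis and feedback step of this kind: a single pattern-based check that flags string-concatenated SQL (a common injection risk in Java) in a submitted snippet and phrases the finding the way a coach might. The names and the rule itself are hypothetical, not the system's actual checks.

    import java.util.regex.Pattern;

    public class SecurityFeedbackSketch {
        // Very rough heuristic: a SQL keyword inside a string literal followed by '+'.
        private static final Pattern CONCATENATED_SQL =
                Pattern.compile("\"\\s*(SELECT|INSERT|UPDATE|DELETE)[^\"]*\"\\s*\\+",
                        Pattern.CASE_INSENSITIVE);

        static String review(String submittedSource) {
            if (CONCATENATED_SQL.matcher(submittedSource).find()) {
                return "Possible SQL injection: the query is built by string concatenation. "
                     + "Consider a PreparedStatement with bound parameters.";
            }
            return "No issues found by this check.";
        }

        public static void main(String[] args) {
            String submission =
                "stmt.executeQuery(\"SELECT * FROM users WHERE name = '\" + userInput + \"'\");";
            System.out.println(review(submission));
        }
    }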

    Interactive correction and recommendation for computer language learning and training

    Active learning and training is a particularly effective form of education. In various domains, skills are as important as knowledge. We present an automated learning and skills-training system for a database programming environment that promotes procedural knowledge acquisition and skills training. The system provides meaningful, knowledge-level feedback such as correction of student solutions and personalised guidance through recommendations. Specifically, we address automated synchronous feedback and recommendations based on personalised performance assessment. At the core of the tutoring system is a pattern-based error classification and correction component that analyses student input in order to provide immediate feedback, to diagnose student weaknesses, and to suggest further study material. A syntax-driven approach based on grammars and syntax trees underpins the semantic analysis technique. Syntax-tree abstractions and comparison techniques based on equivalence rules and pattern matching are the specific approaches used.
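
    The system described above works on syntax trees rather than raw text, and its actual patterns are not listed in the abstract, so the sketch below is only a loose, assumed illustration of pattern-based error classification for a database-programming tutor: each known mistake pattern maps to an immediate, knowledge-level piece of feedback. All rules and messages are invented here.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    public class SqlErrorClassifierSketch {
        // Illustrative error patterns, each paired with corrective feedback.
        private static final Map<Pattern, String> RULES = new LinkedHashMap<>();
        static {
            RULES.put(Pattern.compile("=\\s*NULL", Pattern.CASE_INSENSITIVE),
                      "Comparison with NULL: use IS NULL instead of = NULL.");
            RULES.put(Pattern.compile("\\bWHERE\\b.*\\b(COUNT|SUM|AVG|MIN|MAX)\\s*\\(",
                            Pattern.CASE_INSENSITIVE),
                      "Aggregate in WHERE: filter aggregated values with HAVING.");
        }

        static String classify(String studentQuery) {
            for (var rule : RULES.entrySet()) {
                if (rule.getKey().matcher(studentQuery).find()) {
                    return rule.getValue();  // immediate, knowledge-level feedback
                }
            }
            return "No known error pattern matched.";
        }

        public static void main(String[] args) {
            System.out.println(classify("SELECT name FROM staff WHERE manager = NULL"));
            System.out.println(classify("SELECT dept FROM staff WHERE COUNT(*) > 5"));
        }
    }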