
    Gamifying a Software Testing Course with Continuous Integration

    Full text link
    Testing plays a crucial role in software development, and it is essential for software engineering students to receive proper testing education. However, motivating students to write tests and use automated testing during software development can be challenging. To address this issue and enhance student engagement with testing while they write code, we propose to incentivize students to test more by gamifying continuous integration. For this, we use Gamekins, a tool that is seamlessly integrated into the Jenkins continuous integration platform and uses game elements based on commits to the source code repository: Developers can earn points by completing test challenges and quests generated by Gamekins, compete with other developers or teams on a leaderboard, and receive achievements for their test-related accomplishments. In this paper, we present our integration of Gamekins into an undergraduate-level course on software testing. We observe a correlation between how students test their code and their use of Gamekins, as well as a significant improvement in the accuracy of their results compared to a previous iteration of the course without gamification. As a further indicator of how this approach improves testing behavior, the students reported enjoying writing tests with Gamekins.
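    To illustrate the kind of mechanism described above, the following is a minimal sketch of a commit-based coverage challenge in TypeScript. The types, names, and point values are hypothetical illustrations, not Gamekins' actual implementation.

```typescript
// Hypothetical model of a coverage challenge: it is solved once the targeted
// class's line coverage rises above the baseline recorded when the challenge
// was generated, and solving it awards leaderboard points.
interface CoverageChallenge {
  className: string;        // class picked from the committed sources
  baselineCoverage: number; // line coverage at challenge-generation time (0..1)
  points: number;           // reward for solving the challenge
}

function isSolved(challenge: CoverageChallenge, currentCoverage: number): boolean {
  return currentCoverage > challenge.baselineCoverage;
}

function scoreBuild(
  challenges: CoverageChallenge[],
  coverageByClass: Map<string, number>,
): number {
  // Sum the points of all challenges solved by the current build's coverage.
  return challenges
    .filter((c) => isSolved(c, coverageByClass.get(c.className) ?? 0))
    .reduce((sum, c) => sum + c.points, 0);
}
```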

    A Survey on What Developers Think About Testing

    Full text link
    Software is infamous for its poor quality and frequent occurrence of bugs. While there is no doubt that thorough testing is an appropriate answer to ensure sufficient quality, the poor state of software generally suggests that developers may not always engage as thoroughly with testing as they should. This observation aligns with the prevailing belief that developers simply do not like writing tests. In order to determine the truth of this belief, we conducted a comprehensive survey with 21 questions aimed at (1) assessing developers' current engagement with testing and (2) identifying factors influencing their inclination toward testing; that is, whether they would actually like to test more but are inhibited by their work environment, or whether they would really prefer to test even less if given the choice. Drawing on 284 responses from professional software developers, we uncover reasons that positively and negatively impact developers' motivation to test. Notably, reasons for motivation to write more tests encompass not only a general pursuit of software quality but also personal satisfaction. However, developers nevertheless perceive testing as mundane and tend to prioritize other tasks. One approach emerging from the responses to mitigate these negative factors is to provide better recognition for developers' testing efforts.

    PlayTest: A Gamified Test Generator for Games

    Full text link
    Games are usually created incrementally, requiring repeated testing of the same scenarios, which is a tedious and error-prone task for game developers. Therefore, we aim to alleviate the game testing process by encapsulating it in a game called Playtest, which transforms the tiring testing process into a competitive game with a purpose. Playtest automates the generation of valuable test cases based on player actions, without the players even realising it. We envision the use of Playtest to crowdsource the task of testing games by giving players access to the respective games through our tool in the playtesting phases during the development process.
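    As a minimal sketch of the idea of deriving test cases from play, the snippet below records player actions during a session and turns them into a replayable test case. The PlayerAction type and recorder are hypothetical illustrations, not Playtest's actual API.

```typescript
// Record player actions while the game runs, then replay the ordered log as a
// deterministic regression test against later builds of the same scenario.
interface PlayerAction {
  tick: number;                       // frame at which the action occurred
  kind: "move" | "jump" | "interact"; // action kinds are illustrative
  payload?: Record<string, unknown>;  // e.g. movement direction
}

class ActionRecorder {
  private readonly actions: PlayerAction[] = [];

  record(action: PlayerAction): void {
    this.actions.push(action);
  }

  // Convert the recorded session into an ordered, replayable test case.
  toTestCase(): PlayerAction[] {
    return [...this.actions].sort((a, b) => a.tick - b.tick);
  }
}

// Usage: record while the player plays, replay later in the game loop.
const recorder = new ActionRecorder();
recorder.record({ tick: 24, kind: "jump" });
recorder.record({ tick: 10, kind: "move", payload: { dx: 1 } });
const testCase = recorder.toTestCase(); // [{ tick: 10, ... }, { tick: 24, ... }]
```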

    Code Critters: A Block-Based Testing Game

    Full text link
    Learning to program has become common in schools, higher education and individual learning. Although testing is an important aspect of programming, it is often neglected in education due to a perceived lack of time and knowledge, or simply because testing is considered less important or fun. To make testing more engaging, we therefore introduce Code Critters, a Tower Defense game based on testing concepts: The aim of the game is to place magic mines along the route taken by small "critters" from their home to a tower, such that the mines distinguish critters executing correct code from those executing buggy code. Code is shown and edited using a block-based language to make the game accessible for younger learners. The mines encode test inputs as well as test oracles, thus making testing an integral and fun component of the game.
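    The following is a minimal sketch of that mine concept in TypeScript: a mine pairs a test input with an oracle, and a critter is flagged when the code it executes disagrees with the oracle. All names and the example code are hypothetical illustrations, not Code Critters' actual implementation.

```typescript
// A mine encodes a test input plus a test oracle (the expected output).
interface Mine {
  input: number;          // test input encoded by the mine
  expectedOutput: number; // test oracle encoded by the mine
}

type CritterCode = (input: number) => number;

const correctCode: CritterCode = (x) => x * 2;   // reference behaviour
const buggyCode: CritterCode = (x) => x * 2 + 1; // mutant with an off-by-one bug

// The mine "detonates" when the observed output deviates from the oracle,
// thereby separating critters running buggy code from those running correct code.
function detectsBug(mine: Mine, critter: CritterCode): boolean {
  return critter(mine.input) !== mine.expectedOutput;
}

const mine: Mine = { input: 3, expectedOutput: 6 };
console.log(detectsBug(mine, correctCode)); // false: correct critters pass safely
console.log(detectsBug(mine, buggyCode));   // true: buggy critters are caught
```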

    Construction system for a multi-story temporary accommodation building in timber for Sochi 2014

    No full text
    Alternative title according to the author's own translation. A building for the 2014 Olympic Winter Games in Sochi, for the men's and women's alpine skiing events, designed under the guidelines of the IOC (International Olympic Committee); use for the Winter Paralympics was also considered (barrier-free). The entire structure is built in timber, which allows fast assembly and disassembly (temporary).

    An IDE Plugin for Gamified Continuous Integration

    Full text link
    Interruptions and context switches resulting from meetings, urgent tasks, emails, and queries from colleagues contribute to productivity losses in developers' daily routines. This is particularly challenging for tasks like software testing, which are already perceived as less enjoyable, prompting developers to seek distractions. To mitigate this, gamification can be applied to testing activities to enhance motivation for test writing. One such gamification tool is Gamekins, which integrates challenges, quests, achievements, and leaderboards into the Jenkins CI (continuous integration) platform. However, as Gamekins is typically accessed through a browser, it introduces a context switch of its own. This paper presents an IntelliJ plugin designed to seamlessly integrate Gamekins' gamification elements into the IDE, aiming to minimize context switches and boost developer motivation for test writing.
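    As a minimal sketch of the plugin's architecture, the snippet below polls the CI server for the developer's current challenges so they can be rendered in an IDE tool window instead of the browser. The endpoint and response shape are hypothetical illustrations, not Gamekins' actual REST API.

```typescript
// Poll the CI server for open challenges so they can be shown inside the IDE,
// avoiding the browser context switch. Endpoint and payload are hypothetical.
interface Challenge {
  title: string;
  points: number;
}

async function fetchChallenges(ciUrl: string, job: string): Promise<Challenge[]> {
  const response = await fetch(`${ciUrl}/job/${encodeURIComponent(job)}/gamekins/challenges`);
  if (!response.ok) {
    throw new Error(`CI server returned HTTP ${response.status}`);
  }
  return (await response.json()) as Challenge[];
}

// The plugin would call this periodically and refresh its tool window, e.g.:
// setInterval(() => fetchChallenges(url, job).then(render), 60_000);
```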

    Replication package for paper "Gamifying a Software Testing Course with Continuous Integration"

    No full text
    - Source code of the coverage-guided fuzzer (folder coverage-guided-fuzzing)
    - Source code of the line-coverage analyzer (folder line-coverage-analyser)
    - The scripts to analyze the data from the use of Blinded (folder Evaluation)
    - The examples used for the output of the Analyzer (folder examples)
    - The data/statistics from the use of Blinded for both projects (coverage.txt and fuzzer.txt)
    - The meta information of the Analyzer 2019, Analyzer 2022 and Fuzzer for comparison (generalInformation_three.csv)
    - The meta information of the Analyzer 2022 and Fuzzer for comparison (generalInformation_both.csv)
    - The meta information of the Analyzer 2019 and Analyzer 2022 for comparison (generalInformation_lca.csv)
    - The script for comparison between the Analyzer 2019, Analyzer 2022 and Fuzzer (evaluation_three.R)
    - The script for comparison between the Analyzer 2022 and Fuzzer (evaluation.R)
    - The script for comparison between the Analyzer 2019 and Analyzer 2022 (comparison.R)
    - The survey questions (survey.pdf)
    - The survey answers (survey_testing.csv)

    IntelliGame in Action: An Experience Report on Gamifying JavaScript Unit Tests

    Full text link
    This paper investigates the integration and assessment of IntelliGame, a gamification plugin initially designed for Java development, within the realm of JavaScript unit testing. We aim to verify the generalizability of IntelliGame to JavaScript development and to provide valuable insights into the experiment's design. For this, we first customize IntelliGame for JavaScript, then conduct a controlled experiment involving 152 participants using the Jest testing framework, and finally examine its influence on testing behavior and the overall developer experience. The findings from this study provide valuable insights for improving JavaScript testing methodologies through the incorporation of gamification.
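    For context, the study's participants wrote ordinary Jest unit tests; the following is a minimal, hypothetical example (in TypeScript) of the kind of test involved. The clamp function and its cases are illustrative, not taken from the experiment.

```typescript
// A small function under test (illustrative, not from the study).
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// A typical Jest suite: describe() groups cases, expect() asserts outcomes.
describe("clamp", () => {
  test("returns the value when it lies within the range", () => {
    expect(clamp(5, 0, 10)).toBe(5);
  });

  test("saturates at the bounds", () => {
    expect(clamp(-3, 0, 10)).toBe(0);
    expect(clamp(42, 0, 10)).toBe(10);
  });
});
```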

    Replication package for paper "An Empirical Evaluation of Manually Created Equivalent Mutants"

    No full text
    # Experiment Setup, Data, and Results for the Paper 'Empirical Evaluation of Manually Created Mutants'

    The experiment and analysis consist of two big steps:
    1. Extracting data (CUTs, mutants, tests) from the dataset, executing the available test-vs-mutant combinations, and executing equivalence detection
    2. Analysing the generated data

    The first part is done via the `[0-9]{2}_` scripts in this directory. Those generate the `*.csv` files below the `results/data/` directory. The scripts require a basic Linux bash environment (including commands like `tar`, `sed`, and `zcat`) and `podman` for importing/extracting from the SQL dumps and for the test-vs-mutant execution. The `config.yml` defines things like input and output directories and which datasets, CUTs, and games to include in the data extraction and execution. The input datasets can be found in the `raw-data` directory and, for each dataset, consist of the compressed SQL dump (`.sql.gz`) of the Code Defenders database and the archive (`.tar.gz`) from the Code Defenders data directory.

    The second part is done via the `experiment.ipynb` Jupyter Notebook. For this, the `environment.yml` can be used with `conda` to install the necessary packages.

    ## Directory Structure

    README.md         # This file
    environment.yml   # Conda environment definition
    experiment.ipynb  # Jupyter Notebook for analysing results
    config.yml        # Configuration for the experiment (import/execution)
    [0-9]{2}_*        # Scripts to execute different parts of the experiment
    dependencies/     # Bunch of dependencies
      proguard/       # Executable and config of the used optimizer for TCE+: ProGuard
      test/           # Executable and dependencies for execution of the Code Defenders tests
    raw-data/         # Contains data directory and database archive
      data.tar.gz
      database.sql.gz
      data/           # Extracted 'data.tar.gz' from the '01_datasets-extract-data-dirs' script
    results/
      files/          # The extracted/analyzed CUTs, mutants, and tests
        codeundertest/
        mutants/
        tests/
      data/           # The CSV data from the analysis scripts
        [0-9]{2}_*.csv
      static/         # Additional static data
        mutants_manual_analysed/  # The results of the manual equivalent mutant analysis/investigation
    scripts/          # Bunch of scripts
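    To illustrate the classification step this pipeline performs, the sketch below (TypeScript, with hypothetical types) marks a mutant as killed if at least one test fails on it; mutants that no test kills are the candidates for the manual equivalence analysis.

```typescript
type Verdict = "pass" | "fail";

interface ExecutionResult {
  mutantId: string;
  testId: string;
  verdict: Verdict; // "fail" means the test detected (killed) the mutant
}

// Partition mutants into "killed" and candidates for equivalence analysis:
// a mutant survives only if no test in the suite distinguishes it.
function classifyMutants(
  results: ExecutionResult[],
): Map<string, "killed" | "equivalence-candidate"> {
  const killed = new Set(
    results.filter((r) => r.verdict === "fail").map((r) => r.mutantId),
  );
  const classification = new Map<string, "killed" | "equivalence-candidate">();
  for (const { mutantId } of results) {
    classification.set(mutantId, killed.has(mutantId) ? "killed" : "equivalence-candidate");
  }
  return classification;
}
```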