8,461 research outputs found

    Having Fun in Learning Formal Specifications

    Full text link
    There are many benefits in providing formal specifications for our software. However, teaching students to do this is not always easy, as courses on formal methods are often experienced as dry by students. This paper presents a game called FormalZ that teachers can use to introduce some variation in their classes. Students can have some fun playing the game and, while doing so, also learn the basics of writing formal specifications in the form of pre- and post-conditions. Unlike existing software-engineering-themed educational games such as Pex and Code Defenders, FormalZ takes a deep gamification approach, in which playing takes a more central role in order to generate more engagement. This short paper presents our work in progress: the first implementation of FormalZ along with the results of a preliminary user evaluation. This implementation is functionally complete and tested, but the polishing of its user interface is still future work.
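
    As an illustration of the kind of specification FormalZ teaches (a minimal sketch, not taken from the paper; the function and its contract are invented), a pre- and post-condition can be expressed as executable assertions:

    # Hypothetical pre-/post-condition specification written as executable
    # assertions; the function and its contract are invented for illustration.
    def withdraw(balance: float, amount: float) -> float:
        # Pre-condition: the requested amount is positive and covered by the balance.
        assert 0 < amount <= balance, "pre-condition violated"
        new_balance = balance - amount
        # Post-condition: the balance decreases by exactly `amount` and stays non-negative.
        assert new_balance == balance - amount and new_balance >= 0, "post-condition violated"
        return new_balance

    print(withdraw(100.0, 30.0))  # 70.0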

    Recent Trends in Software Testing Education: A Systematic Literature Review

    Get PDF
    Testing is a critical aspect of software development. Far too often, software is released with critical faults. However, testing is often considered tedious and boring. Unfortunately, many graduates might join the workforce without having had any education in software testing, which exacerbates the problem even further. Therefore, teaching software testing as part of a university degree in software engineering is very important. However, how to teach software testing effectively in a way that successfully motivates students remains an open challenge. In this paper, we have carried out a systematic literature review on the topic of teaching software testing. We analysed and reviewed 30 papers that were published between 2013 and 2017. The review points to a few different trends, such as the use of gamification to make the teaching of software testing less tedious.

    A Multi-Gene Genetic Programming Application for Predicting Students Failure at School

    Full text link
    Accurately predicting student failure rate (SFR) at school remains a core problem faced by many in the educational sector. Existing procedures for forecasting SFR are rigid and often require data scaling or conversion into binary form, as in the logistic model, which may lead to loss of information and effect-size attenuation. In addition, the high number of factors, incomplete and unbalanced datasets, and the black-box nature of Artificial Neural Networks and fuzzy logic systems expose the need for more efficient tools. The application of Genetic Programming (GP) currently holds great promise and has produced positive results in different sectors. In this regard, this study developed GPSFARPS, a software application that provides a robust solution to the prediction of SFR using an evolutionary algorithm known as multi-gene genetic programming. The approach is validated by feeding a test data set to the evolved GP models. Results obtained from GPSFARPS simulations show its ability to evolve a suitable failure-rate expression with fast convergence at 30 generations out of a specified maximum of 500. The multi-gene system was also able to simplify the evolved model expression and accurately predict student failure rate using a subset of the original expression.

    Comment: 14 pages, 9 figures, journal paper. arXiv admin note: text overlap with arXiv:1403.0623 by another author.
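
    For readers unfamiliar with GP-based prediction, the sketch below shows a minimal single-gene symbolic-regression setup with the gplearn library on synthetic data; it is a stand-in for, not a reproduction of, the paper's multi-gene GPSFARPS system, and all data and parameter values are illustrative.

    # Minimal GP-based regression sketch using gplearn (single-gene symbolic
    # regression); NOT the GPSFARPS system. Data and settings are invented.
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 3))                     # three hypothetical student factors
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 - 0.2 * X[:, 2]   # synthetic "failure rate"

    est = SymbolicRegressor(
        population_size=500,
        generations=30,               # the paper reports convergence around generation 30
        function_set=("add", "sub", "mul", "div"),
        parsimony_coefficient=0.001,  # penalises bloated expressions
        random_state=0,
    )
    est.fit(X, y)
    print(est._program)               # the evolved expression approximating the target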

    Mutation testing and self/peer assessment: analyzing their effect on students in a software testing course

    Get PDF
    Testing is a crucial activity in the development of software systems. With the increasing complexity of software projects, the industry needs to incorporate graduates with adequate testing skills and preparation in this field. A challenge in software testing education is making students perceive the benefits of writing tests and assessing their quality with advanced testing techniques. In this paper, we present an experience integrating both mutation testing and self/peer assessment, two of the techniques most commonly used to that end in the past, into a software testing course over three years. This experience allowed us to analyse the effect of applying these strategies on the students' perception of their manually written test suites. Notably, the computation of the mutation score significantly undermined their initial expectations of the test suites they had developed. The application of peer testing also helped them estimate the relative quality of two comparable test suites, as we found a notable correspondence with their respective mutation coverage. A more in-depth analysis revealed that the students' test suites with more test cases did not always achieve the highest scores, that students found their own tests more readable, and that they tended to cover the basic operations while forgetting about more advanced features. An opinion survey confirmed the impact that the use of mutants had on their perception of testing, and the students mostly supported paying greater attention to testing concepts in software engineering degree plans.

    The work was partially funded by the European Commission (FEDER), the Spanish Ministry of Science, Innovation and Universities under the project FAME (RTI2018-093608-B-C33), the European project ASSETs (612678-EPP-1-2019-1-IT-EPPKA2-SSA-B), and the University of Cádiz.
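
    To make the notion of a mutation score concrete, the sketch below applies two hand-written mutants to a tiny function and counts how many of them a test suite kills; the function, mutants, and tests are invented and unrelated to the course described in the paper.

    # Generic illustration of a mutation score: killed mutants / total mutants.
    def price_with_discount(price, rate):
        return price - price * rate     # original implementation

    def mutant_plus(price, rate):
        return price + price * rate     # mutant 1: '-' replaced by '+'

    def mutant_no_rate(price, rate):
        return price - price            # mutant 2: discount term dropped

    def test_suite(impl):
        # Returns True if every test passes for the given implementation.
        return impl(100, 0.2) == 80 and impl(50, 0.0) == 50

    mutants = [mutant_plus, mutant_no_rate]
    killed = sum(1 for m in mutants if not test_suite(m))
    print(f"mutation score: {killed}/{len(mutants)}")  # 2/2: both mutants are killed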

    Code Critters: A Block-Based Testing Game

    Full text link
    Learning to program has become common in schools, higher education and individual learning. Although testing is an important aspect of programming, it is often neglected in education due to a perceived lack of time and knowledge, or simply because testing is considered less important or fun. To make testing more engaging, we therefore introduce Code Critters, a Tower Defense game based on testing concepts: the aim of the game is to place magic mines along the route taken by small "critters" from their home to a tower, such that the mines distinguish critters executing correct code from those executing buggy code. Code is shown and edited using a block-based language to make the game accessible to younger learners. The mines encode test inputs as well as test oracles, thus making testing an integral and fun component of the game.
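
    Outside the block-based setting of the game, the underlying idea of a mine, a concrete test input paired with an oracle that separates correct from buggy code, can be sketched as follows; both code variants and the chosen input are hypothetical.

    # Hypothetical sketch of what a Code Critters "mine" encodes:
    # a fixed test input plus an oracle on the expected output.
    def correct_speed(distance, time):
        return distance / time

    def buggy_speed(distance, time):
        return distance / (time + 1)     # seeded bug: off-by-one divisor

    def mine(critter_code):
        # Input (10, 2) with expected output 5.
        return critter_code(10, 2) == 5

    print(mine(correct_speed))  # True  -> critter running correct code passes
    print(mine(buggy_speed))    # False -> critter running buggy code is caught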

    CODE DEFENDERS: A Mutation Testing Game

    Get PDF
    Mutation testing is endorsed by software testing researchers for its unique capability of providing pragmatic estimates of a test suite's fault detection capability, and for guiding testers in improving their test suites. In practice, however, widespread adoption of mutation testing is hampered because any non-trivial program results in huge numbers of mutants, many of which are either trivial or equivalent, and thus useless. Trivial mutants reduce the motivation of developers to trust and use the technique, while equivalent mutants are frustratingly difficult to handle. These problems are exacerbated by insufficient education on testing, which often means that mutation testing is not well understood in practice. These are examples of the types of problems that gamification aims to overcome by making such tedious activities competitive and entertaining. In this paper, we introduce the first steps towards building Code Defenders, a mutation testing game where players take the role of an attacker, who aims to create the most subtle non-equivalent mutants, or a defender, who aims to create strong tests to kill these mutants. The benefits of such an approach are manifold: the game can serve an educational role by engaging learners in mutation testing activities in a fun way. Experienced players will produce strong test suites, capable of detecting even the most subtle bugs that other players can conceive. Equivalent mutants are handled by making them a special part of the gameplay, where points are at stake in duels between attackers and defenders.
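
    The distinction between a subtle non-equivalent mutant and an equivalent one, which is central to the Code Defenders gameplay, can be illustrated with an invented example; the code under test, the mutants, and the test are not taken from the paper.

    # Invented illustration of the attacker/defender roles in Code Defenders.
    def is_adult(age):
        return age >= 18                 # original code under test

    def subtle_mutant(age):
        return age > 18                  # non-equivalent: differs only at age == 18

    def equivalent_mutant(age):
        return not (age < 18)            # equivalent: same behaviour for every input

    def defender_test(impl):
        # A strong boundary-value test that kills the subtle mutant.
        return impl(18) is True

    print(defender_test(subtle_mutant))      # False -> mutant killed, defender scores
    print(defender_test(equivalent_mutant))  # True  -> no test can ever kill this one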

    Gamifying a Software Testing Course with Continuous Integration

    Full text link
    Testing plays a crucial role in software development, and it is essential for software engineering students to receive proper testing education. However, motivating students to write tests and use automated testing during software development can be challenging. To address this issue and enhance student engagement in testing while they write code, we propose to incentivize students to test more by gamifying continuous integration. For this, we use Gamekins, a tool that is seamlessly integrated into the Jenkins continuous integration platform and uses game elements based on commits to the source code repository: developers can earn points by completing test challenges and quests generated by Gamekins, compete with other developers or teams on a leaderboard, and receive achievements for their test-related accomplishments. In this paper, we present our integration of Gamekins into an undergraduate-level course on software testing. We observe a correlation between how students test their code and their use of Gamekins, as well as a significant improvement in the accuracy of their results compared to a previous iteration of the course without gamification. As a further indicator of how this approach improves testing behavior, the students reported enjoyment in writing tests with Gamekins.
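
    As a rough sketch of the kind of game element described above (hypothetical data structures and rules, not Gamekins' actual implementation), a coverage-based test challenge and a leaderboard could look like this:

    # Hypothetical coverage challenge and leaderboard, loosely inspired by
    # Gamekins; names, rules, and point values are invented.
    from dataclasses import dataclass

    @dataclass
    class CoverageChallenge:
        file: str
        line: int
        points: int = 10

        def completed(self, covered_lines: set) -> bool:
            # Solved once the targeted line is covered by some test.
            return self.line in covered_lines

    leaderboard = {"alice": 0, "bob": 0}
    challenge = CoverageChallenge(file="Cart.java", line=42)

    covered_after_commit = {40, 41, 42, 43}   # coverage reported by the CI build (invented)
    if challenge.completed(covered_after_commit):
        leaderboard["alice"] += challenge.points

    print(sorted(leaderboard.items(), key=lambda kv: -kv[1]))  # [('alice', 10), ('bob', 0)]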

    Lab Package: Mutation Testing

    Get PDF
    The aim of this Bachelor's thesis is to create new lab materials on the topic of mutation testing for the course "Software Testing (MTAT.03.159)" taught at the University of Tartu. The thesis gives an overview of the mutation testing technique, describes the materials used in the 2017 spring semester labs, and reports on the students' evaluation of the lab package. Possible future improvements based on the students' feedback are also presented.