
    Bringing Light into the Dark - Improving Students’ Black-Box Testing Competencies using Game-Design Elements

    As software becomes increasingly complex, there is a growing need to enhance quality assurance in software engineering. However, the lack of qualified human resources is a barrier to performing software testing activities in software companies. At the same time, software testing is often considered a tedious task and is frequently not done at the necessary level of detail, e.g., when designing test cases. It is nevertheless crucial for novice programmers and testers to acquire and improve their testing competencies and to apply testing techniques such as black-box testing. Teaching software testing is often based on theoretical instruction, resulting in limited practical experience. As a result, students may not develop the necessary testing mindset, highlighting the need for more extensive software testing education. To address this issue, this paper follows a design science research approach to implement a gamified learning system that promotes black-box testing competencies, and reports empirical insights from a field test.
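    The abstract mentions black-box testing and test case design only in passing; the following is a minimal, hypothetical sketch of one common black-box technique (boundary value analysis over equivalence classes). The `grade` function, its boundaries, and the test cases are illustrative assumptions and are not taken from the paper or its learning system.

```python
# Hypothetical system under test: maps a 0-100 score to pass/fail.
# (Illustrative only; not part of the paper's gamified learning system.)
def grade(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Black-box test cases derived from the specification alone (no knowledge of
# internals): class boundaries plus one representative per equivalence class.
cases = [(-1, ValueError), (0, "fail"), (49, "fail"),
         (50, "pass"), (100, "pass"), (101, ValueError)]

for score, expected in cases:
    try:
        result = grade(score)
    except ValueError:
        result = ValueError
    assert result == expected, f"score={score}: got {result}, expected {expected}"
print("all black-box test cases passed")
```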

    On Reducing Undesirable Behavior in Deep Reinforcement Learning Models

    Deep reinforcement learning (DRL) has proven extremely useful in a large variety of application domains. However, even successful DRL-based software can exhibit highly undesirable behavior. This is because DRL training is based on maximizing a reward function, which typically captures general trends but cannot precisely capture, or rule out, certain behaviors of the system. In this paper, we propose a novel framework aimed at drastically reducing the undesirable behavior of DRL-based software while maintaining its excellent performance. In addition, our framework can assist in providing engineers with a comprehensible characterization of such undesirable behavior. Under the hood, our approach is based on extracting decision tree classifiers from erroneous state-action pairs and then integrating these trees into the DRL training loop, penalizing the system whenever it performs an error. We provide a proof-of-concept implementation of our approach and use it to evaluate the technique on three significant case studies. We find that our approach can extend existing frameworks in a straightforward manner and incurs only a slight overhead in training time. Further, it causes only a very slight hit to performance, or even, in some cases, improves it, while significantly reducing the frequency of undesirable behavior.
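    The abstract describes the approach only at a high level; below is a minimal sketch of that general idea under stated assumptions: a decision tree trained on logged erroneous state-action pairs flags undesirable pairs, and the reward is penalized whenever a pair is flagged. The synthetic data, the penalty value, and the `shaped_reward` helper are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: reward shaping driven by a decision tree fit on erroneous
# state-action pairs (assumed setup; not the paper's code).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical logged data: each row is a state-action pair, label 1 = undesirable.
X = np.random.rand(200, 5)                 # 4 state features + 1 action feature
y = (X[:, 0] + X[:, 4] > 1.4).astype(int)  # placeholder rule standing in for real error logs

# Extract a small, interpretable classifier characterizing the undesirable behavior.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

def shaped_reward(state, action, env_reward, penalty=1.0):
    """Subtract a penalty whenever the tree flags the state-action pair."""
    pair = np.concatenate([state, [action]]).reshape(1, -1)
    if tree.predict(pair)[0] == 1:
        return env_reward - penalty
    return env_reward

# Usage inside a (hypothetical) training loop step:
state, action, env_reward = np.random.rand(4), 0.7, 1.0
print(shaped_reward(state, action, env_reward))
```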