
    How Much Are Machine Assistants Worth? Willingness to Pay for Machine Learning-Based Software Testing

    Machine Learning (ML) technologies have become the foundation of a plethora of products and services. While the economic potential of such ML-infused solutions is irrefutable, uncertainty remains about pricing. Software testing is one area that benefits from ML services assisting in the creation of test cases, a task that is both complex and demands human-like outputs. Yet little is known about users' willingness to pay, which inhibits suppliers' incentive to develop suitable tools. To provide insights into desired features and willingness to pay for such ML-based tools, we perform a choice-based conjoint analysis with 119 participants in Germany. Our results show that a high level of accuracy is particularly important for users, followed by ease of use and integration into existing environments. We thus not only guide future developers on which attributes to prioritize but also show which characteristics of ML-based services are relevant for future research.

    Digitization and Lean Customer Experience Management: success factors and conditions, pitfalls and failures

    In recent years, companies and practitioners have increasingly adopted Lean Customer Experience Management (CEM), but often with abstract and hard-to-grasp approaches. Lean CEM is hardly rooted in the academic literature, providing an excellent opportunity to investigate the term's theoretical and practical validity further. We discuss Lean CEM principles, best practices, success factors and conditions, and pitfalls and failures at the intersection of Digitization, enhanced by Artificial Intelligence (AI), Lean Management, and CEM. This work is a first step of a design science research project, consisting of a literature and practice review, and provides insights for design propositions and application instructions for a Digitized Lean CEM.

    Artificial Intelligence helps making Quality Assurance processes leaner

    Lean processes focus on doing only necessary things in an efficient way. Artificial Intelligence and Machine Learning offer new opportunities to optimize processes. The presented approach demonstrates an improvement of the test process by using Machine Learning as a support tool for test management. The scope is the semi-automation of regression test selection. The proposed lean testing process uses Machine Learning as a supporting machine while keeping the human test manager in charge of adequate test case selection.

    1 Introduction. Many established long-running projects and programs execute regression tests during the release tests. The regression tests are the part of the release test that ensures functionality from past releases still works in the new release. In many projects, a significant part of these regression tests is not automated and is therefore executed manually. Manual tests are expensive and time-intensive [1], which is why often only a relevant subset of all possible regression tests is executed in order to save time and money. Depending on the software process, different approaches can be used to identify the right set of regression tests. The source-code file level is a frequent entry point for this identification [2]. Advanced approaches combine different file-level methods [3]. To handle black-box tests, methods like [4] or [5] can be used for test case prioritization. To decide which tests can be skipped, a relevance ranking of the tests in a regression test suite is needed. Based on its relevance, a test is in or out of the regression test set for a specific release. This decision is a task of the test manager, supported by experts. The task can be time-consuming for big regression test suites (often a 4- to 5-digit number of tests) because the selection is specific to each release.

    The trend is toward continuous prioritization [6], which this work aims to support with the presented ML-based approach for black-box regression test case prioritization. Any regression test selection is made based on release-specific changes. Changes can be new or deleted code resulting from refactoring or the implementation of new features. But changes to external systems connected by interfaces also have to be considered.
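    The selection step described in this abstract — ranking tests by relevance and cutting the suite to an affordable subset while the manager retains the final decision — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the stand-in relevance scores are hypothetical; in the described approach the scores would come from a trained ML model fed with release-specific change data.

    ```python
    # Hypothetical sketch: rank regression tests by a predicted relevance
    # score and propose the top fraction for the release. The test manager
    # reviews the proposal; the tool only prioritizes.

    def select_regression_tests(scores, budget_fraction=0.3):
        """Return test IDs ranked by relevance, truncated to the budget.

        scores: dict mapping test ID -> predicted relevance (higher = more relevant)
        budget_fraction: share of the suite the team can afford to run
        """
        ranked = sorted(scores, key=scores.get, reverse=True)
        budget = max(1, round(len(ranked) * budget_fraction))
        return ranked[:budget]

    # Tiny example suite with stand-in relevance scores.
    suite = {"T1": 0.92, "T2": 0.15, "T3": 0.78, "T4": 0.40, "T5": 0.05}
    proposal = select_regression_tests(suite, budget_fraction=0.4)
    print(proposal)  # -> ['T1', 'T3']
    ```

    Keeping the human in charge, as the abstract proposes, means the output is a ranked proposal rather than a final test plan; the manager can still add or drop individual tests based on expert knowledge.
    
    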
