
    Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing

    In recent years, the use of crowdsourcing has gained considerable attention in the domain of software engineering. One key aspect of software development is the testing of software. The literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for many kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, it gives a comprehensive overview of findings in CST research and provides a classification into different software testing types.

    Agile Leadership - A Comparison of Agile Leadership Styles

    Leadership has been a focus of research in the social sciences since the early 1930s. However, no generally valid theory exists to date. In recent years, theories relating to agile leadership have increasingly emerged. The aim of this paper is to give an overview of the current state of research on agile leadership. For this purpose, a systematic literature analysis is conducted. The different terms used in the context of agile leadership are narrowed down by means of selection criteria. Furthermore, characteristics of agile leadership are analyzed and consolidated. This results in a catalogue of criteria with which the selected leadership styles are compared. The evaluation shows that there are overlaps between the styles, which can also be identified in the research.

    Conceptualizing the Agile Work Organization: A systematic literature review, framework and research agenda

    The ongoing discussion of the Agile Work Organization (AO) in research and practice permeates a multitude of research areas. However, no clear conceptualization of the AO has been provided. In this paper, we conduct a Systematic Literature Review (SLR) to investigate what constitutes and defines the AO. The SLR reveals three dimensions in the research field of the AO: Strategic, Functional, and Operative Agility. These dimensions define the AO through distinct capabilities that influence and enhance the AO's overall goal of adaptation and flexibility. Building on the insights from the review, we develop propositions that describe the interrelationships between the dimensions and their relation to the AO. Furthermore, implications for academia and practice as well as a research agenda are provided in order to trigger and guide further discussions and research surrounding the AO.

    The Imprint of Design Science in Information Systems Research: An Empirical Analysis of the AIS Senior Scholars’ Basket

    Design Science (DS) has become an established research paradigm in Information Systems (IS) research. However, existing research still considers it a challenge to publish DS contributions in top IS journals, due to the rather strict guidelines that DS publications are expected to follow. Against this backdrop, we intend to highlight the myriad of possible configurations and empirically describe the status quo of DS publications in IS. Based on a Systematic Literature Review (SLR) and a conceptually derived analysis frame, we empirically analyze DS papers published in the AIS Senior Scholars’ Basket. Thereby, we intend to contribute conceptually and descriptively to the knowledge base of DS by providing insights based on empirical evidence to aid and guide the discussion towards the advancement of the field. Overall, this shall lay the descriptive foundation for creating prescriptive knowledge on DS in IS by proposing and opening future research avenues.

    Can Laymen Outperform Experts? The Effects of User Expertise and Task Design in Crowdsourced Software Testing

    In recent years, crowdsourcing has increasingly gained attention as a powerful sourcing mechanism for problem-solving in organizations. Depending on the type of activity addressed by crowdsourcing, the complexity of the tasks and the role of the crowdworkers may differ substantially. It is crucial that tasks are designed and allocated according to the capabilities of the targeted crowds. In this paper, we outline our research in progress, which is concerned with the effects of task complexity and user expertise on performance in crowdsourced software testing. We conduct an experiment and gather empirical data from expert and novice crowds that perform different software testing tasks of varying degrees of complexity. Our expected contribution is twofold. First, for crowdsourcing in general, we aim at providing valuable insights for the process of framing and allocating tasks to crowds in ways that increase the crowdworkers’ performance. Second, we intend to improve the configuration of crowdsourced software testing initiatives. More precisely, the results are expected to show practitioners which types of testing tasks should be assigned to which group of dedicated crowdworkers. In this vein, we deliver valuable decision support for both crowdsourcers and intermediaries to enhance the performance of their crowdsourcing initiatives.

    Towards Successful Crowdsourcing Projects: Evaluating the Implementation of Governance Mechanisms

    The last decade has witnessed the proliferation of crowdsourcing in various academic domains, including strategic management, computer science, and IS research. Numerous companies have drawn on this concept and leveraged the wisdom of crowds for various purposes. However, not all crowdsourcing projects turn out to be a striking success. Hence, research and practice are on the lookout for the main factors influencing the success of crowdsourcing projects. In this context, proper governance is considered the key to success by several researchers. However, little is known about governance mechanisms and their impact on project outcomes. We address this issue by means of a multiple case analysis in which we examine crowdsourcing projects on collaboration-based and/or competition-based crowdsourcing systems. Our initial study reveals that task definition mechanisms and quality assurance mechanisms have the highest impact on the success of crowdsourcing projects, whereas task allocation mechanisms are less decisive.

    Crowdsourcing in Software Development: A State-of-the-Art Analysis

    As software development cycles become shorter and shorter while software complexity increases and IT budgets stagnate, many companies are looking for new ways of acquiring and sourcing knowledge outside their boundaries. One promising approach to aggregating know-how and managing large distributed teams in software development is crowdsourcing. This paper analyzes the existing body of knowledge regarding crowdsourcing in software development. As a result, we propose a fundamental framework with five dimensions to structure the existing insights on crowdsourcing in the context of software development and to derive a research agenda to guide further research.

    When Is Crowdsourcing Advantageous? The Case of Crowdsourced Software Testing

    Crowdsourcing describes a novel mode of value creation in which organizations broadcast tasks that were previously performed in-house to a large number of Internet users who perform these tasks. Although the concept has gained maturity and has proven to be an alternative way of problem-solving, an organizational cost-benefit perspective has largely been neglected by existing research. More specifically, it remains unclear when crowdsourcing is advantageous in comparison to alternative governance structures such as in-house production. Drawing on crowdsourcing literature and transaction cost theory, we present two case studies from the domain of crowdsourced software testing. We systematically analyze two organizations that applied crowdtesting to test a mobile application. As both organizations tested the application via crowdtesting and their traditional in-house testing, we are able to relate the effectiveness of crowdtesting and the associated costs to the effectiveness and costs of in-house testing. We find that crowdtesting is comparable in terms of testing quality and costs, but provides large advantages in terms of speed, heterogeneity of testers, and user feedback as added value. We contribute to the crowdsourcing literature by providing first empirical evidence about the instances in which crowdsourcing is an advantageous way of problem-solving.

    How to Systematically Conduct Crowdsourced Software Testing? Insights from an Action Research Project

    Traditional testing approaches are becoming less feasible, both economically and practically, for several reasons, such as an increasingly dynamic environment, shorter product lifecycles, cost pressure, and a fast-growing and increasingly segmented hardware market. With the surge towards new modes of value creation, crowdsourced software testing seems to be a promising solution to these problems and has already been applied in various software testing contexts. However, the literature has so far mostly neglected the perspective of an organization intending to crowdsource tasks. In this study, we present an ongoing action research project with a consortium of six companies and a preliminary model for crowdsourced software testing in organizations. The model unfolds the necessary activities, process changes, and accompanying roles for crowdsourced software testing, enabling organizations to systematically conduct such initiatives, and illustrates how test departments can use crowdsourcing as a new tool.