WHEN IS CROWDSOURCING ADVANTAGEOUS? THE CASE OF CROWDSOURCED SOFTWARE TESTING
Crowdsourcing describes a novel mode of value creation in which organizations broadcast tasks that were previously performed in-house to a large number of Internet users who then perform these tasks. Although the concept has matured and has proven to be an alternative way of problem-solving, an organizational cost-benefit perspective has largely been neglected by existing research. More specifically, it remains unclear when crowdsourcing is advantageous compared to alternative governance structures such as in-house production. Drawing on the crowdsourcing literature and transaction cost theory, we present two case studies from the domain of crowdsourced software testing. We systematically analyze two organizations that applied crowdtesting to test a mobile application. As both organizations tested the application via crowdtesting as well as their traditional in-house testing, we are able to relate the effectiveness of crowdtesting and the associated costs to the effectiveness and costs of in-house testing. We find that crowdtesting is comparable in terms of testing quality and costs, but provides large advantages in terms of speed, heterogeneity of testers, and user feedback as added value. We contribute to the crowdsourcing literature by providing first empirical evidence about the instances in which crowdsourcing is an advantageous way of problem-solving.
An Abstract Formal Basis for Digital Crowds
Crowdsourcing, together with its related approaches, has become very popular
in recent years. All crowdsourcing processes involve the participation of a
digital crowd, a large number of people that access a single Internet platform
or shared service. In this paper we explore the possibility of applying formal
methods, typically used for the verification of software and hardware systems,
in analysing the behaviour of a digital crowd. More precisely, we provide a
formal description language for specifying digital crowds. We represent digital
crowds in which the agents do not directly communicate with each other. We
further show how this specification can provide the basis for sophisticated
formal methods, in particular formal verification.
Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing
In recent years, the use of crowdsourcing has gained a lot of attention in the domain of software engineering. One key aspect of software development is the testing of software. The literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for manifold kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, the paper gives a comprehensive overview of findings in CST research and provides a classification into different software testing types.
CAN LAYMEN OUTPERFORM EXPERTS? THE EFFECTS OF USER EXPERTISE AND TASK DESIGN IN CROWDSOURCED SOFTWARE TESTING
In recent years, crowdsourcing has increasingly gained attention as a powerful sourcing mechanism for problem-solving in organizations. Depending on the type of activity addressed by crowdsourcing, the complexity of the tasks and the role of the crowdworkers may differ substantially. It is crucial that the tasks are designed and allocated according to the capabilities of the targeted crowds. In this paper, we outline our research in progress, which is concerned with the effects of task complexity and user expertise on performance in crowdsourced software testing. We conduct an experiment and gather empirical data from expert and novice crowds that perform different software testing tasks of varying degrees of complexity. Our expected contribution is twofold. For crowdsourcing in general, we aim at providing valuable insights for the process of framing and allocating tasks to crowds in ways that increase the crowdworkers’ performance. Secondly, we intend to improve the configuration of crowdsourced software testing initiatives. More precisely, the results are expected to show practitioners what types of testing tasks should be assigned to which group of dedicated crowdworkers. In this vein, we deliver valuable decision support for both crowdsourcers and intermediaries to enhance the performance of their crowdsourcing initiatives.
Gamifying Software Testing – A Focus on Strategy & Tools Development
This study introduces new software testing strategies and tools aimed at creating a more engaging and rewarding environment for software testers. For this purpose, gamification has been selected as a potential solution to raise the performance of testers. Empirical experiments were conducted to validate key factors and metrics influencing the design and development of a gamified software testing system.
Leveraging the Power of Crowds: Automated Test Report Processing for The Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks, such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowd workers are given testing tasks to perform and submit their feedback in the form of test reports. One of the key advantages of crowdsourced testing is that it provides software engineers with domain knowledge and feedback from a large number of real users. Thanks to the diverse software and hardware settings of these users, engineers can detect bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and must support diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task. This dissertation presents a series of techniques, tools, and experiments to assist in crowdsourced report processing. These techniques improve the task in three ways: (1) prioritizing crowdsourced reports to help engineers find as many unique bugs as possible, as quickly as possible; (2) grouping crowdsourced reports to help engineers identify representative ones in a short time; and (3) summarizing duplicate reports to give engineers a concise and accurate understanding of a group of reports. In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection.
This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and to avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that may be more likely to be fault-revealing based on past observations. Together, these two strategies form our technique to prioritize test reports in crowdsourced testing. Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, so text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text and the well-defined screenshots of activity views within mobile applications motivate me to propose a novel technique that uses image understanding for multi-objective test-report prioritization. This technique employs Spatial Pyramid Matching (SPM) to measure the similarity of screenshots and applies natural-language processing to measure the distance between the texts of test reports. Next, I design and implement CTRAS, a novel approach that leverages duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS automatically aggregates duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data by collaborating with several companies. The results show that my techniques can improve both the efficiency and effectiveness of crowdsourced test report processing. I also suggest settings for different usage scenarios and discuss future research directions.
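The diversity strategy described above can be illustrated with a minimal sketch: greedily order reports so that each newly selected report is the one least similar to everything already chosen, pushing likely duplicates to the end of the inspection queue. This is a simplified, hypothetical illustration, not the dissertation's actual algorithm; it uses plain Jaccard similarity over word sets, whereas the real technique also incorporates the risk-assessment strategy and SPM-based screenshot similarity.

```python
# Hypothetical sketch of a greedy diversity-based prioritization of
# crowdsourced test reports. Simplification: text-only, Jaccard
# similarity over word sets; no risk scores or screenshot matching.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two word sets (0.0 when both empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def prioritize(reports: list[str]) -> list[int]:
    """Return report indices ordered so that each next report is the
    one least similar to all reports already selected."""
    if not reports:
        return []
    words = [set(r.lower().split()) for r in reports]
    remaining = list(range(len(reports)))
    order = [remaining.pop(0)]  # seed with the first report
    while remaining:
        # Pick the report whose maximum similarity to any already
        # selected report is lowest (i.e., the most novel one).
        nxt = min(remaining,
                  key=lambda i: max(jaccard(words[i], words[j]) for j in order))
        remaining.remove(nxt)
        order.append(nxt)
    return order

reports = [
    "app crashes when opening settings page",
    "crash on settings page after update",
    "login button unresponsive on android",
]
print(prioritize(reports))  # → [0, 2, 1]
```

The near-duplicate crash report (index 1) is deferred behind the unrelated login report (index 2), so an engineer inspecting reports in this order sees distinct bugs first.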
Crowdsourcing in Software Development: A State-of-the-Art Analysis
As software development cycles become shorter and shorter, while software complexity increases and IT budgets stagnate, many companies are looking for new ways of acquiring and sourcing knowledge outside their boundaries. One promising solution for aggregating know-how and managing large distributed teams in software development is crowdsourcing. This paper analyzes the existing body of knowledge regarding crowdsourcing in software development. As a result, we propose a fundamental framework with five dimensions to structure the existing insights on crowdsourcing in the context of software development and to derive a research agenda to guide further research.
The Role of Time Pressure in Software Projects: A Literature Review and Research Agenda
The finding that deadlines affect work in organizational settings holds particularly true for software projects, which are usually conducted under time pressure. While the role of time pressure in software projects has been extensively studied, the findings yielded are diverse. Some authors report a positive relationship between time pressure and software project outcomes, while others find it to be negative or to follow an inverted U-shape pattern. Since many aspects concerning time pressure remain unexplained and its relationship with project outcomes is more complex than it might seem at first glance, we synthesize pertinent research to develop a research agenda aimed at improving the understanding in this domain. Our literature review shows a variety of time pressure conceptualizations, research approaches, and research contexts. The results reveal an inconsistent picture of time pressure’s impact on software projects. Our research agenda includes five themes we deem beneficial to consider in future research.
Machines as Teammates: A Collaboration Research Agenda
Humans will soon need to adapt to a collaborative setting in which technology becomes a smart collaboration partner that works with a group to achieve its goals. It is therefore time for collaboration researchers to explore the vast opportunities afforded by smart technology and to test its utility for enhancing team processes and outcomes. In this paper, we take a long view on the implications of smart technology for collaboration process design, and propose a research agenda for the next decade of collaboration research. We create a reference model to frame the research agenda.