Quality in Crowdsourcing - How software quality is ensured in software crowdsourcing
Crowdsourcing is a relatively new technique in which an organization publishes simple tasks or problems online and invites a specific group of people to contribute solutions, usually in exchange for some reward, most often a monetary one. Any kind of company can embrace this technique, but because the whole process takes place online and is out of the developing company's hands, it can prove problematic, especially for software development. Quality problems may arise during the process, such as a large number of non-serious submissions and vague solutions from people who are only after the monetary reward. For crowdsourcing to succeed, these problems need to be solved, and companies that use this method for software development need some form of quality assurance for their products. This study examines how companies using crowdsourcing deal with these problems and how they try to ensure a certain level of quality in the final product. We found that companies embracing crowdsourcing use several methods to ensure quality, such as rating, spam filters and reviews. There are many similarities in the underlying functions behind the methods each company uses, such as motivating participants or finding the best solutions, and these methods are applied at different stages throughout the crowdsourcing process. However, the exact relationship between the current use of these methods and their effect on software quality is not entirely apparent.
Guide to Crowdsourcing
The term “crowdsourcing” has been around for a decade. Although Wired writer Jeff Howe coined it in 2006, the ways in which news organizations define and employ it today vary enormously.
This guide is organized around a specific journalism-related definition of crowdsourcing and provides a new typology designed to help practitioners and researchers understand the different ways crowdsourcing is being used both inside and outside newsrooms. This typology is explored via interviews and case studies.
The research shows that crowdsourcing is credited with helping to create amazing acts of journalism. It has transformed newsgathering by introducing unprecedented opportunities for attracting sources with new voices and information, allowed news organizations to unlock stories that otherwise might not have surfaced, and created opportunities for news organizations to experiment with the possibilities of engagement just for the fun of it.
Certainly, though, crowdsourcing can be high-touch and high-energy, and not all projects work the first time.
To be sure, crowdsourcing businesses are flourishing outside of journalism. But within the news industry, wider systemic adoption may depend on more than enthusiasm from experienced practitioners and accolades from sources thrilled by the outreach.
Can a Machine Replace Humans in Building Regular Expressions? A Case Study
Regular expressions are routinely used in a variety of application domains. But building a regular expression involves a considerable amount of skill, expertise, and creativity. In this work, the authors investigate whether a machine can substitute for these qualities and automatically construct regular expressions for tasks of realistic complexity. They discuss a large-scale experiment involving more than 1,700 users on 10 challenging tasks. The authors compare the solutions constructed by these users to those constructed by a tool based on genetic programming that they recently developed and made publicly available. The quality of automatically constructed solutions turned out to be similar to the quality of those constructed by the most skilled user group; the time for automatic construction was likewise similar to the time required by human users.
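The core loop of such an evolutionary approach can be illustrated with a toy search (a minimal sketch of the general technique, not the authors' tool; the primitive set, the fullmatch-based fitness function, and the elitist selection scheme are our own assumptions):

```python
import random
import re

# Building blocks the search may concatenate (an illustrative choice).
PRIMITIVES = ["\\d", "\\w", "[a-z]", "[A-Z]", "+", "*", "-", "."]

def fitness(pattern, positives, negatives):
    """Count how many examples the pattern classifies correctly with fullmatch."""
    try:
        rx = re.compile(pattern)
    except re.error:
        return -1  # syntactically invalid candidates are heavily penalised
    score = sum(1 for s in positives if rx.fullmatch(s))
    score += sum(1 for s in negatives if not rx.fullmatch(s))
    return score

def mutate(cand, rng):
    """Insert, delete, or replace one primitive in a candidate (a list of primitives)."""
    cand = list(cand)
    op = rng.randrange(3)
    if op == 0 or not cand:
        cand.insert(rng.randrange(len(cand) + 1), rng.choice(PRIMITIVES))
    elif op == 1:
        cand.pop(rng.randrange(len(cand)))
    else:
        cand[rng.randrange(len(cand))] = rng.choice(PRIMITIVES)
    return cand

def evolve(positives, negatives, generations=500, pop_size=30, seed=0):
    """Evolve a regex that matches the positives and rejects the negatives."""
    rng = random.Random(seed)
    score = lambda c: fitness("".join(c), positives, negatives)
    pop = [[rng.choice(PRIMITIVES) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.append(mutate(rng.choice(pop), rng))
        pop.sort(key=score, reverse=True)  # elitist truncation selection
        pop = pop[:pop_size]
    return "".join(max(pop, key=score))
```

For example, with positives ["123", "7", "42"] and negatives ["abc", "", "12a"], a pattern such as \d+ attains the perfect fitness of 6, and the search typically converges on it or an equivalent pattern. A real genetic-programming tool would additionally use tree-structured candidates and crossover.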
Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing
Over the last few years, the use of crowdsourcing has gained a lot of attention in the domain of software engineering. One key aspect of software development is the testing of software. Literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for manifold kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, the paper gives a comprehensive overview of findings in CST research and provides a classification into different software testing types.
Creating a Live, Public Short Message Service Corpus: The NUS SMS Corpus
Short Message Service (SMS) messages are largely sent directly from one person to another from their mobile phones. They represent a means of personal communication that is an important communicative artifact in our current digital era. As most existing studies have used private access to SMS corpora, comparative studies using the same raw SMS data have not been possible up to now. We describe our efforts to collect a public SMS corpus to address this problem. We use a battery of methodologies to collect the corpus, paying particular attention to privacy issues to address contributors' concerns. Our live project collects new SMS message submissions, checks their quality and adds the valid messages, releasing the resultant corpus as XML and as SQL dumps, along with corpus statistics, every month. We opportunistically collect as much metadata about the messages and their senders as possible, so as to enable different types of analyses. To date, we have collected about 60,000 messages, focusing on English and Mandarin Chinese.
Comment: 31 pages, 6 figures, and 10 tables; submitted to the Language Resource and Evaluation Journal.
Towards Computational Assessment of Idea Novelty
On crowdsourcing ideation websites, companies can easily collect large numbers of ideas. Screening through such a volume of ideas is very costly and challenging, necessitating automatic approaches. It would be particularly useful to automatically evaluate idea novelty, since companies commonly seek novel ideas. Three computational approaches were tested, based on Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA) and term frequency–inverse document frequency (TF-IDF), respectively. These three approaches were applied to three sets of ideas, and the computed idea novelty was compared with human expert evaluation. The TF-IDF based measure correlated better with expert evaluation than the other two measures. However, our results show that these approaches do not match human judgement well enough to replace it.
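A TF-IDF based novelty measure of this kind can be sketched as follows (a rough illustration under our own assumptions, not the authors' implementation: here novelty is defined as one minus the maximum cosine similarity to any other idea in the set):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (as sparse dicts) for a list of tokenised ideas."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency of each term
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty_scores(docs):
    """Novelty of each idea = 1 - similarity to its nearest neighbour."""
    vecs = tfidf_vectors(docs)
    return [
        1.0 - max((cosine(v, w) for j, w in enumerate(vecs) if j != i), default=0.0)
        for i, v in enumerate(vecs)
    ]
```

An idea sharing no weighted terms with any other idea in the set scores 1.0, while near-duplicates score close to 0; terms occurring in every idea get an IDF weight of zero and so contribute nothing to similarity.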
Ranking for Web Data Search Using On-The-Fly Data Integration
Ranking - the algorithmic decision on how relevant an information artifact is for a given information need and the sorting of artifacts by their concluded relevancy - is an integral part of every search engine. In this book we investigate how structured Web data can be leveraged for ranking with the goal to improve the effectiveness of search. We propose new solutions for ranking using on-the-fly data integration and experimentally analyze and evaluate them against the latest baselines
An investigation into feature effectiveness for multimedia hyperlinking
The growing amount of archival multimedia content available online creates new opportunities for users interested in exploratory search behaviour such as browsing. The user experience with online collections could therefore be improved by enabling navigation and recommendation within multimedia archives, which can be supported by allowing a user to follow a set of hyperlinks created within or across documents. The main goal of this study is to compare the performance of different multimedia features for automatic hyperlink generation. In our work we construct multimedia hyperlinks by indexing and searching textual and visual features extracted from the blip.tv dataset. A user-driven evaluation strategy is then proposed using the Amazon Mechanical Turk (AMT) crowdsourcing platform, since we believe that AMT workers represent a good example of "real world" users. We conclude that textual features exhibit better performance than visual features for multimedia hyperlink construction. In general, a combination of ASR transcripts and metadata provides the best results.