
    Quality in Crowdsourcing - How software quality is ensured in software crowdsourcing

    Crowdsourcing is a relatively new technique in which an organization publishes simple tasks or problems online and a specific group of people contributes solutions, usually in exchange for a reward that is economic in nature. Any kind of company can embrace this technique, but because the work happens online, it can prove problematic, especially for software development, where the whole process is out of the developing company’s hands. Quality problems may arise, such as a large number of non-serious submissions or vague solutions from people who are merely chasing the monetary reward. For crowdsourcing to succeed, these problems need to be solved, and companies that use this method for software development need some quality assurance for their products. This study investigates how companies using crowdsourcing deal with these problems and how they try to ensure a certain level of quality in the final product. We found that companies embracing crowdsourcing use several methods to ensure quality, such as rating, spam filters and reviews. There are many similarities in the underlying functions behind the methods each company uses, such as motivating participants or finding the best solutions, and these methods are applied at different stages throughout the crowdsourcing process. The exact relationships between the current use of these methods and their effect on software quality are not entirely apparent.

    Beyond Worst-Case Budget-Feasible Mechanism Design


    On the Design and Analysis of Incentive Mechanisms in Network Science

    With the rapid development of communication, computing and signal processing technologies, the last decade has witnessed a proliferation of emerging networks and systems, examples of which can be found in a wide range of domains: online social networks like Facebook or Twitter, crowdsourcing sites like Amazon Mechanical Turk or Topcoder, online question-and-answer (Q&A) sites like Quora or Stack Overflow, and new paradigms of traditional systems like cooperative communication networks and the smart grid. Unlike traditional networks and systems, where users are governed by fixed and predetermined rules, users in these emerging networks have the ability to make intelligent decisions, and their interactions are self-enforcing. Therefore, to achieve better system-wide performance, it is important to design effective incentive mechanisms that stimulate desired user behaviors. This dissertation contributes to the study of incentive mechanisms by developing game-theoretic frameworks to formally analyze strategic user behaviors in a network and to systematically design incentive mechanisms that achieve a wide range of system objectives. We first consider cooperative communication networks and propose a reputation-based incentive mechanism to enforce cooperation among self-interested users. We analyze the proposed mechanism using an indirect reciprocity game and theoretically demonstrate the effectiveness of reputation in stimulating cooperation. Second, we propose a contract-based mechanism to incentivize a large group of self-interested electric vehicles with varying preferences to act in coordination and provide ancillary services to the power grid. We derive the optimal contract that maximizes the system designer's profits and propose an online learning algorithm to learn the optimal contract effectively. Third, we study the quality control problem for microtask crowdsourcing from the perspective of incentives.
After analyzing two widely adopted incentive mechanisms and showing their limitations, we propose a cost-effective incentive mechanism that can be employed to obtain high-quality solutions from self-interested workers while ensuring the requesters' budget constraint. Finally, we consider social computing systems, where value is created by voluntary user contributions and understanding how users participate is of key importance. We develop a game-theoretic framework to formally analyze the sequential decision making of strategic users in the presence of complex externalities. We show that our analysis is consistent with observations from real-world user behavior data and can be applied to guide the design of incentive mechanisms in practice.
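A weighted majority voting rule of the kind described above can be sketched as follows. The log-odds weighting and the worker-reliability values are illustrative assumptions, not the dissertation's exact formulation; the idea is simply that a spammer answering at random (reliability near 0.5) receives weight near zero.

```python
import math

def weighted_majority_vote(labels, reliabilities):
    """Aggregate binary labels (+1/-1) from workers, weighting each
    worker's vote by the log-odds of their estimated reliability p.
    A spammer (p ~ 0.5) gets weight ~ 0, so their vote is ignored."""
    score = 0.0
    for y, p in zip(labels, reliabilities):
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp away from 0 and 1
        score += math.log(p / (1 - p)) * y
    return 1 if score >= 0 else -1

# Three reliable workers vote +1; two spammers vote -1.
votes = [+1, +1, +1, -1, -1]
quality = [0.9, 0.8, 0.85, 0.5, 0.5]
print(weighted_majority_vote(votes, quality))  # 1
```

With equal weights this vote would still pass 3–2, but the weighting also lets one highly reliable worker overrule several near-random ones.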

    The Human Cloud in China: An Early Inquiry and Analysis

    Human Cloud (HC) is an internet-enabled business innovation (HC is similar to the more common term, crowdsourcing; both terms are defined below). HC is growing rapidly in America and Europe. Separately, and largely under the radar, HC is also growing very rapidly in China, primarily on Chinese-language websites. Like its western counterparts, Chinese HC is in its early stages; for example, the first Chinese Witkey and Crowdsourcing Conference was held in Guangzhou, China in 2010. The three major HC platforms in China are Zhubajie, epweike and TaskCN, each with millions of workers. In this paper we examine the current HC industry in China and compare it with the western HC industry, seeking to understand the limitations of Chinese HC growth; the comparison helps sharpen this inquiry. We begin with an overview of the academic literature and the latest industry statistics, including data from the key Chinese HC platforms. Our methodology focused on exploring this new terrain: we examined the platforms and culled data from their published statistics and from studying their processes; we went through the process of placing work or bids; we examined secondary sources, such as user discussions about these platforms; we visited news sites; and we had some opportunistic email communication with sellers. Two key conceptual issues emerge from our analysis: first, task types are quite different in Chinese HC; second, trust and fraud issues drive many of the differences between Chinese and western HC. We then propose a theoretical model for further study.

    Augmented Human Machine Intelligence for Distributed Inference

    With the advent of the internet of things (IoT) era and the extensive deployment of smart devices and wireless sensor networks (WSNs), interactions between humans and machine data are everywhere. In numerous applications, humans are essential parts of the decision-making process, where they may either serve as information sources or act as the final decision makers. For various tasks, including detection and classification of targets, detection of outliers, generation of surveillance patterns and interactions between entities, seamless integration of human and machine expertise is required, with both working simultaneously within the same modeling environment to understand and solve problems. Efficient fusion of information from both human and sensor sources is expected to improve system performance and enhance situational awareness. Such human-machine inference networks seek to build an interactive human-machine symbiosis by merging the best of the human with the best of the machine, achieving higher performance than either humans or machines by themselves. In this dissertation, we observe that people often exhibit a number of biases and rely on heuristics when exposed to different kinds of uncertainty, e.g., limited information versus unreliable information. We develop novel theoretical frameworks for collaborative decision making in complex environments where the observers may include both humans and physics-based sensors. We address fundamental concerns such as uncertainty and cognitive biases in human decision making, and derive human decision rules for binary decision making. We model decision making by generic humans working in complex networked environments that feature uncertainty, and develop new approaches and frameworks facilitating collaborative human decision making and cognitive multi-modal fusion.
The first part of this dissertation exploits the behavioral economics concept of Prospect Theory to study human binary decision making under cognitive biases. Several decision-making systems involving humans' participation are discussed, and we show the impact of human cognitive biases on decision-making performance. We analyze how heterogeneity affects the performance of collaborative human decision making in the presence of complex correlations among human behaviors, and we design a human selection strategy at the population level. Next, we employ Prospect Theory to model the rationality of humans and accurately characterize their behavior in answering binary questions. We design a weighted majority voting rule to solve classification problems via crowdsourcing while considering that the crowd may include some spammers. We also propose a novel sequential task ordering algorithm to improve system performance for classification in crowdsourcing systems composed of unreliable human workers. In the second part of the dissertation, we study the behavior of cognitively memory-limited humans in binary decision making and develop efficient approaches to help memory-constrained humans make better decisions. We show that the order in which information is presented to humans impacts their decision-making performance. Next, we consider the selfish behavior of humans and construct a unified incentive mechanism for IoT-based inference systems that addresses the selfish concerns of the participants. We derive the optimal amount of energy that a selfish sensor involved in a signal detection task must spend to maximize a certain utility function, in the presence of buyers who value the result of the signal detection carried out by the sensor. Finally, we design a human-machine collaboration framework that blends machine observations and human expertise to solve binary hypothesis testing problems semi-autonomously.
In networks featuring human-machine teaming and collaboration, it is critical to coordinate and synthesize the operations of humans and machines (e.g., robots and physical sensors). Machine measurements affect human behaviors, actions and decisions, while human behavior shapes the optimal decision-making algorithm for human-machine networks. In today's era of artificial intelligence, we aim not only to exploit augmented human-machine intelligence to ensure accurate decision making, but also to expand intelligent systems so as to assist and improve such intelligence.
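The Prospect Theory modeling this abstract refers to rests on a value function that is concave for gains and steeper (loss-averse) for losses. A minimal sketch follows, using the standard Kahneman–Tversky parameter estimates as an illustrative assumption; the dissertation's fitted parameters are not given here.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and steeper for losses (loss aversion, lam > 1).
    Parameter values are the commonly cited estimates, used
    here only for illustration."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion: a loss of 10 weighs more than a gain of 10.
print(abs(prospect_value(-10)) > prospect_value(10))  # True
```

In a binary decision setting, such a value function replaces expected utility and skews a human decision maker's threshold away from the Bayes-optimal one, which is the kind of bias the dissertation analyzes.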

    Augmenting the performance of image similarity search through crowdsourcing

    Crowdsourcing is defined as “outsourcing a task that is traditionally performed by an employee to a large group of people in the form of an open call” (Howe 2006). Many platforms have been designed to support several types of crowdsourcing, and studies have shown that results produced by crowds on crowdsourcing platforms are generally accurate and reliable. Crowdsourcing can provide a fast and efficient way to harness the power of human computation for problems that are difficult for machines. Of the several microtasking crowdsourcing platforms available, we chose to perform our study on Amazon Mechanical Turk. In the context of our research we studied the effect of user interface design, and its corresponding cognitive load, on the performance of crowd-produced results. Our results highlighted the importance of a well-designed user interface for crowdsourcing performance. Using crowdsourcing platforms such as Amazon Mechanical Turk, we can enlist humans to solve problems that are difficult for computers, such as image similarity search. In tasks like image similarity search, however, it is more efficient to design a hybrid human–machine system. We therefore studied the effect of involving the crowd on the performance of an image similarity search system and proposed a hybrid human–machine image similarity search system. Our proposed system uses machine power to perform the heavy computations and to search for similar images within the image dataset, and uses crowdsourcing to refine the results. We designed our content-based image retrieval (CBIR) system using the SIFT, SURF, SURF128 and ORB feature detectors/descriptors and compared the performance of the system with each. Our experiment confirmed that crowdsourcing can dramatically improve CBIR system performance.
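The machine side of such a CBIR system ranks candidate images by descriptor distance; binary descriptors like ORB are compared by Hamming distance. The toy sketch below (not the paper's implementation, and using tiny 8-bit stand-ins for real 256-bit ORB descriptors) illustrates the ranking step whose top results a crowd would then refine.

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors given as ints."""
    return bin(d1 ^ d2).count("1")

def image_distance(query_desc, cand_desc):
    """Average best-match Hamming distance from each query descriptor
    to a candidate image's descriptors (a simple matching score)."""
    return sum(min(hamming(q, c) for c in cand_desc)
               for q in query_desc) / len(query_desc)

def rank_candidates(query_desc, dataset):
    """Return candidate image ids ordered from most to least similar."""
    return sorted(dataset, key=lambda name: image_distance(query_desc, dataset[name]))

# Toy 8-bit "descriptors": img_a nearly matches the query, img_b does not.
query = [0b10110010, 0b01100101]
dataset = {
    "img_a": [0b10110011, 0b01100100],  # each 1 bit from a query descriptor
    "img_b": [0b00001111, 0b11110000],
}
print(rank_candidates(query, dataset))  # ['img_a', 'img_b']
```

In the hybrid design described above, only the machine-ranked shortlist would be shown to crowd workers, so the expensive human judgment is spent where it helps most.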

    Crowdsourcing Accessibility: Human-Powered Access Technologies

    People with disabilities have always engaged the people around them to circumvent inaccessible situations, allowing them to live more independently and get things done in their everyday lives. Increasing connectivity is allowing this approach to be extended to wherever and whenever it is needed. Technology can leverage this human workforce to accomplish tasks beyond the capabilities of computers, increasing how accessible the world is for people with disabilities. This article outlines the growth of online human support, describes a number of projects in this space, and presents a set of challenges and opportunities for this work going forward.