2,036 research outputs found

    Collaboration among Crowdsourcees: Towards a Design Theory for Collaboration Process Design

    Crowdsourcing is used for collaborative problem solving in different domains. Optimal solutions are mostly found through collaboration among the crowdsourcees. The current state of research in this field addresses this topic mainly with an explorative focus on a specific domain, such as idea contests. We gather and analyze the contributions on collaboration in crowdsourcing from the different domains. We present a framework for a general collaboration process model for crowdsourcing. To derive this framework, we conducted a literature review and set up a database that assigns the literature to the process steps we identified from interaction patterns in the literature. The framework considers phases before and after the collaboration among crowdsourcees and includes relevant activities that can influence the collaboration process. This paper contributes to a deeper understanding of the interaction among crowdsourcees and provides crowdsourcers with a grounding for the informed design of effective collaborative crowdsourcing processes.

    Waking Up a Sleeping Giant: Lessons from Two Extended Pilots to Transform Public Organizations by Internal Crowdsourcing

    Digital transformation is a main driver for change, evolution, and disruption in organizations. As digital transformation is not determined by technological advancement alone, public organizations must change their practices and culture alike. A mechanism that seeks to realize employee engagement in adopting innovative modes of problem solving is internal crowdsourcing, which flips the mode of operation from top-down to bottom-up. This concept is thus disrupting public organizations, as it builds heavily on IT-enabled engagement platforms that overcome the barriers of functional expertise and routine processes. In this paper, we reflect on two design science projects that were piloted for six months within public organizations. We derive insights on the sociotechnical effects of internal crowdsourcing on organizational culture, social control, individual resources, motivation, and empowerment. Furthermore, using social cognitive theory, we propose design propositions for internal crowdsourcing that guide future research and practice-oriented approaches to enable innovation in public organizations.

    Crowdsourcing Data Science: A Qualitative Analysis of Organizations’ Usage of Kaggle Competitions

    In light of the ongoing digitization, companies accumulate data that they want to transform into value. However, data scientists are rare, and organizations are struggling to acquire talent. At the same time, individuals who are interested in machine learning are participating in competitions on data science internet platforms. To investigate whether companies can tackle their data science challenges by hosting data science competitions on internet platforms, we conducted ten interviews with data scientists. While there are various perceived benefits, such as discussing with participants and learning new, state-of-the-art approaches, these competitions can cover only a fraction of the tasks that typically occur during data science projects. We identified 12 factors within three categories that influence an organization’s perceived success when hosting a data science competition.

    Accurator: Nichesourcing for Cultural Heritage

    With more and more cultural heritage data being published online, its usefulness in this open context depends on the quality and diversity of descriptive metadata for collection objects. In many cases, existing metadata is not adequate for a variety of retrieval and research tasks, and more specific annotations are necessary. However, eliciting such annotations is a challenge, since it often requires domain-specific knowledge. While crowdsourcing can be used successfully to elicit simple annotations, identifying people with the required expertise might prove troublesome for tasks requiring more complex or domain-specific knowledge. Nichesourcing addresses this problem by tapping into the expert knowledge available in niche communities. This paper presents Accurator, a methodology for conducting nichesourcing campaigns for cultural heritage institutions by addressing communities, organizing events, and tailoring a web-based annotation tool to a domain of choice. The contribution of this paper is threefold: 1) a nichesourcing methodology, 2) an annotation tool for experts, and 3) validation of the methodology and tool in three case studies. The three domains of the case studies are birds on art, Bible prints, and fashion images. We compare the quality and quantity of annotations obtained in the three case studies, showing that the nichesourcing methodology, in combination with the image annotation tool, can be used to collect high-quality annotations across a variety of domains and annotation tasks. A user evaluation indicates the tool is suited and usable for domain-specific annotation tasks.

    Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing

    In recent years, the use of crowdsourcing has gained a lot of attention in the domain of software engineering. One key aspect of software development is the testing of software. The literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for manifold kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, the paper gives a comprehensive overview of findings in CST research and provides a classification into different software testing types.

    How Does the Crowdsourcing Experience Impact Participants' Engagement? An Empirical Illustration

    A largely neglected aspect in crowdsourcing research is the “Crowdsourcing Experience” itself, to which every crowdsourcee is necessarily exposed throughout the IT-mediated interaction process and which potentially stimulates engagement towards the crowdsourcer. Hence, the crowdsourcees’ engagement process is conceptualized and illustrated with empirical findings from a pilot case. The case exemplifies that crowdsourcing has the potential to generate high levels of attitudinal and behavioral engagement, depending on prior experiences and perceived cognitions and emotions. Related stimulus characteristics are identified, which serve as a first indication of the foundations of the engagement process. This study offers IS researchers initial insights into the hitherto under-researched topic of IT-enabled engagement processes between individuals and entities.

    Measuring and improving data quality of media collections for professional tasks

    Carrying out research tasks on data collections is hampered, or even made impossible, by data quality issues of varying type (such as incompleteness or inconsistency) and severity. We identify research tasks carried out by professional users of data collections that are hampered by inherent quality issues. We investigate what types of issues exist and how they influence these research tasks. To measure the quality perceived by professional users, we develop a quality metric. This allows us to measure the suitability of the data quality for a chosen user task.
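    The abstract does not spell out the metric itself, but the idea of scoring a collection's quality relative to a chosen user task can be sketched. In the hypothetical example below (the field names, weights, and completeness-only check are all invented for illustration, not taken from the paper), each record is scored by how many task-relevant fields are filled in, weighted by each field's importance for that task:

    ```python
    # Hypothetical sketch of a task-weighted quality metric: score a record
    # by the presence of the fields a given user task depends on, weighted
    # by per-field importance. Field names and weights are invented.

    def quality_score(record, weights):
        """Return a 0..1 quality score for one collection record.

        weights: per-field importance for the chosen task (assumed to sum to 1).
        A field contributes its weight only if it has a non-empty value.
        """
        score = 0.0
        for field, weight in weights.items():
            value = record.get(field)
            if value not in (None, ""):  # simple completeness check
                score += weight
        return score

    # Example: the "creator" field is missing, so its weight is lost.
    record = {"title": "Still Life", "creator": "", "date": "1650"}
    weights = {"title": 0.5, "creator": 0.3, "date": 0.2}
    print(round(quality_score(record, weights), 2))  # prints 0.7
    ```

    Averaging this score over all records would then give one way to compare the collection's fitness for different tasks, since each task implies its own weight profile.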

    Workers’ Task Choice in Crowdsourcing and Human Computation Markets

    In human computation systems, humans and computers work together to solve hard problems. Many of these systems use crowdsourcing marketplaces to recruit workers. However, the task selection process of crowdsourcing workers is still unclear. We therefore outline this process and propose a structural model showing the criteria that workers use to choose tasks. The model is based on person-job fit theory, which includes the measures demands-abilities fit and needs-supplies fit, in order to explain work intention. We adapt the needs-supplies fit to the specific requirements of crowdsourcing markets by adding concepts for payment fit, enjoyment fit, and time fit. We further assume that the task presentation can have an effect on work intention. In this research-in-progress paper, we present our measures and experimental design as well as our newly developed method for participant recruitment. Our work could have strong implications for organizations using crowdsourcing marketplaces.
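    The fit dimensions named in the abstract can be illustrated as a simple composite score. The sketch below is purely hypothetical: the dimension names follow the abstract (demands-abilities fit plus a needs-supplies fit split into payment, enjoyment, and time fit), but the weights, the 1-7 rating scale, and the linear aggregation are invented here and are not the study's actual structural model:

    ```python
    # Hypothetical illustration: aggregate per-dimension fit ratings into
    # a single work-intention proxy. Weights and scales are invented.

    FIT_WEIGHTS = {
        "demands_abilities": 0.4,  # can the worker do the task?
        "payment": 0.3,            # does the reward meet the worker's needs?
        "enjoyment": 0.2,          # is the task intrinsically appealing?
        "time": 0.1,               # does the effort fit the available time?
    }

    def work_intention(ratings):
        """Weighted mean of 1-7 fit ratings; higher = more likely to pick the task."""
        return sum(FIT_WEIGHTS[dim] * ratings[dim] for dim in FIT_WEIGHTS)

    ratings = {"demands_abilities": 6, "payment": 4, "enjoyment": 7, "time": 5}
    print(round(work_intention(ratings), 2))  # 0.4*6 + 0.3*4 + 0.2*7 + 0.1*5 = 5.5
    ```

    In the actual study such relationships would be estimated from survey data rather than fixed by hand; the sketch only shows how the separate fit constructs combine into one task-choice criterion.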