
    Making Task Recommendations in Crowdsourcing Contests

    Crowdsourcing contests have emerged as an innovative way for firms to solve business problems by acquiring ideas from participants external to the firm. To facilitate such contests, a number of crowdsourcing platforms have emerged in recent years. A crowdsourcing platform provides a two-sided marketplace with one set of members (seekers) posting tasks, and another set of members (solvers) working on these tasks and submitting solutions. As crowdsourcing platforms attract more seekers and solvers, the number of tasks that are open at any time can become quite large. Consequently, solvers search only a limited number of tasks before deciding which one(s) to participate in, often examining only those tasks that appear on the first couple of pages of the task listings. This kind of search behavior has potentially detrimental implications for all parties involved: (i) solvers typically end up participating in tasks they are less likely to win relative to some other tasks, (ii) seekers receive solutions of poorer quality compared to a situation where solvers are able to find tasks that they are more likely to win, and (iii) when seekers are not satisfied with the outcome, they may decide to leave the platform; therefore, the platform could lose revenues in the short term and market share in the long term. To counteract these concerns, platforms can provide recommendations to solvers in order to reduce their search costs for identifying the most preferable tasks. This research proposes a methodology to develop a system that can recommend tasks to solvers who wish to participate in crowdsourcing contests. A unique aspect of this environment is that it involves competition among solvers. The proposed approach explicitly models the competition that a solver would face in each open task and makes recommendations based on the probability of the solver winning that task. A multinomial logit model is developed to estimate these winning probabilities. We have validated our approach using data from a real crowdsourcing platform.
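    The recommendation logic described above can be illustrated with a short sketch: score the focal solver against the expected competitors in each open task, convert the scores to winning probabilities with a multinomial-logit (softmax) form, and rank tasks by that probability. The feature names, coefficients, and data below are hypothetical and stand in for quantities the paper estimates from platform data.

```python
import numpy as np

def win_probabilities(utilities):
    """Multinomial-logit form: P(solver i wins) = exp(v_i) / sum_k exp(v_k) over a task's competitors."""
    v = np.asarray(utilities, dtype=float)
    v -= v.max()                      # subtract the max for numerical stability
    expv = np.exp(v)
    return expv / expv.sum()

def recommend_tasks(solver_id, open_tasks, beta):
    """Rank open tasks by the focal solver's estimated probability of winning."""
    scored = []
    for task in open_tasks:
        solvers = task["solvers"]
        utils = [beta @ task["features"][s] for s in solvers]   # linear utility per competitor
        p_focal = win_probabilities(utils)[solvers.index(solver_id)]
        scored.append((task["task_id"], p_focal))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy usage with made-up features (skill match, past win rate) and assumed coefficients.
beta = np.array([1.2, 0.8])
open_tasks = [
    {"task_id": "logo-17", "solvers": ["a", "b", "c"],
     "features": {"a": np.array([0.9, 0.4]), "b": np.array([0.5, 0.6]), "c": np.array([0.2, 0.1])}},
    {"task_id": "app-03", "solvers": ["a", "d"],
     "features": {"a": np.array([0.3, 0.4]), "d": np.array([0.8, 0.7])}},
]
print(recommend_tasks("a", open_tasks, beta))
```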

    Recommendation systems and crowdsourcing: a good wedding for enabling innovation? Results from technology affordances and constraints theory

    Recommendation Systems have come a long way since their first appearance in e-commerce platforms. Since then, evolved Recommendation Systems have been successfully integrated into social networks. Now it is time to test their usability and replicate their success in exciting new areas of web-enabled phenomena. One of these is crowdsourcing. Research in the Information Systems (IS) field is investigating the need for, and the benefits and challenges of, linking the two phenomena. At the moment, empirical works have only highlighted the need to implement these techniques for task assignment in crowdsourcing distributed work platforms and the resulting benefits for contributors and firms. We review the variety of tasks that can be crowdsourced through these platforms and theoretically evaluate the efficiency of using Recommendation Systems to recommend a task in creative crowdsourcing platforms. Adopting the Technology Affordances and Constraints Theory, an emerging perspective in the IS literature for understanding technology use and its consequences, we anticipate the tensions that this implementation can generate.

    Problem Specification in Crowdsourcing Contests: A Natural Experiment

    Problem specification is a key aspect of crowdsourcing contests, through which seekers convey their requirements and taste for the desired submissions. Hence, it is important to understand how problem specifications should be framed to achieve better crowdsourcing contest outcomes. In this empirical study, we investigate the effects of a relatively more structured problem specification on contest quantity, solver quantity, and idea quality. We leverage a natural experiment on a major crowdsourcing contest platform where the problem specification of logo design contests changed from open-ended to structured. Our results show that the specification change impacts both seekers and solvers: the number of contests increases after the change, but solver quantity and idea quality in the respective contests tend to be lower. We discuss the theoretical and practical contributions of this research.
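    A minimal sketch of the pre/post comparison implied by such a natural experiment is shown below: a contest-level outcome (here, solver quantity) is regressed on an indicator for the structured-specification regime. The data and column names are hypothetical and only illustrate the analysis form, not the study's estimates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# 1 = contest launched after the platform switched to structured specifications
contests = pd.DataFrame({
    "post_structured": [0, 0, 0, 0, 1, 1, 1, 1],
    "solver_quantity": [34, 41, 29, 38, 27, 30, 25, 31],
})

# OLS of the outcome on the regime indicator; a negative coefficient on
# post_structured would indicate fewer solvers per contest after the change.
model = smf.ols("solver_quantity ~ post_structured", data=contests).fit()
print(model.params)
```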

    The Challenges of Knowledge Combination in ML-based Crowdsourcing – The ODF Killer Shrimp Challenge using ML and Kaggle

    Organizations are increasingly using digital technologies, such as crowdsourcing platforms and machine learning, to tackle innovation challenges. These technologies often require combining heterogeneous technical and domain-specific knowledge from diverse actors to achieve the organization’s innovation goals. While research has focused on knowledge combination for relatively simple tasks on crowdsourcing platforms and within ML-based innovation, we know little about how knowledge is combined in emerging innovation approaches that incorporate ML and crowdsourcing to solve domain-specific innovation challenges. Thus, this paper investigates the following question: what are the challenges to knowledge combination in domain-specific ML-based crowdsourcing? We conducted a case study of an environmental challenge (how to use ML to predict the spread of a marine invasive species) led by the Swedish consortium Ocean Data Factory Sweden, using the crowdsourcing platform Kaggle. After discussing our results, we end the paper with recommendations on how to integrate crowdsourcing into domain-specific digital innovation processes.

    Eyes on the Prize: Increasing the Prize May Not Benefit the Contest Organizer in Multiple Online Contests

    Given the proliferation of online platforms for crowdsourcing contests, we address the inconsistencies in the extant literature about the behavioral effects of increasing the prize awarded by contest organizers. We endeavor to resolve these inconsistencies by analyzing user behavior in a highly controlled experimental setting in which users can participate (by exerting real effort rather than stated effort) in multiple online contests that vary only in their prizes. The analysis of the behavior of 731 active participants in our first experiment showed that both participation and effort were non-monotonic in the prize, that the low-prize contest was the most effective for the organizers, and that increasing the prize of the low-prize or high-prize contest by 50% actually decreased the benefits for organizers. Our findings advance theory by providing insight into when and why extrinsic incentives fail to produce the desired effects in crowdsourcing contests.

    Understanding the Effect of Task Descriptions on User Participation in Crowdsourcing Contests: A Linguistic Style Perspective

    Many employers struggle to deliver attractive tasks on crowdsourcing platforms through which users can be effectively integrated into a company’s work. In this study, the linguistic style of crowdsourcing task descriptions is investigated, and an analysis is conducted on how such linguistic styles relate to a task description’s success in attracting participants. Based on uncertainty reduction theory as well as source credibility theory, an empirical analysis of 2,014 design contests demonstrates that certain linguistic styles reduce the uncertainty perceived by crowdsourcing solvers and increase employers’ credibility, generating positive effects on participation. These effects are moderated by the magnitude of the rewards offered for completing the crowdsourcing tasks. The results of this study inform theories of crowdsourcing participation, linguistics, and psychological processes, while offering industry insight into how to write better crowdsourcing task descriptions.
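    The moderation analysis described above can be sketched as a regression of participation on a linguistic-style score interacted with the reward amount. The variables and data below are hypothetical and stand in for the style measures and the 2,014 contests analyzed in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical contest-level data: participation count, an assumed linguistic
# clarity score for the task description, and the posted reward.
tasks = pd.DataFrame({
    "participants":  [12, 25, 8, 30, 18, 22, 9, 27],
    "clarity_score": [0.4, 0.8, 0.3, 0.9, 0.6, 0.7, 0.2, 0.85],
    "reward_usd":    [100, 300, 150, 500, 200, 250, 120, 400],
})

# The clarity_score:reward_usd interaction captures whether reward magnitude
# moderates the effect of linguistic style on participation.
model = smf.ols("participants ~ clarity_score * reward_usd", data=tasks).fit()
print(model.params)
```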

    Fair play: Perceived fairness in crowdsourcing competitions and the customer relationship-related consequences

    TeleRehab enables rehabilitation services to be delivered at a distance by supporting information exchange between patients with disabilities and clinical professionals. Readiness should always be assessed in any adoption of healthcare services, as it is a requirement for the successful implementation of an innovation. However, little scholarly work has been undertaken to study readiness in the context of TeleRehab and the various barrier factors that influence its adoption. This research explores the barrier factors that influence the readiness of healthcare institutions to adopt TeleRehab. The paper presents a case study based on semi-structured interviews with 23 clinical professionals on the issues of TeleRehab readiness in one rehabilitation centre in Malaysia. By applying thematic analysis, the study uncovers seven barriers that affect TeleRehab readiness: no urgency to change, low awareness, limited involvement in planning, insufficient exposure to e-Healthcare knowledge, resistance to change, low usage of hardware and software, and limited connectivity. The study contributes to both TeleRehab management and technology readiness research in hospitals.

    Crowdsourcing Contests: Understanding the Effect of Environment and Organization Specific Factors on Sustained Participation

    Crowdsourcing has increasingly become a recognized problem-solving mechanism for organizations, in which a problem is outsourced to an undefined crowd of people. The success of crowdsourcing depends on the sustained participation and quality submissions of individuals. Yet, little is known about the environment-specific and organization-specific factors that influence individuals’ continued participation in these contests. We address this research gap by conducting an empirical study using data from an online crowdsourcing contest platform, Kaggle, which delivers data science and machine learning solutions to its clients. The findings show statistically significant effects of structural capital, familiarity with the organization, and experience with the organization on individuals’ sustained participation in crowdsourcing contests. This research contributes to the literature by identifying the environment-specific and organization-specific factors that influence individuals’ sustained participation in crowdsourcing contests. Moreover, this study offers guidance to organizations that host a crowdsourcing platform on how to design, implement, and operate successful crowdsourcing contest platforms.
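    One way to operationalize the analysis described above is a regularized logistic regression of a solver's continued participation on the three factors named in the abstract. The covariates, coding, and data below are hypothetical and only illustrate the shape of such a model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: structural capital (e.g. ties formed on the platform),
# familiarity with the host organization (0/1), prior contests with that organization.
X = np.array([
    [5, 1, 3], [1, 0, 0], [4, 1, 2], [6, 1, 4],
    [2, 0, 1], [5, 0, 2], [1, 0, 0], [7, 1, 5],
])
y = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # 1 = solver kept participating

# L2-regularized logistic regression; coefficient signs indicate each factor's
# association with sustained participation in this toy data.
clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)
```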
