
    Developing Effective Crowdsourcing Systems for Medical Diagnosis: Challenges and Recommendations

    Diverse medical traditions follow different ‘grammars’, making it challenging to encapsulate their varied bodies of knowledge. However, advances in information technology in the 21st century provide an opportunity to aggregate knowledge from varied cultures and medical traditions to tackle difficult health issues for which no cure has been developed. In addition to accumulating knowledge from wide-ranging sources, an ideal crowdsourcing system (CS) can benefit from appropriate algorithms for choosing the best solution. This conceptual paper examines existing classifications of crowdsourcing and the various challenges involved in capturing and transmitting medical knowledge. It proposes the steps involved in developing an effective CS for dealing with medical problems. The ideal CS would involve the crowd and medical experts from across the world, who, together with the help of algorithms and other technology features in the CS, could provide useful solutions for hard-to-solve health problems.
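
    The paper stays at the conceptual level and does not prescribe a selection algorithm. As a minimal sketch of what such a component could look like, the snippet below implements a reputation-weighted vote over candidate solutions; the contributor identifiers, weights, and diagnosis labels are all hypothetical.

    ```python
    from collections import defaultdict

    def select_best_solution(proposals, weights):
        """Reputation-weighted vote over proposed solutions.

        proposals: list of (contributor_id, solution) pairs
        weights:   dict mapping contributor_id -> reputation weight
                   (e.g., higher for verified medical experts)
        Returns the solution with the highest total weight.
        """
        scores = defaultdict(float)
        for contributor, solution in proposals:
            scores[solution] += weights.get(contributor, 1.0)
        return max(scores, key=scores.get)

    # Hypothetical example: two weighted experts outweigh three lay contributors.
    proposals = [
        ("expert_a", "diagnosis_x"),
        ("expert_b", "diagnosis_x"),
        ("crowd_1", "diagnosis_y"),
        ("crowd_2", "diagnosis_y"),
        ("crowd_3", "diagnosis_y"),
    ]
    weights = {"expert_a": 2.0, "expert_b": 2.0}
    print(select_best_solution(proposals, weights))  # diagnosis_x
    ```

    Weighting by verified expertise is one simple way to combine crowd breadth with expert reliability; a real system would also need calibrated weights and safeguards against gaming.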

    Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing

    Over the last years, the use of crowdsourcing has gained a lot of attention in the domain of software engineering. One key aspect of software development is the testing of software. The literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for many kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, it gives a comprehensive overview of findings in CST research and classifies them into different software testing types.

    Using Social Media for Government Passive Expert-Sourcing

    Social media were initially used by government agencies for ‘citizen-sourcing’ oriented toward the general public. Though this enabled the collection of useful policy-relevant information and knowledge from the general public, and provided valuable insights into their perceptions, it would be quite useful to combine it with the collection of policy-relevant information and knowledge from experts as well (‘expert-sourcing’). In this paper, a passive expert-sourcing method based on social media, developed in a European research project, is evaluated from a fundamental perspective: that of wicked problems theory. In particular, we investigate to what extent this method enables government agencies to collect high-quality information concerning the main elements of important social problems to be addressed through public policies (the particular issues posed, alternative interventions/actions, and their advantages/disadvantages), as well as to what extent there is consensus about these elements among different stakeholder groups. For this purpose, data are collected through interviews with Members of the Greek Parliament. From their analysis, interesting conclusions are drawn about the strengths and weaknesses of this expert-sourcing method, as well as the improvements it requires.
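
    The abstract names the information the method should yield (issues, interventions, advantages/disadvantages, and cross-group consensus) but not a data model. The sketch below is one possible representation; the element kinds, stakeholder groups, and the simple consensus measure are illustrative assumptions, not the project's actual design.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class PolicyElement:
        """One element harvested from social media content: an issue,
        a proposed intervention, or a pro/con argument."""
        kind: str                 # "issue" | "intervention" | "advantage" | "disadvantage"
        text: str
        endorsements: dict = field(default_factory=dict)  # stakeholder group -> count

    def consensus(element, groups):
        """Fraction of stakeholder groups that endorsed the element at
        least once; 1.0 means every group raised or supported it."""
        supported = sum(1 for g in groups if element.endorsements.get(g, 0) > 0)
        return supported / len(groups)

    # Hypothetical groups and content, purely for illustration.
    groups = ["citizens", "experts", "businesses"]
    e = PolicyElement("intervention", "subsidize rooftop solar",
                      endorsements={"citizens": 14, "experts": 3})
    print(consensus(e, groups))  # 0.666...: two of three groups endorse it
    ```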

    A Replication of Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research – Sometimes Preferable to Student Groups

    This study is a replication of one of the two studies in “Beyond the Turk: Alternative platforms for crowdsourcing behavioral research” (Peer, Brandimarte, Samat, & Acquisti, 2017). We conduct an empirical analysis and comparison of two online crowdsourcing platforms, Amazon Mechanical Turk (MTurk) and Prolific Academic (ProA), as well as a traditional student group. Online crowdsourcing platforms such as MTurk have been used for years as a launching point for many types of microwork, including academic research. Today, MTurk has several competitors, including one built to focus on research tasks, ProA. Across the four segments, we reinforce the original study by finding that both MTurk and ProA provide inexpensive, reliable, and significantly faster methods of conducting surveys than traditional methods. Our results indicate that ProA, by centering on research, provides superior service: its results are similar to MTurk’s, but its response and completion rates, participant diversity, attention, naivety, and reproducibility are better, and it exhibits less dishonest behavior.
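
    As an illustration of the kind of per-platform comparison such a replication reports, the sketch below computes completion and attention-check rates from raw responses. The field names and the toy numbers are assumptions, not the study's data.

    ```python
    def platform_metrics(responses):
        """Summarize survey quality metrics for one platform.

        responses: list of dicts with boolean fields
                   'completed' and 'passed_attention_check'.
        """
        n = len(responses)
        completed = sum(r["completed"] for r in responses)
        attentive = sum(r["passed_attention_check"] for r in responses if r["completed"])
        return {
            "completion_rate": completed / n,
            "attention_rate": attentive / completed if completed else 0.0,
        }

    # Toy data, not the study's actual numbers.
    mturk = [{"completed": True, "passed_attention_check": True}] * 80 \
          + [{"completed": False, "passed_attention_check": False}] * 20
    proa = [{"completed": True, "passed_attention_check": True}] * 92 \
         + [{"completed": False, "passed_attention_check": False}] * 8
    print("MTurk:", platform_metrics(mturk))
    print("ProA: ", platform_metrics(proa))
    ```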

    Crowdsourcing as a platform for digital labor unions

    Complex global supply chains have made it difficult to know the realities inside factories. This structure obfuscates the networks, channels, and flows of communication between employers, workers, non-governmental organizations, and other vested intermediaries, creating a lack of transparency. Factories operate far from the brands themselves, often in developing countries where labor is cheap and regulations are weak. However, the emergence of social media and mobile technology has drawn the world closer together. Specifically, crowdsourcing is being used in innovative ways to gather feedback from outsourced laborers with access to digital platforms. This article examines how crowdsourcing platforms are used both to gather and to share information in order to foster accountability. We critically assess how these tools enable dialogue between brands and factory workers, making workers part of the greater conversation. We argue that although there are challenges in designing and implementing these new monitoring systems, these platforms can pave the path for new forms of unionization and corporate social responsibility beyond mere rebranding.

    The Right to City in the Era of Crowdsourcing

    This article explores the meaning and context of crowdsourcing at the municipal scale. In order to govern legitimately, local governments seek feedback and engagement from actors and bodies beyond the state. At the same time, crowdsourcing efforts are increasingly being adopted by entities, public and private, to digitally transform local services and processes. But how do we know what “the right to the city” (RTTC) means when it comes to meaningful and participatory decision-making? And what do participatory efforts called crowdsourcing, a practice articulated in a 2006 Wired article in the context of the tech sector, amount to when policy ideas are sought at the municipal scale? Grounded in the ideals of Henri Lefebvre’s RTTC, the article brings together typologies of public participation to advance a conceptualization of ‘crowdsourcing’ specific to local governance. Applying this approach to a smart city initiative in Toronto, Canada, I argue that for crowdsourcing to be taken seriously as a means of inclusive and participatory decision-making that advances the RTTC, it must be connected to governance mechanisms that integrate public perspectives into policy decisions. Where crowdsourcing is disconnected from decision-making processes, it is mere lip service, not meaningful participation.

    Simulating the Cost of Cooperation: A Recipe for Collaborative Problem-Solving

    Collective problem-solving and decision-making, along with other forms of online collaboration, are central phenomena within ICT. There have been several attempts to create systems able to go beyond the passive accumulation of data. However, those systems often neglect important variables such as group size, task difficulty, the tendency to cooperate, and the presence of selfish individuals (free riders). Given the complex relations among these variables, numerical simulations are an ideal tool for exploring them. We take into account the cost of cooperation in collaborative problem-solving by employing several simulated scenarios. The role of two parameters is explored: the capacity, i.e., the group’s capability to solve increasingly challenging tasks, coupled with the collective knowledge of the group; and the payoff, i.e., an individual’s own benefit in terms of newly acquired knowledge. The final cooperation rate is affected by the cost of cooperation only in the case of simple tasks and small communities. In contrast, the fitness of the community, the difficulty of the task, and the group size interact in a non-trivial way, shedding some light on how to improve crowdsourcing when the cost of cooperation is high.
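
    The abstract does not include the authors' simulation code. The toy agent-based sketch below shares its ingredients (group size, task difficulty, cooperation cost, payoff, and free riding) but is not a reproduction of their model; all parameter values are hypothetical.

    ```python
    import random

    def simulate(group_size, task_difficulty, coop_cost, payoff,
                 rounds=500, seed=0):
        """Toy model: each agent holds a cooperation propensity that is
        reinforced or weakened by the outcome of each round."""
        rng = random.Random(seed)
        propensity = [0.5] * group_size               # start undecided
        for _ in range(rounds):
            cooperators = {i for i in range(group_size)
                           if rng.random() < propensity[i]}
            # the task succeeds when enough knowledge is pooled
            success = len(cooperators) >= task_difficulty
            for i in range(group_size):
                if i in cooperators:
                    net = (payoff if success else 0.0) - coop_cost
                    step = 0.01 if net > 0 else -0.01
                else:
                    # free riding is reinforced when the group succeeds anyway
                    step = -0.01 if success else 0.01
                propensity[i] = min(1.0, max(0.0, propensity[i] + step))
        return sum(propensity) / group_size           # final cooperation rate

    # Compare a low-cost and a high-cost regime for the same small group
    # and easy task; parameter choices are hypothetical.
    print(simulate(group_size=10, task_difficulty=3, coop_cost=0.4, payoff=1.0))
    print(simulate(group_size=10, task_difficulty=3, coop_cost=1.5, payoff=1.0))
    ```

    Sweeping these parameters across larger groups and harder tasks is how the non-trivial interactions described in the abstract would surface in practice.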