13,892 research outputs found

    The Evidence Hub: harnessing the collective intelligence of communities to build evidence-based knowledge

    Conventional document and discussion websites give users no help in assessing the quality or quantity of evidence behind any given idea. Moreover, the very meaning of evidence may not be unequivocally defined within a community, and may require deep understanding, common ground and debate. An Evidence Hub is a tool for pooling a community's collective intelligence about what counts as evidence for an idea. It provides an infrastructure for debating and building evidence-based knowledge and practice. An Evidence Hub is best thought of as a filter onto other websites: a map that distills the most important issues, ideas and evidence from the noise by making clear why ideas and web resources may be worth further investigation. This paper describes the Evidence Hub concept and rationale, the breadth of user engagement, and the evolution of specific features, derived from our work with different community groups in the healthcare and educational sectors.

    Harnessing Collaborative Technologies: Helping Funders Work Together Better

    This report was produced through a joint research project of the Monitor Institute and the Foundation Center. The research included an extensive literature review on collaboration in philanthropy, detailed analysis of trends from a recent Foundation Center survey of the largest U.S. foundations, interviews with 37 leading philanthropy professionals and technology experts, and a review of over 170 online tools. The report is a story about how new tools are changing the way funders collaborate. It includes three primary sections: an introduction to emerging technologies and the changing context for philanthropic collaboration; an overview of collaborative needs and tools; and recommendations for improving the collaborative technology landscape. A "Key Findings" executive summary serves as a companion piece to this full report.

    A survey of task-oriented crowdsourcing

    Since the advent of artificial intelligence, researchers have been trying to create machines that emulate human behaviour. Back in the 1960s, however, Licklider (IRE Trans Hum Factors Electron 4-11, 1960) believed that machines and computers were just part of a scale, with computers on one side and humans on the other (human computation). After almost a decade of active research into human computation and crowdsourcing, this paper presents a survey of crowdsourcing human computation systems, with the focus on solving micro-tasks and complex tasks. An analysis of the current state of the art is performed from a technical standpoint, including a systematized description of the terminologies used by crowdsourcing platforms and the relationships between each term. Furthermore, the similarities between task-oriented crowdsourcing platforms are described and presented in a process diagram according to a proposed classification. Using this analysis as a stepping stone, the paper concludes with a discussion of challenges and possible future research directions. This work is part-funded by the ERDF (European Regional Development Fund) through the COMPETE Programme (Operational Programme for Competitiveness) and by national funds through FCT, the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology), within Ph.D. Grant SFRH/BD/70302/2010 and the projects AAL4ALL (QREN 11495), World Search (QREN 13852) and FCOMP-01-0124-FEDER-028980 (PTDC/EEI-SII/1386/2012). The authors also thank Jane Boardman for her assistance proofreading the document.

    Information Systems for “Wicked Problems” - Research at the Intersection of Social Media and Collective Intelligence

    The objective of this commentary is to propose fruitful research directions built upon the reciprocal interplay of social media and collective intelligence. We focus on “wicked problems” – a class of problems that Introne et al. (Künstl. Intell. 27:45–52, 2013) call “problems for which no single computational formulation of the problem is sufficient, for which different stakeholders do not even agree on what the problem really is, and for which there are no right or wrong answers, only answers that are better or worse from different points of view”. We argue that information systems research in particular can aid in designing appropriate systems due to benefits derived from the combined perspectives of both social media and collective intelligence. We document the relevance and timeliness of social media and collective intelligence for business and information systems engineering, pinpoint needed functionality of information systems for wicked problems, describe related research challenges, highlight prospective suitable methods to tackle those challenges, and review examples of initial results.

    Dynamics of Content Quality in Collaborative Knowledge Production

    We explore the dynamics of user performance in collaborative knowledge production by studying the quality of answers to questions posted on Stack Exchange. We propose four indicators of answer quality: answer length, the number of code lines it contains, the number of hyperlinks to external web content it contains, and whether the asker accepts it as the most helpful answer to the question. Analyzing millions of answers posted over the period from 2008 to 2014, we uncover regular short-term and long-term changes in quality. In the short term, quality deteriorates over the course of a single session, with each successive answer becoming shorter, containing fewer code lines and links, and being less likely to be accepted. In contrast, performance improves over the long term, with more experienced users producing higher-quality answers. These trends are not a consequence of data heterogeneity but rather have a behavioral origin. Our findings highlight the complex interplay between short-term deterioration in performance, potentially due to mental fatigue or attention depletion, and long-term improvement due to learning and skill acquisition, and its impact on the quality of user-generated content.
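    The first three indicators above are simple structural features of an answer body. As a rough illustration only (not the paper's actual pipeline), they might be computed from a Markdown-formatted answer like this; the function name and return shape are hypothetical:

```python
import re

def answer_quality_indicators(body_md: str, is_accepted: bool) -> dict:
    """Rough proxies for the four indicators: length, code lines,
    links, and acceptance. `body_md` is assumed to be the answer
    body in Markdown; the accepted flag comes from post metadata."""
    # Fenced code blocks, so code is not counted as prose.
    code_blocks = re.findall(r"```.*?```", body_md, flags=re.DOTALL)
    prose = re.sub(r"```.*?```", "", body_md, flags=re.DOTALL)

    # Indicator 1: answer length, here measured in words of prose.
    word_count = len(prose.split())

    # Indicator 2: number of lines inside fenced code blocks.
    code_lines = sum(
        len(block.strip("`").strip("\n").splitlines()) for block in code_blocks
    )

    # Indicator 3: hyperlinks (Markdown links or bare URLs) in the prose.
    links = re.findall(r"\[[^\]]*\]\([^)]+\)|https?://\S+", prose)

    return {
        "words": word_count,
        "code_lines": code_lines,
        "links": len(links),
        "accepted": is_accepted,  # Indicator 4, taken from metadata.
    }
```

    A session-level analysis like the paper's would then track how these values change across a user's consecutive answers.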

    Seeding Knowledge Solutions Before, During, and After

    In the age of competence, one must learn before, during, and after the event. Knowledge solutions lie in the areas of strategy development, management techniques, collaboration mechanisms, knowledge sharing and learning, and knowledge capture and storage. Competence is the state or quality of being adequately or well qualified to deliver a specific task, action, or function successfully. It is also a specific range of knowledge, skills, or behaviors utilized to improve performance. Today, sustainable competitive advantage derives from strenuous efforts to identify, cultivate, and exploit an organization’s core competencies, the tangible fruits of which are composite packages of products and services that anticipate and meet demand. (Yesteryear, instead of strengthening the roots of competitiveness, the accent was placed on business units. Innately, given their defining characteristics, business units under-invest in core competencies, incarcerate resources, and bind innovation, when they do not stifle it.) Core competencies are integrated and harmonized abilities that provide potential access to markets; create and deliver value to audiences, clients, and partners there; and are difficult for competitors to imitate. They depend on relentless design of strategic architecture, deployment of competence carriers, and commitment to collaborate across silos. They are the product of collective learning.
