    Validating results of human-based electronic services leveraging multiple reviewers

    Crowdsourcing in the form of human-based electronic services (people services) provides a powerful way of outsourcing so-called micro-tasks to large groups of people over the Internet in order to increase the scalability and productivity of business processes. However, quality management of the work results remains a challenge. Most existing approaches assume that multiple redundant results delivered by different people for the same task can be aggregated to achieve a reliable result, but for many task types, automatic aggregation or comparison of task results is not possible. Cost considerations and estimators for outgoing quality have also received little attention. Our majority review approach addresses these challenges by leveraging the crowd not only to deliver work results but also to validate the results delivered by others. An evaluation in a business context confirms that the approach is capable of delivering reliable results.
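
    The core mechanism described here — collecting independent review verdicts from other crowd workers and accepting a result on a strict majority — can be sketched as follows. This is a minimal illustration; the function name, default review count, and decision rule are assumptions, not the paper's exact procedure.

```python
def majority_review(verdicts: list[bool], min_reviews: int = 3) -> bool | None:
    """Decide whether to accept a worker's result from peer-review verdicts.

    verdicts    -- one boolean per crowd reviewer (True = result judged valid)
    min_reviews -- independent reviews to collect before deciding

    Returns True (accept), False (reject), or None if more reviews are needed.
    """
    if len(verdicts) < min_reviews:
        return None  # keep the task open and request another review
    return 2 * sum(verdicts) > len(verdicts)  # strict majority accepts
```

    A requester would keep issuing review micro-tasks for a result until the function returns a definite verdict, then accept or reject the original work accordingly.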

    Worker Perception of Quality Assurance Mechanisms in Crowdsourcing and Human Computation Markets

    Many human computation systems utilize crowdsourcing marketplaces to recruit workers. Because of the open nature of these marketplaces, requesters need appropriate quality assurance mechanisms to guarantee high-quality results. Previous research has mostly focused on the statistical aspects of quality assurance. Instead, we analyze worker perception of five quality assurance mechanisms (Qualification Test, Qualification Restriction, Gold Standard, Majority Vote, Validating Review) according to subjective (fairness, offense, benefit) and objective (necessity, accuracy, cost) criteria. Drawing on theory from related areas such as labor psychology, we develop a conceptual model and test it with a survey on Mechanical Turk. Our results reveal large differences in perception, especially with respect to Majority Vote, which workers rate poorly. On the basis of these results, we discuss implications for theory and advise requesters on crowdsourcing markets to integrate the worker view when selecting an appropriate quality assurance mechanism.

    Efficient Quality Management of Human-Based Electronic Services Leveraging Group Decision Making

    Human-based electronic services (people services) provide a powerful way of outsourcing tasks to a large crowd of remote workers over the Internet. Because of the limited control over the workforce in a potentially globally distributed environment, efficient quality management mechanisms are a prerequisite for the successful implementation of the people-service concept in a business context. Research has shown that multiple redundant results delivered by different workers can be aggregated to achieve a reliable result. However, existing implementations of this approach are highly inefficient, as they multiply the effort for task execution and cannot guarantee a given quality level. Our weighted majority vote (WMV) approach addresses this issue by dynamically adjusting the level of redundancy depending on the historical error rates of the involved workers and the level of agreement among them. A practical evaluation in an OCR scenario demonstrates that the approach is capable of delivering reliable results at significantly lower costs than existing procedures.
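
    One way to realize such a scheme is to weight each worker's vote by historical reliability and to stop requesting redundant executions once the leading answer holds a sufficient share of the total weight. The sketch below illustrates this under stated assumptions: the linear reliability weighting, the agreement threshold, and the helper names (`request_result`, `collect`) are illustrative, not the paper's exact estimator.

```python
from collections import defaultdict

def weighted_majority_vote(votes, error_rates, threshold=0.9):
    """Aggregate redundant results for one task.

    votes       -- list of (worker_id, answer) pairs collected so far
    error_rates -- worker_id -> historical error rate in [0, 1)
    threshold   -- share of total vote weight required to accept the answer

    Returns (answer, accepted): the leading answer and whether weighted
    agreement is already strong enough to stop adding redundancy.
    """
    weights = defaultdict(float)
    for worker, answer in votes:
        # Reliability-weighted vote; unknown workers count as coin flips.
        weights[answer] += 1.0 - error_rates.get(worker, 0.5)
    answer = max(weights, key=weights.get)
    total = sum(weights.values()) or 1.0
    return answer, weights[answer] / total >= threshold

def collect(task, request_result, error_rates, max_votes=7):
    """Request redundant results one at a time until agreement suffices."""
    votes = []
    for _ in range(max_votes):
        votes.append(request_result(task))  # post one more task instance
        answer, accepted = weighted_majority_vote(votes, error_rates)
        if accepted and len(votes) >= 2:
            return answer
    return answer  # budget exhausted: return the current leading answer
```

    Under this kind of rule, agreement among reliable workers ends the loop after few executions, which is where the cost savings over a fixed redundancy level would come from.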

    IODA - an Interactive Open Document Architecture

    The objective of the proposed architecture is to enable the representation of an electronic document as a multi-layered structure of executable digital objects that is extensible and does not depend on support for any particular formats or user interfaces. IODA layers are intended to reflect levels of document content organization rather than system abstraction or functional levels, as in software architecture models.
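
    The layered structure can be pictured as a simple data model: a document is an extensible stack of layers, each holding executable digital objects. This is a loose sketch of the idea only; the class and field names are assumptions, not IODA's actual specification.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class DigitalObject:
    """An executable unit of document content, independent of any format."""
    name: str
    payload: bytes = b""
    execute: Optional[Callable[[], object]] = None  # behavior, if any

@dataclass
class Layer:
    """One level of content organization (not a system or functional level)."""
    name: str
    objects: list[DigitalObject] = field(default_factory=list)

@dataclass
class Document:
    """An IODA-style document: an extensible stack of content layers."""
    layers: list[Layer] = field(default_factory=list)

    def add_layer(self, layer: Layer) -> None:
        self.layers.append(layer)  # new organization levels can be added freely
```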

    DEEP: a provenance-aware executable document system

    The concept of executable documents is attracting growing interest from both academics and publishers, since it is a promising technology for the dissemination of scientific results. Provenance is a kind of metadata that provides a rich description of the derivation history of data products, starting from their original sources. It has been used in many different e-Science domains and has shown great potential in enabling the reproducibility of scientific results. However, while both executable documents and provenance are aimed at enhancing the dissemination of scientific results, little has been done to explore the integration of the two techniques. In this paper, we introduce the design and development of DEEP, an executable document environment that generates scientific results dynamically and interactively, and also records the provenance for these results in the document. In this system, provenance is exposed to users via an interface that provides them with an alternative way of navigating the executable document. In addition, we make use of the provenance to offer users a document rollback facility and to help manage the system's dynamic resources.
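
    The provenance bookkeeping that such a system relies on — derivation records, lineage queries for navigation, and rollback — can be sketched as follows. The class and method names are assumptions for illustration, not DEEP's actual API.

```python
from datetime import datetime, timezone

class ProvenanceStore:
    """Append-only log of how each dynamic result in a document was derived."""

    def __init__(self):
        self.records = []

    def record(self, output_id, input_ids, operation):
        """Log one derivation step: operation applied to inputs yields output."""
        self.records.append({
            "output": output_id,
            "inputs": list(input_ids),
            "operation": operation,
            "at": datetime.now(timezone.utc),
        })

    def lineage(self, output_id):
        """Collect every step that (transitively) contributed to output_id,
        giving users a provenance-based view for navigating the document."""
        wanted, history = {output_id}, []
        for rec in reversed(self.records):
            if rec["output"] in wanted:
                history.append(rec)
                wanted.update(rec["inputs"])
        return list(reversed(history))

    def rollback(self, steps):
        """Discard the most recent derivations, reverting the document state."""
        if steps > 0:
            del self.records[-steps:]
```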

    Statistical Quality Control for Human-Based Electronic Services


    Department of Homeland Security Science and Technology Directorate: Developing Technology to Protect America

    In response to a congressional mandate, and in consultation with the Department of Homeland Security's (DHS) Science and Technology Directorate (S&T), the National Academy conducted a review of S&T's effectiveness and efficiency in addressing homeland security needs. The review paid particular attention to identifying any unnecessary duplication of effort and any opportunity costs arising from the emphasis on homeland security-related research. Under the direction of the National Academy panel, the study team reviewed a wide variety of documents related to S&T and to homeland security-related research in general. The team also conducted interviews with more than 200 individuals, including S&T officials and staff, officials from other DHS component agencies, other federal agencies engaged in homeland security-related research, and experts from outside government in science policy, homeland security-related research, and other scientific fields. Key findings: S&T faces a significant challenge in marshaling the resources of multiple federal agencies to work together on a homeland security-related strategic plan for all agencies. Yet the importance of this role should not be underestimated. The very process of working across agencies to develop and align the federal homeland security research enterprise around a forward-focused plan is critical to ensuring that future efforts support a common vision and goals, and that metrics are in place to measure national progress and to make changes as needed.

    Mixed-Methods in Information Systems Research: Status Quo, Core Concepts, and Future Research Implications

    Mixed-methods studies are increasing in information systems research, as they deliver robust and insightful inferences by combining qualitative and quantitative research. However, there is considerable divergence in how such studies are conducted and how their findings are reported. We therefore aim (1) to evaluate how mixed-methods studies have developed in information systems research given the existence of heavily used guidelines and (2) to reflect on these observations in terms of potential for future research. In our review, we identified 52 mixed-methods papers and quantitatively assessed their adherence to the three core concepts of mixed-methods research: purpose, meta-inferences, and validation. We find that only eight papers adhere to all three. We discuss the significance of our results for current and upcoming mixed-methods research and derive specific suggestions for authors. With our study, we contribute to mixed-methods research by showing how to leverage the insights from existing guidelines to strengthen future work, and by contributing to the general discussion of the normative role of research guidelines, presenting the status quo in the current literature.

    Large Language Models in Mental Health Care: a Scoping Review

    Objective: The growing use of large language models (LLMs) creates a need for a comprehensive review of their applications and outcomes in mental health care contexts. This scoping review critically analyzes the existing development and applications of LLMs in mental health care, highlighting their successes and identifying their challenges and limitations in these specialized fields. Materials and Methods: A broad literature search was conducted in November 2023 using six databases (PubMed, Web of Science, Google Scholar, arXiv, medRxiv, and PsyArXiv) following the 2020 version of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 313 publications were initially identified; after applying the study inclusion criteria, 34 publications were selected for the final review. Results: We identified diverse applications of LLMs in mental health care, including diagnosis, therapy, and patient engagement enhancement. Key challenges include data availability and reliability, nuanced handling of mental states, and effective evaluation methods. Despite successes in improving accuracy and accessibility, evident gaps in clinical applicability and ethical considerations point to the need for robust data, standardized evaluations, and interdisciplinary collaboration. Conclusion: LLMs show promising potential for advancing mental health care, with applications in diagnostics and patient support. Continued advancement depends on collaborative, multidisciplinary efforts focused on framework enhancement, rigorous dataset development, technological refinement, and ethical integration to ensure the effective and safe application of LLMs in mental health care.