
    Crowdsourcing Accessibility: Human-Powered Access Technologies

    People with disabilities have always engaged the people around them to circumvent inaccessible situations, allowing them to live more independently and get things done in their everyday lives. Increasing connectivity allows this approach to be extended to wherever and whenever it is needed. Technology can leverage this human workforce to accomplish tasks beyond the capabilities of computers, increasing how accessible the world is for people with disabilities. This article outlines the growth of online human support, describes a number of projects in this space, and presents a set of challenges and opportunities for this work going forward.

    SurveyMan: Programming and Automatically Debugging Surveys

    Surveys can be viewed as programs, complete with logic, control flow, and bugs. Word choice or the order in which questions are asked can unintentionally bias responses. Vague, confusing, or intrusive questions can cause respondents to abandon a survey. Surveys can also have runtime errors: inattentive respondents can taint results. This effect is especially problematic when deploying surveys in uncontrolled settings, such as on the web or via crowdsourcing platforms. Because the results of surveys drive business decisions and inform scientific conclusions, it is crucial to make sure they are correct. We present SurveyMan, a system for designing, deploying, and automatically debugging surveys. Survey authors write their surveys in a lightweight domain-specific language aimed at end users. SurveyMan statically analyzes the survey to provide feedback to survey authors before deployment. It then compiles the survey into JavaScript and deploys it either to the web or a crowdsourcing platform. SurveyMan's dynamic analyses automatically find survey bugs and control for the quality of responses. We evaluate SurveyMan's algorithms analytically and empirically, demonstrating its effectiveness with case studies of social science surveys conducted via Amazon's Mechanical Turk.
    Comment: Submitted version; accepted to OOPSLA 201
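    The idea of catching "runtime errors" caused by inattentive respondents can be illustrated with a simple heuristic (a sketch only; SurveyMan's actual analyses are described in the paper, and the function name here is hypothetical): a respondent whose answers look uniformly random over the answer options is a candidate for exclusion.

```python
import math
from collections import Counter

def random_respondent_score(answers, num_options):
    """Log-likelihood of the answers under a uniform-random model
    minus their log-likelihood under the respondent's own empirical
    distribution. Scores near zero suggest random clicking; strongly
    negative scores suggest a consistent (attentive) respondent."""
    counts = Counter(answers)
    n = len(answers)
    ll_uniform = n * math.log(1.0 / num_options)
    ll_empirical = sum(c * math.log(c / n) for c in counts.values())
    return ll_uniform - ll_empirical

# A mostly-consistent respondent vs. one spread evenly over 4 options.
consistent = ["a"] * 9 + ["b"]
random_like = ["a", "b", "c", "d"] * 3

score_c = random_respondent_score(consistent, 4)   # strongly negative
score_r = random_respondent_score(random_like, 4)  # close to zero
```

    A deployment could flag respondents whose score exceeds some threshold for review rather than dropping them automatically.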

    Frontiers in Crowdsourced Data Integration

    There is an ever-increasing amount and variety of open web data available that is insufficiently examined or not considered at all in decision-making processes. This is because of the lack of end-user-friendly tools that help to reuse this public data and to create knowledge out of it. Therefore, we propose a schema-optional data repository that provides the flexibility necessary to store and gradually integrate heterogeneous web data. Based on this repository, we propose a semi-automatic schema enrichment approach that efficiently augments the data in a “pay-as-you-go” fashion. Because of the ambiguities that inherently arise, we further propose a crowd-based verification component that is able to resolve such conflicts in a scalable manner.
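    A minimal sketch of what "pay-as-you-go" schema enrichment over a schema-optional store might look like (the function name, threshold, and record layout are illustrative assumptions, not the paper's design): attributes common across records are promoted to the schema, while rare or conflicting ones are deferred to crowd verification.

```python
from collections import Counter

def suggest_schema(records, support_threshold=0.5):
    """Promote attributes appearing in at least `support_threshold`
    of the records to the core schema; leave the rest as candidates
    to be resolved later (e.g. by crowd-based verification)."""
    n = len(records)
    freq = Counter(key for record in records for key in record)
    core = sorted(k for k, c in freq.items() if c / n >= support_threshold)
    candidates = sorted(k for k in freq if k not in core)
    return core, candidates

# Heterogeneous web records: 'colour' vs. 'cost' are rare/ambiguous.
records = [
    {"title": "A", "price": 10},
    {"title": "B", "price": 12, "colour": "red"},
    {"title": "C", "cost": 9},
]
core, candidates = suggest_schema(records)
# core is ['price', 'title']; 'colour' and 'cost' stay unresolved
# until a crowd worker confirms whether 'cost' maps to 'price'.
```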

    ENHANCING USERS’ EXPERIENCE WITH SMART MOBILE TECHNOLOGY

    The aim of this thesis is to investigate mobile guides for use with smartphones. Mobile guides have been successfully used to provide information, personalisation and navigation for the user. The researcher also wanted to ascertain how and in what ways mobile guides can enhance users' experience. This research involved designing and developing web-based applications to run on smartphones. Four studies were conducted, two of which involved testing of a particular application. The applications tested were a museum mobile guide application and a university mobile guide mapping application. Initial testing examined the prototype work for the ‘Chronology of His Majesty Sultan Haji Hassanal Bolkiah’ application. The results were used to assess the potential of using similar mobile guides in Brunei Darussalam’s museums. The second study involved testing of the ‘Kent LiveMap’ application for use at the University of Kent. Students at the university tested this mapping application, which uses crowdsourcing of information to provide live data. The results were promising and indicate that users' experience was enhanced when using the application. Overall results from testing and using the two applications that were developed as part of this thesis show that mobile guides have the potential to be implemented in Brunei Darussalam’s museums and on campus at the University of Kent. However, modifications to both applications are required to fulfil their potential and take them beyond the prototype stage in order to be fully functioning and commercially viable.

    KNOWLEDGE STOCK EXCHANGES: A CO-OPETITIVE CROWDSOURCING MECHANISM FOR E-LEARNING

    Modern information and communication technologies (ICT) provide numerous opportunities to support e-learning in higher education. Recent developments such as Massive Open Online Courses (MOOCs) utilize the scalability and interactivity of ICT to broaden the accessibility of university education. However, the potential of ICT in enhancing students' learning experience and success is far from being fully utilized. One potential area for the development of new e-learning mechanisms is at the intersection of collective intelligence and crowdsourcing mechanisms: the knowledge-disseminating ability of a collective intelligence platform combined with the interactivity and participative nature of crowdsourcing knowledge from fellow students may enhance motivation and acceptance of students' learning. Following a crowd-based approach, we present a prototype that offers a highly collaborative and competitive learning environment to improve the mutual exchange of knowledge as well as to encourage the development of a knowledge community. Our approach draws upon the principle of virtual stock markets (also known as prediction markets), a well-known collective intelligence mechanism which we enhanced with crowdsourcing elements. We describe the proposed system architecture, evaluate the practical feasibility of our prototype in the field and provide implications for future research.
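    The abstract does not specify how the virtual stock market is priced; a standard mechanism from the prediction-market literature is Hanson's logarithmic market scoring rule (LMSR), sketched below as an illustration of how such a market can quote prices. The liquidity parameter `b` and function names are assumptions for this sketch, not the prototype's actual implementation.

```python
import math

def lmsr_cost(quantities, b=10.0):
    """LMSR cost function over outstanding shares per outcome."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=10.0):
    """Instantaneous price of outcome i; prices sum to 1 and can be
    read as the market's consensus probability of that outcome."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def trade_cost(quantities, i, delta, b=10.0):
    """What a trader pays to buy `delta` shares of outcome i:
    the change in the cost function."""
    after = list(quantities)
    after[i] += delta
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Fresh two-outcome market: both outcomes start at price 0.5.
market = [0.0, 0.0]
cost = trade_cost(market, 0, 5.0)  # buying pushes outcome 0's price up
```

    In the e-learning setting described above, "outcomes" could correspond to competing answers contributed by students, with trading activity aggregating the crowd's belief about which answer is correct.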

    Minimizing efforts in validating crowd answers

    In recent years, crowdsourcing has become essential in a wide range of Web applications. One of the biggest challenges of crowdsourcing is the quality of crowd answers, as workers have wide-ranging levels of expertise and the worker community may contain faulty workers. Although various techniques for quality control have been proposed, a post-processing phase in which crowd answers are validated is still required. Validation is typically conducted by experts, whose availability is limited and who incur high costs. Therefore, we develop a probabilistic model that helps to identify the most beneficial validation questions in terms of both improvement of result correctness and detection of faulty workers. Our approach allows us to guide the expert's work by collecting input on the most problematic cases, thereby achieving a set of high-quality answers even if the expert does not validate the complete answer set. Our comprehensive evaluation using both real-world and synthetic datasets demonstrates that our techniques save up to 50% of expert efforts compared to baseline methods when striving for perfect result correctness. In absolute terms, for most cases, we achieve close to perfect correctness after expert input has been sought for only 20% of the questions.
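    The core idea of routing the most beneficial questions to the expert can be sketched with a simple uncertainty heuristic (an assumption for illustration; the paper's probabilistic model is richer and also weighs the detection of faulty workers): validate first the question whose crowd consensus is weakest.

```python
from collections import Counter

def answer_confidence(votes):
    """Majority answer and the fraction of workers agreeing with it."""
    majority, top = Counter(votes).most_common(1)[0]
    return majority, top / len(votes)

def next_question_for_expert(crowd_answers):
    """Pick the question with the least confident crowd consensus,
    i.e. where one expert validation buys the most certainty."""
    scored = {
        q: answer_confidence(votes)[1]
        for q, votes in crowd_answers.items()
    }
    return min(scored, key=scored.get)

crowd_answers = {
    "q1": ["cat", "cat", "cat", "dog"],    # 0.75 agreement
    "q2": ["red", "blue", "red", "blue"],  # 0.50 agreement: ask first
}
chosen = next_question_for_expert(crowd_answers)  # "q2"
```

    Iterating this loop, replacing each validated answer with the expert's, mirrors the paper's goal of approaching full correctness while querying the expert for only a fraction of the questions.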

    A Study of Ethics in Crowd Work-Based Research

    Crowd work as a form of a socio-technical system has become a popular setting for conducting and distributing academic research. Crowd work platforms such as Amazon Mechanical Turk (MTurk) are widely used by academic researchers. Recent scholarship has highlighted the importance of ethical issues because they could affect the long-term development and application of crowd work in various fields such as the gig economy. However, little study or deliberation has been conducted on the ethical issues associated with academic research in this context. Current sources for ethical research practice, such as the Belmont Report, have not been examined thoroughly with respect to how they should be applied to the ethical issues in crowd work-based research, such as those in data collection and usage. Hence, how crowd work-based research should be conducted to make it respectful, beneficent, and just is still an open question. This dissertation research has pursued this open question by interviewing 15 academic researchers and 17 IRB directors and analysts about their perceptions of and reflections on ethics in research on MTurk; meanwhile, it has analyzed 15 research guidelines and consent templates for research on MTurk and 14 published papers from the interviewed scholars. Based on analyzing these different sources of data, this dissertation research has identified three dimensions of ethics in crowd work-based research, including ethical issues in payment, data, and human subjects. This dissertation research also uncovered the “original sin” of these ethical issues and discussed its impact in academia, as well as the limitations of the Belmont Report and AoIR Ethical Guidelines 3.0 for Internet Research. The findings and implications of this research can help researchers and IRBs be more conscious about ethics in crowd work-based research and also inspire academic associations such as AoIR to develop ethical guidelines that can address these ethical issues.