
    A Model-Driven Approach for Crowdsourcing Search

    Even though search systems are very efficient in retrieving world-wide information, they cannot capture some peculiar aspects of user needs, such as subjective opinions and recommendations, or information that requires local or domain-specific expertise. In such scenarios, the human opinion provided by an expert or knowledgeable user can be more useful than any factual information retrieved by a search engine. In this paper we propose a model-driven approach for the specification of crowd-search tasks, i.e. activities where real people, in real time, take part in the generalized search process that involves search engines. In particular, we define two models: the "Query Task Model", representing the meta-model of the query that is submitted to the crowd and the associated answers; and the "User Interaction Model", which shows how the user can interact with the query model to fulfill her needs. Our solution allows for a top-down design approach, from the crowd-search task design down to the crowd answering system design. Our approach also enables automatic code generation, thus leading to quick prototyping of search applications based on human responses collected over social networking or crowdsourcing platforms.
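
    The abstract only names the two models; as a rough, hypothetical illustration of what a machine-processable "Query Task Model" could look like, the Python sketch below represents a crowd query as a set of typed fields plus collected answers, with a toy code-generation step. All class and function names here are invented for illustration, not taken from the paper.

        from dataclasses import dataclass, field
        from typing import List

        # Hypothetical rendering of a "Query Task Model": a crowd query is a
        # typed set of fields plus the answers collected for it.
        @dataclass
        class QueryField:
            name: str                   # e.g. "restaurant_name"
            datatype: str               # e.g. "text", "choice", "rating"
            options: List[str] = field(default_factory=list)

        @dataclass
        class Answer:
            worker_id: str
            values: dict                # field name -> value given by the worker

        @dataclass
        class QueryTask:
            question: str               # natural-language question for the crowd
            fields: List[QueryField]
            answers: List[Answer] = field(default_factory=list)

        def generate_form(task: QueryTask) -> str:
            """Toy code-generation step: emit a plain-text form for the task."""
            lines = [task.question]
            for f in task.fields:
                hint = " (%s)" % "/".join(f.options) if f.options else ""
                lines.append("- %s [%s]%s: ____" % (f.name, f.datatype, hint))
            return "\n".join(lines)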

    Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa

    This paper presents a generic Bayesian framework that enables any deep learning model to actively learn from targeted crowds. Our framework inherits from recent advances in Bayesian deep learning, and extends existing work by considering the targeted crowdsourcing approach, where multiple annotators with unknown expertise contribute an uncontrolled (often limited) amount of annotations. Our framework leverages the low-rank structure in annotations to learn individual annotator expertise, which then helps to infer the true labels from noisy and sparse annotations. It provides a unified Bayesian model to simultaneously infer the true labels and train the deep learning model in order to reach optimal learning efficacy. Finally, our framework exploits the uncertainty of the deep learning model during prediction, as well as the annotators' estimated expertise, to minimize the number of annotations and annotators required for optimally training the deep learning model. We evaluate the effectiveness of our framework for intent classification in Alexa (Amazon's personal assistant), using both synthetic and real-world datasets. Experiments show that our framework can accurately learn annotator expertise, infer true labels, and effectively reduce the amount of annotations in model training compared to state-of-the-art approaches. We further discuss the potential of our proposed framework in bridging machine learning and crowdsourcing towards improved human-in-the-loop systems.
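
    The paper's low-rank Bayesian model is not spelled out in the abstract; the sketch below is a much simpler stand-in for the same core idea, a Dawid-Skene-style iteration that jointly estimates annotator accuracy and true labels from sparse binary votes. The data layout and all names are assumptions made for this illustration.

        import numpy as np

        # Simplified stand-in for joint label/expertise inference: an iterative,
        # accuracy-weighted vote (Dawid-Skene style) over binary labels.
        # votes[i] maps annotator id -> label (0/1) for item i; sparse is fine.
        def infer_labels(votes, n_iter=10):
            annotators = {a for v in votes for a in v}
            acc = {a: 0.8 for a in annotators}      # initial expertise guess
            labels = np.full(len(votes), 0.5)
            for _ in range(n_iter):
                # E-step: accuracy-weighted log-odds vote per item
                for i, v in enumerate(votes):
                    score = sum((1 if y == 1 else -1) * np.log(acc[a] / (1 - acc[a]))
                                for a, y in v.items())
                    labels[i] = 1.0 / (1.0 + np.exp(-score))   # P(label = 1)
                # M-step: an annotator's accuracy is their agreement with the
                # current soft labels, clipped away from 0 and 1
                for a in annotators:
                    agree = [labels[i] if v[a] == 1 else 1 - labels[i]
                             for i, v in enumerate(votes) if a in v]
                    acc[a] = float(np.clip(np.mean(agree), 0.05, 0.95))
            return labels, acc

        # Example: three annotators of unknown expertise, two items
        labels, acc = infer_labels([{"a1": 1, "a2": 1, "a3": 0},
                                    {"a1": 0, "a3": 0}])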

    NoXperanto: Crowdsourced Polyglot Persistence

    This paper proposes NoXperanto, a novel crowdsourcing approach to querying over data collections managed in polyglot persistence settings. The main contribution of NoXperanto is the ability to solve complex queries involving different data stores by exploiting queries from expert users (i.e. a crowd of database administrators, data engineers, domain experts, etc.), assuming that these users can submit meaningful queries. NoXperanto exploits the results of meaningful queries in order to facilitate forthcoming query answering processes. In particular, query results are used to: (i) help non-expert users in using the multi-database environment and (ii) improve the performance of the multi-database environment, which not only uses disk and memory resources but also relies heavily on network bandwidth. NoXperanto employs a layer that keeps track of the information produced by the crowd, modeled as a Property Graph and managed in a Graph Database Management System (GDBMS).
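
    As a hypothetical illustration of such a crowd-knowledge layer, the sketch below records expert queries and the data stores they touch as a property graph, using networkx in place of a full GDBMS. The node and edge attribute names are invented for this example and are not NoXperanto's actual schema.

        import networkx as nx

        # Crowd-knowledge layer sketched as a property graph: expert queries
        # and the data stores they touch. Attribute names are illustrative.
        g = nx.MultiDiGraph()

        def record_query(g, query_id, text, stores, author):
            g.add_node(query_id, kind="query", text=text, author=author)
            for store in stores:            # e.g. "postgres", "mongodb", "neo4j"
                g.add_node(store, kind="datastore")
                g.add_edge(query_id, store, kind="reads_from")

        record_query(g, "q1", "top customers joined with their social graph",
                     ["postgres", "neo4j"], author="dba_alice")

        # A later non-expert request touching the same stores can reuse q1's
        # recorded result instead of hitting every backend again:
        reusable = [q for q, d in g.nodes(data=True)
                    if d.get("kind") == "query"
                    and {"postgres", "neo4j"} <= set(g.successors(q))]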

    Augmenting the performance of image similarity search through crowdsourcing

    Crowdsourcing is defined as “outsourcing a task that is traditionally performed by an employee to a large group of people in the form of an open call” (Howe 2006). Many platforms have been designed to perform several types of crowdsourcing, and studies have shown that results produced by crowds on crowdsourcing platforms are generally accurate and reliable. Crowdsourcing can provide a fast and efficient way to use the power of human computation to solve problems that are difficult for machines. Of the several microtasking crowdsourcing platforms available, we decided to perform our study using Amazon Mechanical Turk. In the context of our research we studied the effect of user interface design, and its corresponding cognitive load, on the performance of crowd-produced results. Our results highlighted the importance of a well-designed user interface for crowdsourcing performance. Using crowdsourcing platforms such as Amazon Mechanical Turk, we can employ humans to solve problems that are difficult for computers, such as image similarity search. However, in tasks like image similarity search, it is more efficient to design a hybrid human–machine system. In the context of our research, we studied the effect of involving the crowd on the performance of an image similarity search system and proposed a hybrid human–machine image similarity search system. Our proposed system uses machine power to perform heavy computations and to search for similar images within the image dataset, and uses crowdsourcing to refine the results. We designed our content-based image retrieval (CBIR) system using the SIFT, SURF, SURF128 and ORB feature detectors/descriptors and compared the performance of the system with each. Our experiments confirmed that crowdsourcing can dramatically improve CBIR system performance.
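
    A minimal sketch of the machine stage of such a hybrid pipeline, assuming OpenCV and placeholder image paths: rank dataset images by ORB descriptor matches against the query, then hand the top candidates to the crowd for refinement. This is an illustration of the general approach, not the paper's exact system.

        import cv2

        # Machine stage: rank dataset images by ORB descriptor matches to the
        # query; the top-k then go to the crowd (e.g. as Mechanical Turk
        # comparison microtasks) for refinement.
        orb = cv2.ORB_create()
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def descriptors(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                return None
            _, des = orb.detectAndCompute(img, None)
            return des

        def orb_score(query_des, path):
            des = descriptors(path)
            if des is None or query_des is None:
                return 0
            return len(bf.match(query_des, des))   # more matches = more similar

        query_des = descriptors("query.jpg")                 # placeholder path
        dataset = ["img1.jpg", "img2.jpg", "img3.jpg"]       # placeholder paths
        candidates = sorted(dataset, key=lambda p: orb_score(query_des, p),
                            reverse=True)[:10]
        # `candidates` is what the crowd would refine in the human stage.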

    Crowd-Computer Interaction, a Topic in Need of a Model

    Crowd-Computer Interaction (CCI) is a form of human-computer interaction (HCI) in which single actions from many individuals are aggregated to produce a result that would not otherwise be achievable by any one individual alone. Several research questions remain open regarding CCI: to what extent are the principles and heuristics of interaction design under the one-user-one-interface paradigm applicable to crowds interacting with a network of interfaces? If a system is usable for individuals, will it be usable for crowds? Should designs be centered on the individual or on the crowd? A model of how crowds interact with computers is needed to start finding answers; this paper discusses that need, along with some research proposals for developing such a model.

    BPMN task instance streaming for efficient micro-task crowdsourcing processes

    The Business Process Model and Notation (BPMN) is a standard for modeling and executing business processes with human or machine tasks. The semantics of tasks is usually discrete: a task has exactly one start event and one end event; for multi-instance tasks, all instances must complete before an end event is emitted. We propose a new task type and streaming connector for crowdsourcing, able to run hundreds or thousands of micro-task instances in parallel. The two constructs provide task streaming semantics that is new to BPMN, enable the modeling and efficient enactment of complex crowdsourcing scenarios, and are applicable beyond the special case of crowdsourcing. We implement the necessary design and runtime support on top of CrowdFlower, demonstrate the viability of the approach via a case study, and report on a set of runtime performance experiments.
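
    The streaming semantics can be illustrated independently of BPMN: in the hypothetical Python sketch below, downstream processing starts as each micro-task instance completes, instead of waiting for the whole multi-instance batch to finish, which is the behavioral difference the proposed constructs introduce. Function names and the parallelism figure are assumptions.

        from concurrent.futures import ThreadPoolExecutor, as_completed

        # Streaming vs. discrete multi-instance semantics: results are emitted
        # one by one as instances complete, so the next step can start
        # consuming immediately rather than after all instances have finished.
        def crowd_microtask(item):
            ...                    # post one unit of work to the crowd platform
            return "label(%s)" % item

        def stream_instances(items, parallelism=100):
            with ThreadPoolExecutor(max_workers=parallelism) as pool:
                futures = [pool.submit(crowd_microtask, it) for it in items]
                for fut in as_completed(futures):   # yield in completion order
                    yield fut.result()

        for result in stream_instances(range(1000)):
            pass   # downstream task consumes each instance result here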