
    Task Assignment with Autonomous and Controlled Agents

    We analyse assignment problems in which not all agents are controlled by the central planner. The autonomous agents search for vacant tasks guided by their own preference orders defined over subsets of the available tasks. The goal of the central planner is to maximise the total value of the assignment, taking into account the behaviour of the uncontrolled agents. This setting can be found in numerous real-world situations, ranging from organisational economics to "crowdsourcing" and disaster response. We introduce the Disjunctively Constrained Knapsack Game and show that its unique Nash equilibrium reveals the optimal assignment for the controlled agents. This result allows us to find the solution of the problem using mathematical programming techniques.
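    The abstract does not spell out the formal model, so the following is only a minimal sketch of a disjunctively constrained knapsack instance: tasks have values and weights, there is a capacity bound, and disjunctive constraints forbid certain pairs of tasks from being selected together. The task names, values, weights, capacity and conflict pairs are hypothetical, and brute force stands in for the mathematical programming techniques mentioned above.

```python
from itertools import combinations

# Hypothetical disjunctively constrained knapsack instance:
# maximise the total value of selected tasks subject to a capacity bound
# and to constraints forbidding certain pairs of tasks from coexisting.
values = {"t1": 8, "t2": 5, "t3": 6, "t4": 4}
weights = {"t1": 3, "t2": 2, "t3": 4, "t4": 1}
capacity = 6
conflicts = {("t1", "t3"), ("t2", "t4")}  # pairs that cannot both be chosen

def feasible(subset):
    """A subset is feasible if it fits the capacity and violates no conflict."""
    if sum(weights[t] for t in subset) > capacity:
        return False
    return not any({a, b} <= set(subset) for a, b in conflicts)

def best_assignment():
    """Enumerate all subsets and keep the feasible one with the highest value."""
    best, best_value = (), 0
    tasks = list(values)
    for r in range(len(tasks) + 1):
        for subset in combinations(tasks, r):
            if feasible(subset):
                v = sum(values[t] for t in subset)
                if v > best_value:
                    best, best_value = subset, v
    return best, best_value

print(best_assignment())  # (('t1', 't2'), 13) for the data above
```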

    From Task Classification Towards Similarity Measures for Recommendation in Crowdsourcing Systems

    Task selection in micro-task markets can be supported by recommender systems that help individuals find appropriate tasks. Previous work showed that when selecting a micro-task, semantic aspects, such as the required action and the comprehensibility, are rated as more important than factual aspects, such as the payment or the required completion time. This work lays a foundation for creating such similarity measures. To this end, we show that an automatic classification based on task descriptions is possible. Additionally, we propose similarity measures to cluster micro-tasks according to semantic aspects. Comment: Work in Progress Paper at HCOMP 201
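    The abstract does not give the concrete similarity measures, so the sketch below only illustrates one generic way to compare and cluster micro-task descriptions by their text. The example descriptions and the choice of TF-IDF with cosine similarity and k-means are assumptions for illustration, not the measures proposed in the paper.

```python
# Minimal sketch: group micro-task descriptions by textual similarity.
# The descriptions and the TF-IDF + k-means pipeline are illustrative
# assumptions, not the similarity measures proposed in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

task_descriptions = [
    "Transcribe the audio clip into English text",
    "Label each image with the objects it contains",
    "Translate the short paragraph from German to English",
    "Tag the photos that contain a dog or a cat",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(task_descriptions)

# Pairwise cosine similarity between task descriptions.
similarity = cosine_similarity(X)

# Group the tasks into two clusters based on their descriptions.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(similarity.round(2))
print(labels)
```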

    New Forms of Employment

    Societal and economic developments, such as the need for increased flexibility by both employers and workers, have resulted in the emergence of new forms of employment across Europe. These have transformed the traditional one-to-one relationship between employer and employee. They are also characterised by unconventional work patterns and places of work, or by the irregular provision of work. However, little is known about these ‘new forms of employment’, their distinctive features and the implications they have for working conditions and the labour market. To fill this knowledge gap, Eurofound conducted a Europe-wide mapping exercise to identify the emerging trends. This resulted in the categorisation of nine broad types of new employment forms. On the basis of this, the available literature and data were analysed; 66 case studies were also conducted and analysed to illustrate how these new employment forms operate in Member States and their effects on working conditions and the labour market.

    Empirical Methodology for Crowdsourcing Ground Truth

    The process of gathering ground truth data through human annotation is a major bottleneck in the use of information extraction methods for populating the Semantic Web. Crowdsourcing-based approaches are gaining popularity in the attempt to solve the issues related to the volume of data and the lack of annotators. Typically these practices use inter-annotator agreement as a measure of quality. However, in many domains, such as event detection, there is ambiguity in the data, as well as a multitude of perspectives on the information examples. We present an empirically derived methodology for efficiently gathering ground truth data in a diverse set of use cases covering a variety of domains and annotation tasks. Central to our approach is the use of CrowdTruth metrics that capture inter-annotator disagreement. We show that measuring disagreement is essential for acquiring a high-quality ground truth. We achieve this by comparing the quality of data aggregated with CrowdTruth metrics against majority vote, over a set of diverse crowdsourcing tasks: Medical Relation Extraction, Twitter Event Identification, News Event Extraction and Sound Interpretation. We also show that an increased number of crowd workers leads to growth and stabilization in the quality of annotations, going against the usual practice of employing a small number of annotators. Comment: in publication at the Semantic Web Journal
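    The CrowdTruth metric definitions are not given in this abstract, so the sketch below only contrasts plain majority-vote aggregation with a simple disagreement-aware view (a score per label rather than a single winning label). The annotations, labels and the per-label score are made-up illustrations, not the CrowdTruth metrics themselves.

```python
from collections import Counter

# Hypothetical worker annotations for one unit (e.g. a sentence to label).
# Only an illustration of majority vote versus a disagreement-aware view;
# not the CrowdTruth metric definitions.
annotations = ["cause", "cause", "treat", "cause", "treat", "none"]

counts = Counter(annotations)
total = len(annotations)

# Majority vote keeps only the single most frequent label and discards
# the information carried by the disagreeing workers.
majority_label, majority_count = counts.most_common(1)[0]

# A disagreement-aware view keeps a score per label, so ambiguous units
# (several labels with comparable support) remain visible downstream.
label_scores = {label: n / total for label, n in counts.items()}

print("majority vote:", majority_label)
print("label scores :", label_scores)
```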