
    A Novel Human Computation Game for Critique Aggregation

    We present a human computation game based on the popular board game Dixit. We ask the players not only for annotations, but for a direct critique of the output of an automated system. We present the results of the initial run of the game, in which the answers of 15 players were used to profile the mistakes of an aspect-based opinion mining system. We show that the gameplay allowed us to identify the major faults of the extracted opinions. The players' actions thus helped improve the opinion extraction algorithm.
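    The abstract does not include the aggregation procedure itself, so the following minimal sketch is an assumption about what critique aggregation of this kind can look like: player verdicts on each extracted opinion are tallied, and opinions that a majority rejects are flagged with their most common fault. The verdict categories and the `profile_mistakes` helper are illustrative, not the authors' taxonomy.

```python
from collections import Counter

# Hypothetical player critiques: each maps an extracted-opinion ID to a verdict.
critiques = [
    {"op1": "correct", "op2": "wrong_aspect", "op3": "wrong_polarity"},
    {"op1": "correct", "op2": "wrong_aspect", "op3": "correct"},
    {"op1": "correct", "op2": "wrong_aspect", "op3": "wrong_polarity"},
]

def profile_mistakes(critiques, min_agreement=0.5):
    """Tally per-opinion verdicts; report faults that a majority of players agree on."""
    votes = {}
    for player in critiques:
        for opinion_id, verdict in player.items():
            votes.setdefault(opinion_id, Counter())[verdict] += 1
    faults = {}
    for opinion_id, counter in votes.items():
        verdict, count = counter.most_common(1)[0]
        if verdict != "correct" and count / sum(counter.values()) > min_agreement:
            faults[opinion_id] = verdict
    return faults

print(profile_mistakes(critiques))
# {'op2': 'wrong_aspect', 'op3': 'wrong_polarity'}
```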

    Rethinking affordance

    Critical survey essay retheorising the concept of 'affordance' in the context of digital media. Lead article in a special issue on the topic, co-edited by the authors for the journal Media Theory.

    Behavioural Economics: Classical and Modern

    In this paper, the origins and development of behavioural economics, beginning with the pioneering works of Herbert Simon (1953) and Ward Edwards (1954), are traced, described and critically discussed in some detail. Two kinds of behavioural economics, classical and modern, are attributed, respectively, to the two pioneers. The mathematical foundations of classical behavioural economics are identified, largely, with the theory of computation and computational complexity; the corresponding mathematical basis for modern behavioural economics is, on the other hand, claimed to be a notion of subjective probability (at least at its origins in the works of Ward Edwards). The economic theories of behaviour, challenging various aspects of 'orthodox' theory, were decisively influenced by these two mathematical underpinnings.

    Keywords: Classical Behavioural Economics, Modern Behavioural Economics, Subjective Probability, Model of Computation, Computational Complexity, Subjective Expected Utility
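    Since the abstract turns on subjective probability as the mathematical basis of modern behavioural economics, the standard Edwards-era notion of subjective expected utility is worth stating; this is the textbook formulation, not notation taken from the paper itself.

```latex
% Subjective expected utility of an act $a$ over a state space $S$:
% $\pi$ is the agent's subjective probability over states, $u$ a utility
% function, and $c_{a,s}$ the consequence of choosing $a$ when state $s$ obtains.
\[
  \mathrm{SEU}(a) \;=\; \sum_{s \in S} \pi(s)\, u\!\left(c_{a,s}\right),
  \qquad \pi(s) \ge 0, \qquad \sum_{s \in S} \pi(s) = 1 .
\]
```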

    Computability and Evolutionary Complexity: Markets As Complex Adaptive Systems (CAS)

    The purpose of this Feature is to critically examine, and to contribute to, the burgeoning multidisciplinary literature on markets as complex adaptive systems (CAS). Three economists, Robert Axtell, Steven Durlauf and Arthur Robson, who have distinguished themselves as pioneers in different aspects of how the thesis of evolutionary complexity pertains to market environments, have contributed to this special issue. Axtell is concerned with the procedural aspects of attaining market equilibria in a decentralized setting and argues that principles on the complexity of feasible computation should rule in or out widely held models such as the Walrasian one. Robson puts forward the Red Queen principle, well known from evolutionary biology, as a possible explanation for the evolution of complexity itself. Durlauf examines some of the claims that have been made in the name of complex systems theory to see whether they present testable hypotheses for economic models. My overview aims to use the wider literature on complex systems to provide a conceptual framework within which to discuss the issues raised for economics in the above contributions and elsewhere. In particular, some assessment is made of the extent to which modern complex systems theory and its application to markets as CAS constitutes a paradigm shift from more mainstream economic analysis.
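    To make Axtell's point about decentralized equilibration concrete, here is a deliberately toy sketch (my own construction, not code from any of the contributions): agents with Cobb-Douglas preferences over two goods trade bilaterally at the midpoint of their marginal rates of substitution, a procedure that needs no Walrasian auctioneer yet pushes individual shadow prices toward a common value.

```python
import random

random.seed(0)

# Each agent holds (x, y) of two goods; with Cobb-Douglas utility u = x * y,
# the marginal rate of substitution (shadow price of x in units of y) is y / x.
agents = [[random.uniform(1, 10), random.uniform(1, 10)] for _ in range(100)]

def mrs(agent):
    x, y = agent
    return y / x

for _ in range(20000):
    a, b = random.sample(agents, 2)
    if abs(mrs(a) - mrs(b)) < 1e-6:
        continue                                  # shadow prices already agree
    buyer, seller = (a, b) if mrs(a) > mrs(b) else (b, a)
    price = (mrs(a) + mrs(b)) / 2                 # bilateral price, no auctioneer
    dx = 0.01 * seller[0]                         # a small amount of x changes hands
    if buyer[1] - dx * price <= 0:
        continue                                  # buyer cannot afford the trade
    buyer[0] += dx;  buyer[1] -= dx * price
    seller[0] -= dx; seller[1] += dx * price

prices = [mrs(agent) for agent in agents]
print(f"shadow-price dispersion after trading: {max(prices) - min(prices):.4f}")
```

    In runs of this sketch the price dispersion shrinks as trading proceeds, although, as Axtell stresses, the path-dependent outcome of such bilateral processes need not coincide with the Walrasian equilibrium of the same economy.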

    Introduction to social choice and welfare

    Social choice theory is concerned with the evaluation of alternative methods of collective decision-making, as well as with the logical foundations of welfare economics. In turn, welfare economics is concerned with the critical scrutiny of the performance of actual and/or imaginary economic systems, as well as with the critique, design and implementation of alternative economic policies. The Handbook of Social Choice and Welfare, edited by Kenneth Arrow, Amartya Sen and Kotaro Suzumura, presents, in two volumes, essays on past and ongoing work in social choice theory and welfare economics. This paper is written as an extensive introduction to the Handbook, with the purpose of placing the broad issues examined in the two volumes in better perspective, discussing the historical background of social choice theory, the vistas opened by Arrow's Social Choice and Individual Values, the famous "socialist planning" controversy, and the theoretical and practical significance of social choice theory.

    Keywords: social choice theory, welfare economics, socialist planning controversy, social welfare function, Arrovian impossibility theorems, voting schemes, implementation theory, equity and justice, welfare and rights, functioning and capability, procedural fairness
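    The kind of aggregation failure that animates Arrow's impossibility theorem shows up already in the classic Condorcet cycle; the three-voter profile below is the standard textbook example, not one drawn from the Handbook.

```python
# Three voters rank three alternatives from best to worst.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(1 for ballot in ballots if ballot.index(x) < ballot.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: majority preference cycles (A > B > C > A),
# so no Condorcet winner exists even though every individual ranking is transitive.
```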

    A systemic framework for the computational analysis of complex economies: An evolutionary-institutional perspective on the ontology, epistemology, and methodology of complexity economics

    This thesis introduces the idea of a symbiotic relationship between evolutionary-institutional and complexity economics. It consists of two major contributions. The first focuses on how the emerging research program of complexity economics can benefit from evolutionary-institutional theory. I show that complexity economics still lacks an adequate philosophical foundation. I explicate why such a foundation is needed if complexity economics is to promote further scientific progress, and why it must consist of an adequate ontology, epistemology, and methodology. The following parts of the thesis then draw upon institutionalist and social theory to develop these three aspects: I derive a definition of complex economic systems by identifying their essential properties. I then propose an epistemology based on the concepts of mechanism-based explanation, generative sufficiency, and an extended version of Uskali Mäki's concept of 'Models as Isolations and Surrogate Systems'. I continue with some methodological considerations and argue that the method of agent-based computational economic modeling must play a distinctive role in the analysis of complex economies. The second contribution of the thesis shows how evolutionary institutionalism can profit from a methodological transfer from complexity economics. In particular, I argue that agent-based computational modeling can advance institutionalism both as a formalization device and by providing theoretical concepts that are useful for institutionalist theorizing itself. The thesis closes by discussing a potential convergence of evolutionary-institutional and complexity economics and gives an outlook on avenues for further research.
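    As a minimal illustration of what agent-based computational economic modeling involves (a generic sketch of my own, not a model from the thesis), consider agents on a ring who imitate better-performing neighbours in a coordination game; a shared convention, one candidate reading of an emergent institution, arises from purely local interaction.

```python
import random
from collections import Counter

random.seed(1)

# Agents on a ring each follow convention "A" or "B" and earn 1 per
# neighbour following the same convention.
N = 50
conventions = [random.choice(["A", "B"]) for _ in range(N)]

def payoff(i):
    left, right = conventions[(i - 1) % N], conventions[(i + 1) % N]
    return (conventions[i] == left) + (conventions[i] == right)

for _ in range(2000):
    i = random.randrange(N)
    j = random.choice([(i - 1) % N, (i + 1) % N])
    if payoff(j) > payoff(i):            # imitate a more successful neighbour
        conventions[i] = conventions[j]

print(Counter(conventions))  # large clusters of a shared convention emerge
```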

    Spam elimination and bias correction: ensuring label quality in crowdsourced tasks

    Crowdsourcing has been proposed as a powerful mechanism for accomplishing large-scale tasks via anonymous workers online. It has been demonstrated as an effective and important approach for collecting labeled data in application domains that require human intelligence, such as image labeling, video annotation, and natural language processing. Despite this promise, one major challenge remains in crowdsourcing systems: the difficulty of controlling the quality of crowds. Workers usually have diverse education levels, personal preferences, and motivations, leading to unknown performance on a crowdsourced task: some are reliable, and some might provide noisy feedback. It is therefore natural to apply worker filtering, which recognizes and removes noisy workers, to crowdsourcing applications in order to obtain high-quality labels. The work presented in this dissertation discusses this area of research and proposes efficient probabilistic worker-filtering models that distinguish several types of poor-quality workers.

    Most existing work on worker filtering either concentrates only on binary labeling tasks or fails to separate low-quality workers whose label errors can be corrected from other spam workers (whose label errors cannot). We therefore first propose a Spam Removing and De-biasing Framework (SRDF) to handle worker filtering in labeling tasks with numerical label scales. The framework detects spam workers and biased workers separately. Biased workers are defined as those who tend to provide higher (or lower) labels than the truth, so their errors can be corrected; an iterative bias detection approach is introduced to recognize them. The spam-filtering algorithm eliminates three types of spam workers: random spammers, who provide random labels; uniform spammers, who give the same label for most items; and sloppy workers, who offer low-accuracy labels. Integrating the spam-filtering and bias-detection approaches into aggregation algorithms, which infer truths from the labels obtained from crowds, leads to high-quality consensus results.

    The common characteristic of random and uniform spammers is that they provide useless feedback without making an effort on the labeling task, so it is unnecessary to distinguish between them. In addition, under SRDF the removal of sloppy workers strongly affects the detection of biased workers. To address these problems, a different worker classification is presented in this dissertation, in which biased workers are treated as a subcategory of sloppy workers. An ITerative Self Correcting - Truth Discovery (ITSC-TD) framework is then proposed, which reliably recognizes biased workers in ordinal labeling tasks based on a probabilistic bias-detection model. ITSC-TD estimates true labels with an optimization-based truth discovery method that minimizes overall label error by assigning different weights to workers, as sketched below.
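    The optimization-based truth discovery step can be sketched generically; the alternating scheme below follows a common CRH-style formulation (weighted averaging plus log-ratio weights) and is an illustrative assumption, not the dissertation's ITSC-TD code.

```python
import math

# labels[worker][item] = numeric label from that worker (illustrative data).
labels = {
    "w1": {"i1": 4, "i2": 2, "i3": 5},
    "w2": {"i1": 4, "i2": 3, "i3": 5},
    "w3": {"i1": 1, "i2": 5, "i3": 1},   # a noisy worker
}
items = ["i1", "i2", "i3"]

weights = {w: 1.0 for w in labels}
for _ in range(10):
    # Step 1: estimate truths as weighted averages of the workers' labels.
    truths = {
        i: sum(weights[w] * labels[w][i] for w in labels) / sum(weights.values())
        for i in items
    }
    # Step 2: a worker's weight falls as its squared distance from the truths grows.
    errors = {
        w: sum((labels[w][i] - truths[i]) ** 2 for i in items) + 1e-9
        for w in labels
    }
    total = sum(errors.values())
    weights = {w: math.log(total / errors[w]) for w in labels}

print({i: round(t, 2) for i, t in truths.items()})   # truths pulled toward w1/w2
print({w: round(v, 2) for w, v in weights.items()})  # w3 ends with the lowest weight
```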
    The typical tasks posted on popular crowdsourcing platforms such as MTurk are simple: low in complexity, independent, and quick to complete. Complex tasks, by contrast, often require crowd workers to possess specialized skills in the task domain, and as a result they are more prone to poor-quality feedback from crowds than simple tasks are. We therefore propose a multiple-views approach for obtaining high-quality consensus labels in complex labeling tasks. Each view is defined as a labeling critique or rubric that guides workers toward the desired work characteristics or goals, and combining the view labels yields the overall estimated label for each item. The approach is developed under the hypothesis that a worker's performance may differ from one view to another, so different weights are assigned to different views for each worker. Additionally, the ITSC-TD framework is integrated into the multiple-views model to achieve high-quality estimated truths for each view.

    Next, we propose a Semi-supervised Worker Filtering (SWF) model to eliminate spam workers who assign random labels to items. SWF conducts worker filtering with a limited set of gold truths available a priori: each worker is associated with a spammer score, estimated via the developed semi-supervised model, and low-quality workers are efficiently detected by comparing that score with a predefined threshold (a minimal sketch follows this abstract). The efficiency of all the developed frameworks and models is demonstrated on simulated and real-world data sets. Compared with state-of-the-art methodologies in crowdsourcing, such as the expectation-maximization-based aggregation algorithm, GLAD, and the optimization-based truth discovery approach, up to 28.0% improvement is obtained in the accuracy of true-label estimation.
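    A gold-set spammer score of the kind the SWF model builds on can be sketched as follows; the mean-absolute-error scoring rule and the threshold value here are illustrative assumptions, not the dissertation's semi-supervised estimator.

```python
# Known true answers for a small gold set, plus hypothetical worker labels.
gold = {"g1": 3, "g2": 5, "g3": 1, "g4": 4}
worker_labels = {
    "w1": {"g1": 3, "g2": 5, "g3": 2, "g4": 4},   # mostly accurate
    "w2": {"g1": 1, "g2": 2, "g3": 5, "g4": 1},   # looks random
}

def spammer_score(answers, gold, scale=5):
    """Mean absolute error on the gold items, normalized to [0, 1]."""
    err = sum(abs(answers[g] - truth) for g, truth in gold.items())
    return err / (len(gold) * (scale - 1))

THRESHOLD = 0.35  # illustrative cutoff, not a value from the dissertation
for w, answers in worker_labels.items():
    score = spammer_score(answers, gold)
    print(f"{w}: score={score:.2f} -> {'flagged as spammer' if score > THRESHOLD else 'kept'}")
```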