115 research outputs found

    A data-driven analysis of workers' earnings on Amazon Mechanical Turk

    A growing number of people are working as part of online crowd work, which is often thought to be low-wage work. However, we know little about the wage distribution in practice and what causes low or high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only ~2 USD/h, and that only 4% earned more than 7.25 USD/h. While the average requester pays more than 11 USD/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for: e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
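
    As the abstract notes, the wage estimate depends on how unpaid time is counted. A minimal sketch of that accounting is below; the record fields and sample values are illustrative assumptions, not the study's actual data schema.

```python
# Hypothetical sketch: median hourly wage across workers, with and without
# unpaid time (searching, rejected work). Field names are assumptions.
from statistics import median

task_log = [
    # reward in USD; durations in seconds; status in {"approved", "rejected"}
    {"worker": "w1", "reward": 0.10, "work_s": 120, "search_s": 40, "status": "approved"},
    {"worker": "w1", "reward": 0.50, "work_s": 600, "search_s": 90, "status": "rejected"},
    {"worker": "w2", "reward": 0.25, "work_s": 200, "search_s": 30, "status": "approved"},
]

def median_hourly_wage(records, include_unpaid=True):
    """Median across workers of earnings per hour; optionally count unpaid time."""
    totals = {}  # worker -> [earned_usd, time_seconds]
    for r in records:
        earned, seconds = totals.get(r["worker"], (0.0, 0.0))
        paid = r["reward"] if r["status"] == "approved" else 0.0
        time_s = r["work_s"] + r["search_s"]
        if not include_unpaid:
            # optimistic view: ignore search time and time sunk into rejected work
            time_s = r["work_s"] if r["status"] == "approved" else 0
        totals[r["worker"]] = [earned + paid, seconds + time_s]
    return median(e / (s / 3600) for e, s in totals.values() if s > 0)

print(median_hourly_wage(task_log, include_unpaid=True))   # conservative estimate
print(median_hourly_wage(task_log, include_unpaid=False))  # optimistic estimate
```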

    Classifying humans: the indirect reverse operativity of machine vision

    Classifying is human. Classifying is also what machine vision technologies do. This article analyses the cybernetic loop between human and machine classification by examining artworks that depict instances of bias when machine vision is classifying humans and when humans classify visual datasets for machines. I propose the term ‘indirect reverse operativity’ – a concept built upon Ingrid Hoelzl’s and Remi Marie’s notion of ‘reverse operativity’ – to describe how classifying humans and machine classifiers operate in cybernetic information loops. Indirect reverse operativity is illustrated through two projects I have co-created: the Database of Machine Vision in Art, Games and Narrative and the artwork Suspicious Behavior. Through ‘artistic audits’ of selected artworks, a data analysis of how classification is represented in 500 creative works, and a reflection on my own artistic research in the Suspicious Behavior project, this article confronts and complicates assumptions of when and how bias is introduced into and propagates through machine vision classifiers. By examining cultural conceptions of machine vision bias which exemplify how humans operate machines and how machines operate humans through images, this article contributes fresh perspectives to the emerging field of critical dataset studies.

    Privacy Calculus of Providers on Peer-to-Peer Platforms: The Effect of Media Richness on Information Disclosure When Advertising Oneself

    In today’s e-commerce landscape, peer-to-peer (P2P) platforms are shaping economic and social interactions. They present challenges and opportunities for users, who can be consumers and providers at the same time. Transactions on P2P platforms (offering, e.g., services, accommodation, or a ride) differ in multiple ways from those in conventional P2P e-commerce (offering, e.g., products on eBay). Since private individuals are the providers on P2P platforms, we need to consider how they balance their preferences for privacy against expected benefits (the privacy calculus) when advertising themselves. We conduct online experiments to examine how the intention to disclose information is affected by different media formats (text, voice, image, video) with varying richness of possible informational cues (e.g., accents, facial expressions, etc.). We find that media richness, perceived usefulness for self, and expected usefulness for others affect information sharing from a provider’s perspective.
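
    The analysis such an experiment implies can be sketched with an ordinary regression of disclosure intention on the experimental condition. The sketch below is not the authors' code; it uses statsmodels, and the column names and values are made-up stand-ins for the actual measures.

```python
# Hedged sketch: regress disclosure intention on the media-format condition
# plus the two usefulness perceptions. Data below is illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "media": ["text", "voice", "image", "video"] * 5,  # richness conditions
    "useful_self":   [3, 4, 4, 5, 2, 3, 5, 5, 3, 4, 4, 5, 2, 4, 4, 5, 3, 3, 5, 4],
    "useful_others": [2, 3, 4, 4, 3, 3, 4, 5, 2, 4, 3, 5, 3, 3, 4, 4, 2, 4, 4, 5],
    "disclosure":    [2, 3, 4, 5, 2, 3, 4, 5, 3, 3, 4, 4, 2, 4, 4, 5, 2, 3, 5, 5],
})

# Media format enters as a categorical factor, with plain text
# (the least rich format) as the baseline condition.
model = smf.ols(
    "disclosure ~ C(media, Treatment('text')) + useful_self + useful_others",
    data=df,
).fit()
print(model.summary())
```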

    Deception about study purpose does not affect participant behavior

    The use of deception in research is divisive along disciplinary lines. Whereas psychologists argue that deception may be necessary to obtain unbiased measures, economists hold that deception can generate suspicion of researchers, invalidating measures and ‘poisoning’ the participant pool for others. However, experimental studies on the effects of deception, notably false-purpose deception, the most common form of experimental deception, are scarce. Challenges with participant attrition and avoiding confounds with a form of deception in which two related studies are presented as unrelated likely explain this scarcity. Here, we avoid these issues, testing within an experiment to what extent false-purpose deception affects honesty. We deploy two commonly used incentivized measures of honesty and unethical behavior: coin-flip and die-roll tasks. Across two pre-registered studies with over 2000 crowdsourced participants, we found that false-purpose deception did not affect honesty in either task, even when we deliberately provoked suspicion of deception. Past experience of deception also had no bearing on honesty. However, incentivized measures of norms indicated that many participants had reservations about researcher use of false-purpose deception in general, even though it is often considered the least concerning form of deception. Together, these findings suggest that while false-purpose deception is not fundamentally problematic in the context of measuring honesty, it should only be used as a method of last resort. Our results motivate further experimental research to study the causal effects of other forms of deception, and other potential spillovers.
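
    Because individual lies in coin-flip and die-roll tasks are unobservable, honesty in such designs is typically measured at the group level: if reports of the paid-off outcome exceed the chance rate, the excess estimates aggregate over-reporting. A hedged sketch with invented numbers:

```python
# Illustrative only: group-level test for over-reporting in a coin-flip task
# where "heads" earns a bonus. Counts below are made up.
from scipy.stats import binomtest

n_reports = 1000         # participants in one condition (hypothetical)
n_heads_reported = 562   # reports of the incentivized outcome (hypothetical)

test = binomtest(n_heads_reported, n_reports, p=0.5, alternative="greater")
excess = n_heads_reported / n_reports - 0.5

print(f"p-value for over-reporting: {test.pvalue:.4f}")
print(f"estimated excess reports of the paid outcome: {excess:.1%}")
# Comparing this excess between deception and no-deception conditions
# (e.g., via a two-proportion test) is how one would test whether
# false-purpose deception shifts measured honesty.
```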

    Prisoners training AI: Ghosts, Humans and Values in Data Labour

    Despite accounts of how artificial intelligence (AI) is replacing human labour, constant efforts are needed to keep AI-based automation running. In this chapter, we are particularly interested in data work that supports processes of automation. We explore an unconventional arrangement in which Finnish prisoners annotate text to produce training data for a local AI firm. The use of prison labour to train AI invites straightforward conclusions of exploitation of the marginalised. On a closer look, however, attention is drawn to local and situational variations of data labour: how high-tech development can be married with humane penal policies, and low-cost labour with rehabilitative aspirations. We argue for an approach that can hold together seemingly contradictory value aims and open novel ways of exploring processes of automated decision-making. By acknowledging what is of value to the different parties involved, we can begin to see alternative paths forward in the study of automation.

    TurkScanner: Predicting the Hourly Wage of Microtasks

    Workers in crowd markets struggle to earn a living. One reason for this is that it is difficult for workers to accurately gauge the hourly wages of microtasks, and they consequently end up performing labor with little pay. In general, workers are provided with little information about tasks and are left to rely on noisy signals, such as the textual description of the task or the rating of the requester. This study explores various computational methods for predicting the working times (and thus hourly wages) required for tasks, based on data collected from other workers completing crowd work. We provide the following contributions: (i) a data collection method for gathering real-world training data on crowd-work tasks and the times required for workers to complete them; (ii) TurkScanner, a machine learning approach that predicts the working time necessary to complete a task (and can thus implicitly provide the expected hourly wage). We collected 9,155 data records using a web browser extension installed by 84 Amazon Mechanical Turk workers, and explored the challenge of accurately recording working times both automatically and by asking workers. TurkScanner was created using ~150 derived features, and was able to predict the hourly wages of 69.6% of all the tested microtasks within a 75% error. Directions for future research include observing the effects of tools on people's working practices, adapting this approach to a requester tool for better price setting, and predicting other elements of work (e.g., the acceptance likelihood and worker task preferences). Published in the Proceedings of the 28th International Conference on World Wide Web (WWW '19), San Francisco, CA, USA, May 13-17, 2019.
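
    The core TurkScanner idea (regress working time on task features, then derive the implied hourly wage) can be sketched as follows. This is not the authors' pipeline: the four toy features and the synthetic data are assumptions standing in for the ~150 derived features and the collected records.

```python
# Sketch of the TurkScanner approach: predict task working time from task
# features, then convert reward / predicted time into an hourly wage.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Toy features standing in for the paper's ~150 derived features:
X = np.column_stack([
    rng.integers(50, 2000, n),   # description length (chars)
    rng.uniform(1, 5, n),        # requester rating
    rng.uniform(0.01, 2.0, n),   # reward in USD
    rng.integers(1, 40, n),      # number of form fields
])
# Synthetic ground truth: time grows with form fields and description length.
work_seconds = 30 + 8 * X[:, 3] + 0.05 * X[:, 0] + rng.normal(0, 20, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, work_seconds, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred_seconds = model.predict(X_te)
implied_hourly_wage = X_te[:, 2] / (pred_seconds / 3600)  # USD per hour
print(implied_hourly_wage[:5].round(2))
```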

    Richness of IT Use Operationalization: A Conceptual Replication

    Use of information technology (IT) remains a key concern for organizations. This article presents a conceptual replication of Burton-Jones and Straub’s (2006) study, exploring the effect of IT Use operationalization richness (lean versus rich) on Performance. We used 352 valid responses collected from Amazon MTurk through an online survey. Consistent with the original study, the hypothesis was tested using the Structural Equation Modeling technique. Our results, which support the same hypothesis tested in the original study, suggest that the richer the IT Use operationalization, the higher the individual Performance.
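
    A hedged sketch of the analysis style (structural equation modeling with a latent Use construct predicting a latent Performance construct), using the open-source semopy package rather than the authors' tooling; the indicator names and synthetic data are assumptions.

```python
# Illustrative SEM: latent Use (three indicators) -> latent Performance
# (two indicators), estimated with semopy on synthetic data.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(1)
n = 352  # sample size reported in the abstract
latent_use = rng.normal(size=n)
df = pd.DataFrame({
    "use1":  latent_use + rng.normal(scale=0.5, size=n),
    "use2":  latent_use + rng.normal(scale=0.5, size=n),
    "use3":  latent_use + rng.normal(scale=0.5, size=n),
    "perf1": 0.6 * latent_use + rng.normal(scale=0.7, size=n),
    "perf2": 0.6 * latent_use + rng.normal(scale=0.7, size=n),
})

desc = """
Use =~ use1 + use2 + use3
Performance =~ perf1 + perf2
Performance ~ Use
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # includes the Performance ~ Use path estimate
```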

    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making

    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values (legitimacy, dignity, and so forth) are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent of whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making and whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.
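
    The article's 2x2 framing (actual versus apparent human involvement) can be made concrete in a few lines. The case labels below paraphrase the four situations described and are not the authors' terminology.

```python
# Minimal encoding of the actual-vs-apparent HITL alignment matrix.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionSystem:
    human_actually_in_loop: bool
    human_appears_in_loop: bool

    def classify(self) -> str:
        if self.human_actually_in_loop and self.human_appears_in_loop:
            return "aligned: genuinely human-mediated"
        if not self.human_actually_in_loop and not self.human_appears_in_loop:
            return "aligned: genuinely automated"
        if self.human_appears_in_loop:
            return "misaligned: appears human-mediated, actually automated"
        return "misaligned: appears automated, humans 'behind the curtain'"

# Example: a system that advertises human review but rubber-stamps model output.
print(DecisionSystem(human_actually_in_loop=False, human_appears_in_loop=True).classify())
```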
