
    Revealing the Voices of Resistance: A Q-Methodology Study on Platform Workers in the Gig Economy

    While algorithmic management generates several benefits for platform companies, it also creates several issues for workers, which workers perceive as threats triggering different forms of resistance behavior. Although recent studies identify these issues and resistance behaviors, the perspective of the actual subject of resistance, i.e., the gig worker or group of gig workers engaging in resistant behavior, is not yet well understood. By adopting a Q-methodology mixed-method approach, this study seeks to identify resistance types of gig workers, explore their characteristics and similarities, and thereby give a voice to the subject of resistance. Based on 21 threats and 14 resistance behaviors identified in a literature review, we develop a Q-set containing 35 statements, which will be used for data collection with the goal of revealing the richness of the resistance phenomenon in the context of work in the gig economy.
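    The factor-analytic core of a Q-methodology study of this kind can be illustrated with a short sketch. The code below is a minimal, simplified illustration only: the synthetic Q-sorts, the 30-participant sample, the three-factor solution, and the loading cutoff are all assumptions for demonstration, not the authors' actual instrument or results, and a full Q analysis would normally add factor rotation (e.g., varimax) and flagging of defining sorts.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical Q-sorts: 30 gig workers each rank the 35 statements
# (threats and resistance behaviors) on a forced scale from -4 to +4.
n_participants, n_statements = 30, 35
q_sorts = rng.integers(-4, 5, size=(n_participants, n_statements)).astype(float)

# Q-methodology correlates persons, not variables: compute the
# participant-by-participant correlation matrix of the Q-sorts.
person_corr = np.corrcoef(q_sorts)

# Extract a few components; participants who score together on a component
# share a viewpoint, which the researcher interprets as a resistance type.
pca = PCA(n_components=3)
scores = pca.fit_transform(person_corr)

for f in range(scores.shape[1]):
    members = np.where(np.abs(scores[:, f]) > 1.0)[0]  # illustrative cutoff
    print(f"Candidate resistance type {f + 1}: participants {members.tolist()}")
```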

    From Apathy to Algoactivism: Worker Resistance to Algorithmic Control in Food Delivery Platforms

    Platforms in the gig economy rely on algorithmic control to manage their workforce, but recent scientific evidence has shown that workers have begun to resist this control. Due to a lack of focus and limited empirical data, the phenomenon of worker resistance to algorithmic control is still insufficiently understood. Based on a topic modeling approach applied to over 2 million text documents extracted from Reddit forums of different food-delivery platforms, we identify 14 resistance actions showing how food-delivery workers resist algorithmic control. Our study contributes to current research by expanding the understanding of resistance to algorithmic control in the gig economy, showing which resistance actions workers take, and discussing the concepts of individual opacity and collective knowledge as possible escalators and de-escalators of this resistance.
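    To make the method concrete, the following is a minimal sketch of a topic-modeling pipeline of the kind the abstract describes, using LDA over a bag-of-words representation. The toy corpus, preprocessing, and two-topic setting are illustrative assumptions; the study itself analyzes over 2 million Reddit documents, and mapping topics to the 14 resistance actions is an interpretive step by the researchers.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for posts scraped from food-delivery subreddits.
posts = [
    "declined the low pay order and logged off for the night",
    "we should all pause accepting orders until base pay improves",
    "the app hid the tip again so I stopped taking stacked orders",
    "sharing a spreadsheet of which zones actually pay out",
]

# Bag-of-words representation, then LDA to surface latent topics that the
# researcher would subsequently interpret as resistance actions.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {top_terms}")
```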

    Algorithmic Transparency: Concepts, Antecedents, and Consequences – A Review and Research Framework

    The widespread and growing use of algorithm-enabled technologies across many aspects of public and private life is increasingly sparking concerns about the lack of transparency regarding the inner workings of algorithms. This has led to calls for (more) algorithmic transparency (AT), which refers to the disclosure of information about algorithms to enable understanding, critical review, and adjustment. To set the stage for future research on AT, our study draws on previous work to provide a more nuanced conceptualization of AT, including the explicit distinction between AT as action and AT as perception. On this conceptual basis, we set forth to conduct a comprehensive and systematic review of the literature on AT antecedents and consequences. Subsequently, we develop an integrative framework to organize the existing literature and guide future work. Our framework consists of seven central relationships: (1) AT as action versus AT as perception; factors (2) triggering and (3) shaping AT as action; (4) factors shaping AT as perception; as well as AT as perception leading to (5) rational-cognitive and (6) affective-emotional responses, and to (7) (un-)intended behavioral effects. Building on the review insights, we identify and discuss notable research gaps and inconsistencies, along with resulting opportunities for future research.

    A New Era of Control: Understanding Algorithmic Control in the Gig Economy

    With the promise of autonomy, flexibility, and “being your own boss”, the gig economy is growing to be one of the most important economic and social developments of our time. This growth is possible due to the platforms’ reliance on algorithmic control, which comprises the use of algorithmic technologies to control and align workers’ behavior. Conducting a multiple-case study on the use of algorithmic control in two app-work platforms (Uber & Mjam) and two crowdwork platforms (Upwork & Freelancer) on the basis of established control concepts, we develop a holistic understanding of algorithmic control and show how platforms realize this new form of control along three dimensions: control allocation, control formalization, and control adaptiveness. We also contribute by introducing the concepts of control artifacts and internalized control as a step forward in explaining algorithmic control phenomena.

    Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias that we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
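    The contrast between pedagogical and decompositional explanations mentioned above can be sketched in a few lines: a pedagogical explanation learns a simple surrogate from a black-box model's inputs and outputs rather than inspecting its internals. The dataset, model choices, and surrogate depth below are assumptions for illustration and are not drawn from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical decision task standing in for an opaque algorithmic system.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# The "black box" whose internals are not inspected (the decompositional route
# would instead take this model apart).
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Pedagogical route: train a shallow, human-readable tree on the black
# box's own predictions, approximating its behavior from the outside.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```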

    Technology and democracy: a paradox wrapped in a contradiction inside an irony

    Democracy is in retreat around the globe. Many commentators have blamed the Internet for this development, whereas others have celebrated the Internet as a tool for liberation, with each opinion being buttressed by supporting evidence. We try to resolve this paradox by reviewing some of the pressure points that arise between human cognition and the online information architecture, and their fallout for the well-being of democracy. We focus on the role of the attention economy, which has monetised dwell time on platforms, and the role of algorithms that satisfy users’ presumed preferences. We further note the inherent asymmetry in power between platforms and users that arises from these pressure points, and we conclude by sketching out the principles of a new Internet with democratic credentials.