
    The Disparate Effects of Strategic Manipulation

    When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval. Models of agent responsiveness, termed "strategic manipulation," analyze the interaction between a learner and agents in a world where all agents are equally able to manipulate their features in an attempt to "trick" a published classifier. In cases of real-world classification, however, an agent's ability to adapt to an algorithm is not simply a function of her personal interest in receiving a positive classification, but is bound up in a complex web of social factors that affect her ability to pursue certain action responses. In this paper, we adapt models of strategic manipulation to capture dynamics that may arise in a setting of social inequality wherein candidate groups face different costs to manipulation. We find that whenever one group's costs are higher than the other's, the learner's equilibrium strategy exhibits an inequality-reinforcing phenomenon wherein the learner erroneously admits some members of the advantaged group while erroneously excluding some members of the disadvantaged group. We also consider the effects of interventions in which a learner subsidizes members of the disadvantaged group, lowering their costs in order to improve her own classification performance. Here we encounter a paradoxical result: there exist cases in which providing a subsidy improves only the learner's utility while actually making both candidate groups worse off, even the group receiving the subsidy. Our results reveal the potentially adverse social ramifications of deploying tools that attempt to evaluate an individual's "quality" when agents' capacities to adaptively respond differ.
    Comment: 29 pages, 4 figures
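
    A minimal sketch of the model's core effect, under assumptions of our own choosing: two groups draw true scores from the same distribution but face different per-unit manipulation costs, and a single fixed threshold stands in for the learner's equilibrium strategy. The uniform scores, cost values, and threshold below are illustrative, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)
    BENEFIT = 1.0      # value to an agent of a positive classification
    TRUE_CUTOFF = 0.5  # scores at or above this mark a truly qualified agent

    def admitted(scores, unit_cost, threshold):
        """Agents close the gap to the threshold iff the cost is worth the benefit."""
        gap = np.maximum(threshold - scores, 0.0)
        manipulates = gap * unit_cost <= BENEFIT
        effective = np.where(manipulates, np.maximum(scores, threshold), scores)
        return effective >= threshold

    scores = rng.uniform(0.0, 1.0, 100_000)  # same ability distribution for both groups
    threshold = 0.8                          # stand-in for the learner's equilibrium bar

    for group, unit_cost in [("advantaged (cost 2)", 2.0), ("disadvantaged (cost 8)", 8.0)]:
        adm = admitted(scores, unit_cost, threshold)
        qualified = scores >= TRUE_CUTOFF
        fp = (adm & ~qualified).mean()  # unqualified but admitted
        fn = (~adm & qualified).mean()  # qualified but excluded
        print(f"{group}: erroneous admits {fp:.3f}, erroneous exclusions {fn:.3f}")

    With these numbers, low-cost agents can game their way past the bar from a true score of 0.3 upward (erroneous admits), while high-cost agents below roughly 0.675 cannot afford to reach it (erroneous exclusions), reproducing the inequality-reinforcing pattern in miniature.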

    The Role of Randomness and Noise in Strategic Classification

    We investigate the problem of designing optimal classifiers in the strategic classification setting, where classification is part of a game in which players can modify their features to attain a favorable classification outcome (while incurring some cost). The problem has previously been considered from a learning-theoretic perspective and from the algorithmic fairness perspective. Our main contributions include:
    1. Showing that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost incurred by qualified players manipulating their features to gain a better outcome), then randomized classifiers (ones where the probability of a given feature vector being accepted is strictly between 0 and 1) are necessary.
    2. Showing that in many natural cases, the efficiency-optimal solution has a structure in which players never change their feature vectors: the randomized classifier is arranged so that the gain in the probability of being classified as a 1 never justifies the expense of changing one's features.
    3. Observing that randomized classification is not a stable best response from the classifier's viewpoint, so the classifier cannot reap the benefits of randomization without creating instability in the system.
    4. Showing that in some cases a noisier signal leads to better equilibrium outcomes, improving both accuracy and fairness when multiple subpopulations with different feature-adjustment costs are involved. This is interesting from a policy perspective: it is hard to force institutions to stick to a particular randomized classification strategy (especially in a market with multiple classifiers), but it is possible to alter the information environment to make the feature signals inherently noisier.
    Comment: 22 pages. Appeared in FORC 2020
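
    A minimal sketch of point 2 above, under assumed parameters: when the randomized classifier's acceptance probability rises slowly relative to the cost of moving, every player's best response is to keep their features unchanged. The linear ramp, benefit, and cost are invented for illustration.

    import numpy as np

    BENEFIT = 1.0    # value of being classified as a 1
    UNIT_COST = 0.5  # cost per unit of feature change

    def accept_prob(x, lo=0.0, hi=4.0):
        """Randomized classifier: acceptance probability ramps linearly over [lo, hi]."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    def best_response(x):
        """Feature value maximizing expected benefit minus the cost of moving."""
        grid = np.linspace(-1.0, 6.0, 701)
        utility = BENEFIT * accept_prob(grid) - UNIT_COST * np.abs(grid - x)
        return grid[np.argmax(utility)]

    # The ramp's slope is 0.25, so a unit of movement buys 0.25 * BENEFIT in
    # acceptance probability but costs 0.5: no player changes their features.
    for x in [0.5, 1.5, 2.5, 3.5]:
        print(f"x = {x}: best response = {best_response(x):.2f}")

    Flattening the effective acceptance curve is also one informal way to read point 4: a noisier feature signal smooths the mapping from effort to outcome, shrinking the marginal payoff of manipulation.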

    Reconsidering Public Relations’ Infatuation with Dialogue: Why Engagement and Reconciliation Can Be More Ethical Than Symmetry and Reciprocity

    Advocates of dialogic communication have promoted two-way symmetrical communication as the most effective and ethical model for public relations. This article uses John Durham Peters's critique of dialogic communication to reconsider this infatuation with dialogue. We argue that dialogue's potential for selectivity and tyranny poses moral problems for public relations, and that dialogue's emphasis on reciprocal communication saddles public relations with ethically questionable quid pro quo relationships. We contend that dissemination can be more just than dialogue because it demands more integrity of the source and recognizes the freedom and individuality of the receiver. The type of communication, such as dialogue or dissemination, is less important than the mutual discovery of truth. Reconciliation, a new model of public relations, is proposed as an alternative to pure dialogue. Reconciliation recognizes and values individuality and differences, and integrity is no longer sacrificed at the altar of agreement.

    ILR Research in Progress 2003-04

    The production of scholarly research continues to be one of the primary missions of the ILR School. During a typical academic year, ILR faculty members published or had accepted for publication over 25 books, edited volumes, and monographs; 170 articles and chapters in edited volumes; and numerous book reviews. In addition, a large number of manuscripts were submitted for publication, presented at professional association meetings, or circulated in working paper form. Our faculty's research continues to find its way into the very best industrial relations, social science, and statistics journals.

    A review of data visualization: opportunities in manufacturing sequence management.

    Data visualization now benefits from developments in technologies that offer innovative ways of presenting complex data. These potentially have widespread application in communicating the complex information domains typical of manufacturing sequence management environments in global enterprises. In this paper the authors review the visualization functionalities, techniques, and applications reported in the literature, map these to the presentation requirements of manufacturing sequence information, and identify the opportunities available and likely development paths. Current leading-edge practice in dynamic updating and communication with suppliers is not being exploited in manufacturing sequence management, where it could provide significant benefits to manufacturing businesses. In the context of global manufacturing operations and broad-based user communities with differing needs served by common data sets, tool functionality is generally ahead of user application.

    Chen Shui-bian: on independence

    Chen Shui-bian achieved an international reputation for his promotion of Taiwan independence. Whilst that reputation may have been well earned, the analyses on which it is based are frequently flawed in two ways. First, by using an undifferentiated notion of independence, they tend to conflate sovereignty with less threatening expressions of Taiwanese identity and pro-democracy discourse. Second, by failing to take into account the impact of immediate strategic context, analysts ignore a fundamental element of democratic political communication. In our empirical analysis of more than 2,000 of Chen's speeches, we seek to avoid both flaws by unpacking the concept of independence and taking into account Chen's strategic relationship with his primary audiences. Our findings challenge popular portrayals of Chen, but more importantly they have strong implications for policy makers and students of political rhetoric with regard to current and future ROC presidents.

    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
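
    As a purely hypothetical illustration of the distinction, the sketch below fits a transparent surrogate to a black-box model's own predictions. The surrogate's weights give a sensible description of the rules (addressing inscrutability), but nothing in them says why those rules are what they are (nonintuitiveness remains). The data, models, and feature names are all invented for the example.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 3))                        # hypothetical applicant features
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)  # opaque ground-truth rule

    black_box = GradientBoostingClassifier().fit(X, y)

    # Global surrogate: refit a transparent model on the black box's outputs.
    surrogate = LogisticRegression().fit(X, black_box.predict(X))

    for name, weight in zip(["income", "tenure", "inquiries"], surrogate.coef_[0]):
        print(f"{name}: weight {weight:+.2f}")

    The printout is roughly the kind of description existing law asks for; whether income or tenure should drive the decision is the separate, normative question the Article presses.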