86 research outputs found

    Setting Fair Incentives to Maximize Improvement

    We consider the problem of helping agents improve by setting goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach (or do nothing if no target level is within reach). We consider two models: the common improvement capacity model, where agents share the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and we consider fairness objectives when the agents belong to different underlying populations. We prove algorithmic, learning, and structural results for each model. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels: adding a new target level may decrease the total amount of improvement, because agents who previously tried hard to reach a distant target now have a closer target to reach and hence improve less. This is especially challenging when considering multiple groups, because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for some group, failing the fairness objective. Accounting for these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results hold in both the common and individualized improvement capacity models. Furthermore, despite the non-monotonicity property and the interference between target levels, we show that a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement holds only in the common improvement capacity model, and we give counterexamples in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
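    The non-monotonicity described above is easy to reproduce in a few lines. The sketch below is our own toy rendering of the common improvement capacity model (numeric skill levels, each agent moving to the lowest target above its level whose gap is within the shared capacity); it only illustrates why adding a target can lower total improvement, and is not an algorithm from the paper.

```python
# Toy rendering of the common improvement capacity model, under assumptions of
# our own: skill levels and targets are numbers, and each agent moves to the
# lowest target strictly above its level whose gap is at most the shared
# capacity, otherwise it stays put. Illustration only, not the paper's algorithm.

def social_welfare(skills, targets, capacity):
    """Total improvement when each agent reaches its closest feasible target."""
    total = 0
    for s in skills:
        reachable = [t for t in targets if s < t <= s + capacity]
        if reachable:
            total += min(reachable) - s  # closest target above the agent
    return total

skills = [0, 0, 0]
capacity = 4

# A single ambitious target: every agent stretches all the way to it.
print(social_welfare(skills, targets=[4], capacity=capacity))     # 12

# Adding a closer target *decreases* welfare: agents now settle for 1.
print(social_welfare(skills, targets=[1, 4], capacity=capacity))  # 3
```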

    Screening with Disadvantaged Agents

    Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on imperfect test scores that reflect both skill and effort. Students are limited by a budget on effort, with disadvantaged students having tighter budgets. This raises a challenge for the principal: among agents with similar test scores, it is difficult to distinguish between students with high skills and students with large budgets. Our main result is an optimal stochastic mechanism that maximizes the gains achieved from admitting "high-skill" students minus the costs incurred from admitting "low-skill" students when considering two skill types and n budget types. Our mechanism makes it possible to give a higher probability of admission to a high-skill student than to a low-skill one, even when the low-skill student can potentially obtain a higher test score due to a larger budget. Further, we extend our admission problem to a setting in which students uniformly receive an exogenous subsidy that increases their budget for effort. This extension can only help the school's admission objective, and we show that the optimal mechanism with exogenous subsidies has the same characterization as optimal mechanisms for the original problem.
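    The core difficulty, that test scores conflate skill and budget, can be seen in a deliberately simple toy model. The modeling choices below (a score equal to skill plus effort, effort capped by the budget, and the specific numbers) are hypothetical and ours, not the paper's formal model or mechanism.

```python
# Hypothetical toy model of the screening difficulty: a student's best test
# score is skill plus the effort they can afford, and effort is capped by the
# budget. Not the paper's mechanism; only a picture of why scores alone cannot
# separate skill from budget.

def best_test_score(skill, budget):
    """Highest score a student reaches by spending their whole effort budget."""
    return skill + budget

advantaged_low_skill = best_test_score(skill=1.0, budget=3.0)      # 4.0
disadvantaged_high_skill = best_test_score(skill=2.0, budget=1.0)  # 3.0

# The low-skill student with the larger budget outscores the high-skill
# student, so any deterministic score threshold misranks them; this is the
# situation the paper's stochastic admission rule is designed to handle.
print(advantaged_low_skill, disadvantaged_high_skill)  # 4.0 3.0
```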

    Operationalizing fairness for responsible machine learning

    As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, there is a growing awareness of its potential for unfairness. A large body of recent work has focused on proposing formal notions of fairness in ML, as well as approaches to mitigate unfairness. However, there is a growing disconnect between the ML fairness literature and the need to operationalize fairness in practice. This thesis addresses the need for responsible ML by developing new models and methods to address challenges in operationalizing fairness in practice. Specifically, it makes the following contributions. First, we tackle a key assumption in the group fairness literature that sensitive demographic attributes such as race and gender are known upfront, and can be readily used in model training to mitigate unfairness. In practice, factors like privacy and regulation often prohibit ML models from collecting or using protected attributes in decision making. To address this challenge, we introduce the novel notion of computationally-identifiable errors and propose Adversarially Reweighted Learning (ARL), an optimization method that seeks to improve the worst-case performance over unobserved groups, without requiring access to the protected attributes in the dataset. Second, we argue that while group fairness notions are a desirable fairness criterion, they are fundamentally limited as they reduce fairness to an average statistic over pre-identified protected groups. In practice, automated decisions are made at an individual level, and can adversely impact individual people irrespective of the group statistic. We advance the paradigm of individual fairness by proposing iFair (individually fair representations), an optimization approach for learning a low-dimensional latent representation of the data with two goals: to encode the data as well as possible, while removing any information about protected attributes in the transformed representation. Third, we advance the individual fairness paradigm, which requires that similar individuals receive similar outcomes. However, similarity metrics computed over the observed feature space can be brittle, and inherently limited in their ability to accurately capture similarity between individuals. To address this, we introduce a novel notion of fairness graphs, wherein pairs of individuals can be identified as similar with respect to the ML objective. We cast the problem of individual fairness into graph embedding, and propose PFR (pairwise fair representations), a method to learn a unified pairwise fair representation of the data. Fourth, we tackle the challenge that production data after model deployment is constantly evolving. As a consequence, in spite of the best efforts in training a fair model, ML systems can be prone to failure risks due to a variety of unforeseen reasons. To ensure responsible model deployment, potential failure risks need to be predicted, and mitigation actions need to be devised, for example, deferring to a human expert when uncertain or collecting additional data to address the model's blind spots. We propose Risk Advisor, a model-agnostic meta-learner to predict potential failure risks and to give guidance on the sources of uncertainty inducing the risks, by leveraging information-theoretic notions of aleatoric and epistemic uncertainty. This dissertation brings ML fairness closer to real-world applications by developing methods that address key practical challenges.
    Extensive experiments on a variety of real-world and synthetic datasets show that our proposed methods are viable in practice.
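
    To make the ARL contribution more concrete, the sketch below shows one way the min-max reweighting idea can be instantiated: a learner minimizes a reweighted loss while an adversary, which sees only the non-protected features and labels, shifts weight toward examples it can identify as systematically mispredicted. The logistic learner, linear softmax adversary, synthetic data, and plain gradient updates are our own simplifications for illustration, not the formulation or code from the thesis.

```python
import numpy as np

# Minimal sketch of the ARL idea: alternate a gradient step for a learner that
# minimizes a reweighted cross-entropy with a gradient step for an adversary
# that increases the same weighted loss using only (features, label), i.e.
# without protected attributes. All modeling choices are simplifications.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)

theta = np.zeros(d)   # learner parameters (logistic regression)
phi = np.zeros(d)     # adversary parameters (linear scorer over features)
lr_learner, lr_adv = 0.1, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(300):
    p = sigmoid(X @ theta)
    losses = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Adversary's example weights: 1 + n * softmax(score). They stay positive,
    # average exactly 2, and concentrate on examples with high loss.
    q = softmax(X @ phi)
    w = 1.0 + n * q

    # Learner: gradient step *down* on the mean weighted cross-entropy.
    theta -= lr_learner * (X.T @ (w * (p - y))) / n

    # Adversary: gradient step *up* on the same weighted loss; for the softmax
    # weights, d(sum_i w_i*loss_i)/d(score_j) = n * q_j * (loss_j - q . loss).
    grad_scores = n * q * (losses - q @ losses)
    phi += lr_adv * (X.T @ grad_scores) / n

print("final mean training loss:", losses.mean())
```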

    Cyber Threats and NATO 2030: Horizon Scanning and Analysis

    The book includes 13 chapters that look ahead to how NATO can best address the cyber threats, as well as opportunities and challenges from emerging and disruptive technologies in the cyber domain over the next decade. The present volume addresses these conceptual and practical requirements and contributes constructively to the NATO 2030 discussions. The book is arranged in five short parts... All the chapters in this book have undergone double-blind peer review by at least two external experts.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Artificial Intelligence and International Conflict in Cyberspace

    This edited volume explores how artificial intelligence (AI) is transforming international conflict in cyberspace. Over the past three decades, cyberspace developed into a crucial frontier and issue of international conflict. However, scholarly work on the relationship between AI and conflict in cyberspace has been produced along somewhat rigid disciplinary boundaries and an even more rigid sociotechnical divide, wherein technical and social scholarship are seldom brought into conversation. This is the first volume to address these themes through a comprehensive and cross-disciplinary approach. With the intent of exploring the question ‘what is at stake with the use of automation in international conflict in cyberspace through AI?’, the chapters in the volume focus on three broad themes, namely: (1) technical and operational, (2) strategic and geopolitical, and (3) normative and legal. These also constitute the three parts in which the chapters of this volume are organised, although these thematic sections should not be considered an analytical or disciplinary demarcation.