SLIDE: A surrogate fairness constraint to ensure fairness consistency
Authors
Kunwoong Kim
Sara Kim
Yongdai Kim
Ilsang Ohn
Publication date
1 October 2022
Publisher
Pergamon Press Ltd.
Abstract
© 2022 Elsevier Ltd. As they play a crucial role in social decision-making, AI algorithms based on ML models should be not only accurate but also fair. Among the many algorithms for fair AI, learning a prediction model by minimizing the empirical risk (e.g., the cross-entropy) subject to a given fairness constraint has received much attention. To avoid computational difficulty, however, the given fairness constraint is replaced by a surrogate fairness constraint, just as the 0–1 loss is replaced by a convex surrogate loss in classification problems. In this paper, we investigate the validity of existing surrogate fairness constraints and propose a new surrogate fairness constraint, called SLIDE, which is computationally feasible and asymptotically valid in the sense that the learned model satisfies the fairness constraint asymptotically and achieves a fast convergence rate. Numerical experiments confirm that SLIDE works well on various benchmark datasets.
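The abstract describes the general recipe of fairness-constrained empirical risk minimization: the hard (non-convex) fairness constraint is replaced by a smooth surrogate, just as the 0–1 loss is replaced by a convex surrogate. The following is a minimal illustrative sketch of that recipe, not the paper's SLIDE constraint: it trains a logistic regression by gradient descent while penalizing the squared demographic-parity gap (the difference in mean predicted scores between sensitive groups). The penalty weight `lam`, learning rate, and toy data are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, s, lam=0.0, lr=0.1, epochs=500):
    """Minimize cross-entropy + lam * (mean-score gap between groups)^2.

    This is a generic surrogate-penalty sketch, not the SLIDE constraint.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Gradient of the average cross-entropy loss.
        grad = X.T @ (p - y) / len(y)
        # Surrogate fairness term: squared gap in mean sigmoid scores.
        gap = p[s == 1].mean() - p[s == 0].mean()
        dp = p * (1 - p)  # derivative of sigmoid at each point
        g1 = (X[s == 1] * dp[s == 1, None]).mean(axis=0)
        g0 = (X[s == 0] * dp[s == 0, None]).mean(axis=0)
        grad += lam * 2.0 * gap * (g1 - g0)
        w -= lr * grad
    return w

def dp_gap(X, s, w):
    """Absolute demographic-parity gap of the learned scores."""
    p = sigmoid(X @ w)
    return abs(p[s == 1].mean() - p[s == 0].mean())

# Toy data: feature x1 correlates with the sensitive attribute s,
# so an unconstrained model scores the two groups differently.
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(s, 1.0), rng.normal(0.0, 1.0, n), np.ones(n)])
y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, n) > 0.5).astype(float)

w_unfair = train_fair_logreg(X, y, s, lam=0.0)
w_fair = train_fair_logreg(X, y, s, lam=10.0)
print(dp_gap(X, s, w_unfair), dp_gap(X, s, w_fair))
```

Increasing `lam` trades accuracy for a smaller group gap; the paper's contribution is a surrogate whose minimizer provably satisfies the original (non-surrogate) constraint asymptotically, which a naive penalty like the one above does not guarantee.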
Available Versions
SNU Open Repository and Archive
oai:s-space.snu.ac.kr:10371/18...
Last updated on 29 October 2022