
    Individual Fairness in Hindsight

    Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant to online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF, in which the treatment of individuals is required to be individually fair relative to the past as well as the future, while FH requires a one-sided notion of individual fairness that is defined relative to only past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm, Cautious Fair Exploration (CaFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to make good decisions in the long run.
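
    As a rough illustration of the one-sided temporal constraint, here is a minimal sketch, not the paper's CaFE algorithm: the Euclidean similarity metric, the scalar "conduciveness" values, and the direction of the one-sided comparison are all assumptions made for this toy.

    ```python
    import numpy as np

    def similarity(x, y):
        # Hypothetical task metric: Euclidean distance between feature vectors.
        return float(np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float)))

    def fair_in_hindsight(history, x_new, u_new):
        # One-sided check (one plausible reading of FH): the conduciveness u_new
        # offered to the new individual may not fall short of any past individual's
        # conduciveness by more than their similarity distance; past decisions are
        # never re-judged against the future.
        return all(u_past - u_new <= similarity(x_new, x_past)
                   for x_past, u_past in history)

    # Toy usage: a past applicant received conduciveness 0.8; a nearby newcomer
    # offered 0.75 passes the check, while an offer of 0.3 does not.
    history = [((0.2, 0.5), 0.8)]
    print(fair_in_hindsight(history, (0.3, 0.5), 0.75))  # True
    print(fair_in_hindsight(history, (0.3, 0.5), 0.30))  # False
    ```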

    Taming Wild Price Fluctuations: Monotone Stochastic Convex Optimization with Bandit Feedback

    Prices generated by automated price experimentation algorithms often display wild fluctuations, leading to unfavorable customer perceptions and violations of individual fairness: e.g., the price seen by a customer can be significantly higher than what was seen by her predecessors, only to fall once again later. To address this concern, we propose demand learning under a monotonicity constraint on the sequence of prices, within the framework of stochastic convex optimization with bandit feedback. Our main contribution is the design of the first sublinear-regret algorithms for monotonic price experimentation for smooth and strongly concave revenue functions under noisy as well as noiseless bandit feedback. The monotonicity constraint presents a unique challenge: since any increase (or decrease) in the decision levels is final, an algorithm needs to be cautious in its exploration to avoid overshooting the optimum. At the same time, minimizing regret requires that progress be made towards the optimum at a sufficient pace. Balancing these two goals is particularly challenging under noisy feedback, where obtaining sufficiently accurate gradient estimates is expensive. Our key innovation is to use conservative gradient estimates to adaptively tailor the degree of caution to local gradient information, being aggressive far from the optimum and increasingly cautious as the prices approach the optimum. Importantly, we show that our algorithms guarantee the same regret rates (up to logarithmic factors) as the best rates achievable without the monotonicity requirement.
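
    The flavor of the cautious exploration can be seen in a small sketch under noiseless feedback; this is an illustration under assumptions (a known smooth, strongly concave revenue curve and hand-picked step constants), not the paper's algorithm or its noisy-feedback machinery.

    ```python
    import numpy as np

    def monotone_price_search(revenue, p0, p_max, probe=1e-3, eta=0.1, tol=1e-3, max_steps=500):
        # Prices only move upward, so the probe point p + probe is itself a valid
        # posting. The step is a conservative fraction of the estimated gradient:
        # aggressive far from the optimum, increasingly cautious near the revenue
        # peak, which keeps the monotone sequence from overshooting it.
        p, path = p0, [p0]
        for _ in range(max_steps):
            grad = (revenue(min(p + probe, p_max)) - revenue(p)) / probe
            if grad <= tol:          # at (or just past) the peak: stop raising the price
                break
            p = min(p + eta * grad, p_max)
            path.append(p)
        return p, path

    # Toy usage with an illustrative concave revenue curve peaking at price 7.
    revenue = lambda p: -(p - 7.0) ** 2 + 49.0
    p_final, path = monotone_price_search(revenue, p0=1.0, p_max=12.0)
    print(round(p_final, 3), "reached in", len(path), "monotone steps")
    ```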

    Advancing Subgroup Fairness via Sleeping Experts

    We study methods for improving fairness to subgroups in settings with overlapping populations and sequential predictions. Classical notions of fairness focus on the balance of some property across different populations. However, in many applications the goal of the different groups is not to be predicted equally but rather to be predicted well. We demonstrate that the task of satisfying this guarantee for multiple overlapping groups is not straightforward, and we show that, for the simple objective of the unweighted average of the false negative and false positive rates, satisfying it for overlapping populations can be statistically impossible even when we are provided with predictors that perform well separately on each subgroup. On the positive side, we show that when individuals are equally important to the different groups they belong to, this goal is achievable; to do so, we draw a connection to the sleeping experts literature in online learning. Motivated by the one-sided feedback in natural settings of interest, we extend our results to such a feedback model. We also provide a game-theoretic interpretation of our results, examining the incentives of participants to join the system and to provide the system with full information about predictors they may possess. We end with several interesting open problems concerning the strength of guarantees that can be achieved in a computationally efficient manner.
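
    The connection to sleeping experts can be sketched concretely; the per-group predictors, the squared loss, and the simple multiplicative update below are illustrative assumptions rather than the paper's construction.

    ```python
    import numpy as np

    def sleeping_experts(rounds, n_groups, eta=0.5):
        # One (hypothetical) predictor per subgroup; a predictor is "awake" only when
        # the arriving individual belongs to its group. Each round aggregates the
        # awake predictors by their current weights, then updates only those weights
        # multiplicatively according to their losses.
        w = np.ones(n_groups)
        for groups, preds, label in rounds:
            awake = sorted(groups)                       # indices of groups the individual is in
            probs = w[awake] / w[awake].sum()
            y_hat = float(probs @ preds[awake])          # aggregate prediction in [0, 1]
            losses = (preds[awake] - label) ** 2
            w[awake] = w[awake] * np.exp(-eta * losses)  # penalize awake experts only
            yield y_hat

    # Toy usage: three overlapping groups; each round supplies the individual's group
    # memberships, the per-group predictions, and the realized binary outcome.
    rounds = [({0, 2}, np.array([0.9, 0.5, 0.2]), 1),
              ({1, 2}, np.array([0.4, 0.7, 0.6]), 0)]
    print([round(p, 3) for p in sleeping_experts(rounds, n_groups=3)])
    ```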

    Multiplicative Metric Fairness Under Composition

    Dwork, Hardt, Pitassi, Reingold, & Zemel [Dwork et al., 2012] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers that are fair when considered in isolation [Chawla and Jagadeesan, 2020; Chawla et al., 2022; Dwork and Ilvento, 2018; Dwork et al., 2020; Ilvento et al., 2020] and in work studying the relationship between fair treatment of individuals and fair treatment of groups [Dwork et al., 2012; Dwork and Ilvento, 2018; Kim et al., 2018]. Here, we extend these lines of research to the second, less-studied notion, which we call multiplicative metric fairness. In particular, we exactly characterize the fairness of conjunctions and disjunctions of multiplicative metric fair classifiers, and the extent to which a classifier that satisfies multiplicative metric fairness also treats groups fairly. This characterization reveals that whereas additive metric fairness becomes easier to satisfy when probabilities of acceptance are small, leading to unfairness under functional and group compositions, multiplicative metric fairness is better behaved, due to its scale-invariance.
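
    For orientation, the contrast between the two notions for a randomized binary classifier with acceptance probability p(x) and task metric D can be written as below; this is a hedged rendering (in particular, the exponential form of the multiplicative bound is an assumption), not a quotation of the paper's definitions.

    ```latex
    % Additive metric fairness: acceptance probabilities are Lipschitz in the metric,
    % which is easy to satisfy when all probabilities are small.
    % Multiplicative metric fairness: the *ratio* of acceptance probabilities is
    % bounded, so uniformly rescaling the probabilities does not relax the condition.
    \[
      \bigl|\,p(x) - p(y)\,\bigr| \;\le\; D(x,y)
      \qquad \text{vs.} \qquad
      e^{-D(x,y)} \;\le\; \frac{p(x)}{p(y)} \;\le\; e^{D(x,y)}
    \]
    ```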

    Sharing the Load: Considering Fairness in De-energization Scheduling to Mitigate Wildfire Ignition Risk using Rolling Optimization

    Wildfires are a threat to public safety and have increased in both frequency and severity due to climate change. To mitigate wildfire ignition risks, electric power system operators proactively de-energize high-risk power lines during "public safety power shut-off" (PSPS) events. Line de-energizations can cause communities to lose power, which may result in negative economic, health, and safety impacts. Furthermore, the same communities may repeatedly experience power shutoffs over the course of a wildfire season, which compounds these negative impacts. However, there may be many combinations of power lines whose de-energization results in roughly the same reduction of system-wide wildfire risk, even though the associated power outages affect different communities. Therefore, one may raise concerns regarding the fairness of de-energization decisions. Accordingly, this paper proposes a framework for selecting lines to de-energize in order to balance wildfire risk reduction, total load shedding, and fairness considerations. The goal of the framework is to prevent a small fraction of communities from being disproportionately impacted by PSPS events, and to instead more equally share the burden of power outages. For a geolocated test case in the southwestern United States, we use actual California demand data as well as real wildfire risk forecasts to simulate PSPS events during the 2021 wildfire season and compare the performance of various methods for promoting fairness. Our results demonstrate that the proposed formulation can provide significantly more fair outcomes with limited impacts on system-wide performance.
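
    The fairness idea, though not the paper's rolling optimization model, can be caricatured with a greedy sketch in which past outages inflate a community's "cost" of being shed again; all field names and weights below are hypothetical.

    ```python
    def select_deenergizations(lines, risk_target, outage_history, alpha=1.0):
        # Greedy toy: de-energize the highest risk-per-cost lines until the targeted
        # wildfire-risk reduction is met. A line's cost is its shed load, inflated by
        # how many past PSPS events already hit the communities it serves, which
        # steers new outages away from repeatedly affected communities.
        def cost(line):
            past = sum(outage_history.get(c, 0) for c in line["communities"])
            return line["load"] * (1.0 + alpha * past)

        chosen, risk_removed = [], 0.0
        for line in sorted(lines, key=lambda l: l["risk"] / cost(l), reverse=True):
            if risk_removed >= risk_target:
                break
            chosen.append(line["id"])
            risk_removed += line["risk"]
            for c in line["communities"]:
                outage_history[c] = outage_history.get(c, 0) + 1
        return chosen

    # Toy usage: community "A" was already shed twice this season, so line L2 is
    # preferred even though L1 removes slightly more wildfire risk.
    lines = [{"id": "L1", "risk": 5.0, "load": 2.0, "communities": ["A"]},
             {"id": "L2", "risk": 4.0, "load": 2.0, "communities": ["B"]}]
    print(select_deenergizations(lines, risk_target=4.0, outage_history={"A": 2}))
    ```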

    Certification of Distributional Individual Fairness

    Providing formal guarantees of algorithmic fairness is of paramount importance to socially responsible deployment of machine learning algorithms. In this work, we study formal guarantees, i.e., certificates, for individual fairness (IF) of neural networks. We start by introducing a novel convex approximation of IF constraints that exponentially decreases the computational cost of providing formal guarantees of local individual fairness. We highlight that prior methods are constrained by their focus on global IF certification and can therefore only scale to models with a few dozen hidden neurons, thus limiting their practical impact. We propose to certify distributional individual fairness, which ensures that for a given empirical distribution and all distributions within a γ-Wasserstein ball, the neural network has guaranteed individually fair predictions. Leveraging developments in quasi-convex optimization, we provide novel and efficient certified bounds on distributional individual fairness and show that our method allows us to certify and regularize neural networks that are several orders of magnitude larger than those considered by prior works. Moreover, we study real-world distribution shifts and find our bounds to be a scalable, practical, and sound source of IF guarantees.
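
    For intuition only, the quantity being certified can be contrasted with a naive falsification check over an empirical sample; the sketch below merely searches for local IF violations by random perturbation (under an assumed L2 fairness metric and a batch-callable model), whereas the paper's contribution is a sound certified bound over a γ-Wasserstein ball obtained via convex relaxation.

    ```python
    import numpy as np

    def empirical_if_violations(model, X, eps=0.1, delta=0.05, n_dirs=32, seed=0):
        # Sanity check, not a certificate: for each input, sample points on an
        # eps-sphere of the assumed fairness metric and record the largest change in
        # the model's output. A certificate would bound this soundly for all nearby
        # points; this sketch only looks for counterexamples empirically.
        rng = np.random.default_rng(seed)
        flags = []
        for x in X:
            dirs = rng.normal(size=(n_dirs, x.size))
            dirs = eps * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
            gap = np.max(np.abs(model(x + dirs) - model(x[None, :])))
            flags.append(gap > delta)
        return float(np.mean(flags))   # fraction of inputs with a violation found

    # Toy usage with a (hypothetical) linear scoring model over three features.
    w = np.array([0.5, -0.2, 0.1])
    model = lambda batch: batch @ w
    X = np.array([[0.1, 0.4, 0.3], [0.9, 0.2, 0.5]])
    print(empirical_if_violations(model, X, eps=0.05, delta=0.05))
    ```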

    Improving Fairness and Privacy in Selection Problems

    Supervised learning models have been increasingly used for making decisions about individuals in applications such as hiring, lending, and college admission. These models may inherit pre-existing biases from training datasets and discriminate against protected attributes (e.g., race or gender). In addition to unfairness, privacy concerns also arise when the use of models reveals sensitive personal information. Among various privacy notions, differential privacy has become popular in recent years. In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models. Unlike many existing works, we consider a scenario where a supervised model is used to select a limited number of applicants, as the number of available positions is limited. This assumption is well-suited for various scenarios, such as job applications and college admissions. We use "equal opportunity" as the fairness notion and show that the exponential mechanism can make the decision-making process perfectly fair. Moreover, experiments on real-world datasets show that the exponential mechanism can improve both privacy and fairness, with a slight decrease in accuracy compared to the model without post-processing.
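
    A minimal sketch of the exponential mechanism used as a private post-processing selector follows; splitting the privacy budget evenly across the k draws and taking unit sensitivity are simplifying assumptions for this toy, not necessarily the paper's accounting.

    ```python
    import numpy as np

    def exp_mechanism_select(scores, k, epsilon, sensitivity=1.0, seed=None):
        # Each draw picks a remaining applicant with probability proportional to
        # exp(eps_draw * score / (2 * sensitivity)), without replacement. Higher
        # scores are favored, but every applicant keeps a nonzero chance of being
        # selected, which is where the privacy noise enters.
        rng = np.random.default_rng(seed)
        scores = np.asarray(scores, dtype=float)
        eps_draw = epsilon / k                     # naive even split of the budget
        remaining = list(range(len(scores)))
        selected = []
        for _ in range(k):
            logits = eps_draw * scores[remaining] / (2.0 * sensitivity)
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            pick = int(rng.choice(len(remaining), p=probs))
            selected.append(remaining.pop(pick))
        return selected

    # Toy usage: privately pick 2 of 5 applicants from (possibly debiased) scores.
    print(exp_mechanism_select([0.9, 0.4, 0.8, 0.1, 0.7], k=2, epsilon=1.0, seed=7))
    ```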

    Fair Interventions in Weighted Congestion Games

    In this work, we study the power and limitations of fair interventions in weighted congestion games. Specifically, we focus on interventions that aim at improving the equilibrium quality (price of anarchy) and are fair in the sense that identical players receive identical treatment. Within this setting, we provide three key contributions. First, we show that no fair intervention can reduce the price of anarchy below a given factor depending solely on the class of latencies considered. Interestingly, this lower bound is unconditional, i.e., it applies regardless of how much computation interventions are allowed to use. Second, we propose a taxation mechanism that is fair and show that the resulting price of anarchy matches this lower bound, while the mechanism can be computed in polynomial time. Third, we complement these results by showing that no intervention (fair or not) can achieve a better approximation if polynomial computability is required. We do so by proving that the minimum social cost is NP-hard to approximate below a factor identical to the one previously introduced. In doing so, we also show that the randomized algorithm proposed by Makarychev and Sviridenko (Journal of the ACM, 2018) for the class of optimization problems with a "diseconomy of scale" is optimal, and we provide a novel way to derandomize its solution via equilibrium computation.
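
    As a toy illustration of how an anonymous (and hence fair) toll can close the gap between equilibrium and optimum, consider a nonatomic Pigou-style two-link network; this is for intuition only and is neither the weighted atomic setting nor the taxation mechanism analyzed in the paper.

    ```python
    import numpy as np

    def pigou_social_cost(toll=0.0, grid=100001):
        # One unit of traffic splits between a congestible link with latency l1(x) = x
        # and a constant link with latency l2 = 1. The intervention is an anonymous
        # toll on the congestible link, so identical users face identical charges.
        # At equilibrium no user can reduce her perceived cost by switching links, so
        # we take the split where the two perceived costs are (nearly) equal. Social
        # cost counts latency only, since tolls are transfers rather than waste.
        xs = np.linspace(0.0, 1.0, grid)
        x_eq = xs[np.argmin(np.abs((xs + toll) - 1.0))] if toll <= 1.0 else 0.0
        return x_eq * x_eq + (1.0 - x_eq) * 1.0

    print(round(pigou_social_cost(toll=0.0), 3))  # 1.0: untolled equilibrium; the optimum is 0.75
    print(round(pigou_social_cost(toll=0.5), 3))  # 0.75: a marginal-cost toll recovers the optimum
    ```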