814 research outputs found

    Operationalizing Individual Fairness with Pairwise Fair Representations

    No full text
    We revisit the notion of individual fairness proposed by Dwork et al. A central challenge in operationalizing their approach is the difficulty of eliciting a human specification of a similarity metric. In this paper, we propose an operationalization of individual fairness that does not rely on a human specification of a distance metric. Instead, we propose novel approaches to elicit and leverage side-information on equally deserving individuals to counter subordination between social groups. We model this knowledge as a fairness graph, and learn a unified Pairwise Fair Representation (PFR) of the data that captures both data-driven similarity between individuals and the pairwise side-information in the fairness graph. We elicit fairness judgments from a variety of sources, including human judgments, for two real-world datasets on recidivism prediction (COMPAS) and violent neighborhood prediction (Crime & Communities). Our experiments show that the PFR model for operationalizing individual fairness is practically viable. Comment: To be published in the proceedings of the VLDB Endowment, Vol. 13, Issue.
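
    A minimal sketch of the kind of representation learning the abstract describes: a graph-Laplacian penalty that pulls together individuals connected in the fairness graph, combined with a Laplacian over a data-similarity graph, optimized by a linear projection. Function and variable names (pfr_embedding, W_fair) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W of a symmetric adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

def pfr_embedding(X, W_data, W_fair, dim, gamma=0.5):
    """Illustrative pairwise-fair embedding (not the paper's exact algorithm).

    Finds a linear projection that keeps data-similar individuals close
    (Laplacian of W_data) while also pulling together pairs judged equally
    deserving in the fairness graph (Laplacian of W_fair).
    """
    L = (1.0 - gamma) * graph_laplacian(W_data) + gamma * graph_laplacian(W_fair)
    # Minimize trace(V^T X^T L X V) subject to V^T V = I: take the
    # eigenvectors of X^T L X with the smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(X.T @ L @ X)
    V = eigvecs[:, :dim]      # d x dim projection matrix
    return X @ V              # n x dim fair representation
```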

    Exploring explanations for matrix factorization recommender systems (Position Paper)

    Get PDF
    In this paper we address the problem of finding explanations for collaborative filtering algorithms that use matrix factorization methods. We look for explanations that increase the transparency of the system. To do so, we propose two measures. First, we present a model that describes the contribution of each previous rating given by a user to the generated recommendation. Second, we measure the influence of changing each previous rating of a user on the outcome of the recommender system. We show that under the assumption that there are many more users in the system than there are items, we can efficiently generate each type of explanation by using linear approximations of the recommender system’s behavior for each user, and computing partial derivatives of predicted ratings with respect to each user’s provided ratings. Published version: http://scholarworks.boisestate.edu/fatrec/2017/1/7/
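
    A sketch of the kind of linearized influence computation the abstract points to, under the simplifying assumption that a user's latent factor is an ALS-style ridge fold-in over their rated items (an assumption for illustration, not necessarily the paper's exact model); the predicted rating is then linear in the user's ratings, so each partial derivative has a closed form.

```python
import numpy as np

def rating_influences(V_rated, ratings, v_target, lam=0.1):
    """Influence of each past rating on one predicted rating (illustrative).

    Assumes the user's latent factor is the ridge least-squares fold-in
    u = (V^T V + lam*I)^{-1} V^T r over the items the user rated, so the
    prediction r_hat = v_target^T u is linear in the rating vector r and
    d r_hat / d r_i = v_target^T (V^T V + lam*I)^{-1} v_i.
    """
    k = V_rated.shape[1]
    A = np.linalg.inv(V_rated.T @ V_rated + lam * np.eye(k))
    u = A @ V_rated.T @ ratings            # folded-in user factor
    r_hat = v_target @ u                   # predicted rating for the target item
    influences = V_rated @ (A @ v_target)  # one partial derivative per rated item
    return r_hat, influences
```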

    Quantifying Information Overload in Social Media and its Impact on Social Contagions

    Full text link
    Information overload has become a ubiquitous problem in modern society. Social media users and microbloggers receive an endless flow of information, often at a rate far higher than their cognitive ability to process it. In this paper, we conduct a large-scale quantitative study of information overload and evaluate its impact on information dissemination on the Twitter social media site. We model social media users as information processing systems that queue incoming information according to some policies, process information from the queue at some unknown rates, and decide to forward some of the incoming information to other users. We show how timestamped data about tweets received and forwarded by users can be used to uncover key properties of their queueing policies and to estimate their information processing rates and limits. Such an understanding of users' information processing behaviors allows us to infer whether and to what extent users suffer from information overload. Our analysis provides empirical evidence of information processing limits for social media users and of the prevalence of information overload. The most active and popular social media users are often the ones that are overloaded. Moreover, we find that the rate at which users receive information impacts their processing behavior, including how they prioritize information from different sources, how much information they process, and how quickly they process information. Finally, the susceptibility of a social media user to social contagions depends crucially on the rate at which she receives information. An exposure to a piece of information, be it an idea, a convention or a product, is much less effective for users that receive information at higher rates, meaning they need more exposures to adopt a particular contagion. Comment: To appear at ICWSM '1
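
    A toy illustration of the kind of quantities such a study estimates from timestamped receive/forward events: the rate at which items arrive, the rate at which the user acts on them, and the queueing delay between receiving and forwarding an item. The overload criterion below is a placeholder threshold, not the paper's model.

```python
import numpy as np

def overload_indicators(received_ts, forwarded_pairs):
    """Toy estimates of a user's information-processing behaviour.

    received_ts     : timestamps (seconds) at which items arrived in the feed
    forwarded_pairs : (received_at, forwarded_at) pairs for forwarded items

    The paper's queueing model and overload criterion are richer; this only
    illustrates the kind of quantities involved.
    """
    received_ts = np.sort(np.asarray(received_ts, dtype=float))
    span = received_ts[-1] - received_ts[0]
    in_rate = len(received_ts) / span                 # items received per second
    out_rate = len(forwarded_pairs) / span            # items forwarded per second
    delays = np.array([f - r for r, f in forwarded_pairs])
    return {
        "in_rate": in_rate,
        "out_rate": out_rate,
        "median_queueing_delay": float(np.median(delays)) if len(delays) else None,
        "possibly_overloaded": in_rate > 10 * out_rate,   # illustrative threshold only
    }
```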

    iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

    Get PDF
    People are rated and ranked for algorithmic decision making in an increasing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has predominantly pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness with the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, and disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting. Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
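
    A simplified sketch in the spirit of the representation described: records are softly assigned to a small set of prototypes (giving a low-rank representation), and the loss trades off reconstruction utility against preserving pairwise distances computed on non-protected attributes only. The formulation, names, and weighting are illustrative assumptions, not the paper's exact objective; such a loss could be minimized with, e.g., scipy.optimize.minimize over the flattened prototypes.

```python
import numpy as np
from scipy.spatial.distance import cdist

def individually_fair_loss(params, X, nonprotected_idx, k, lam=1.0):
    """Illustrative loss for an individually fair low-rank representation.

    params           : flattened k x d matrix of prototype vectors
    X                : n x d data matrix (all attributes)
    nonprotected_idx : columns treated as task-relevant / non-protected
    k                : number of prototypes
    """
    n, d = X.shape
    prototypes = params.reshape(k, d)
    # Soft assignment of each record to prototypes (softmax of negative distance).
    U = np.exp(-cdist(X, prototypes))
    U /= U.sum(axis=1, keepdims=True)           # n x k low-rank representation
    X_hat = U @ prototypes                      # reconstruction
    utility_loss = np.mean((X - X_hat) ** 2)
    # Individual fairness: users similar on non-protected attributes should
    # receive similar representations.
    D_fair = cdist(X[:, nonprotected_idx], X[:, nonprotected_idx])
    D_repr = cdist(U, U)
    fairness_loss = np.mean((D_fair / (D_fair.max() + 1e-12)
                             - D_repr / (D_repr.max() + 1e-12)) ** 2)
    return utility_loss + lam * fairness_loss
```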

    iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

    Get PDF
    People are rated and ranked for algorithmic decision making in an increasing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has predominantly pursued the paradigm of group fairness: ensuring that each ethnic or social group receives its fair share in the outcome of classifiers and rankings. In contrast, the alternative paradigm of individual fairness has received relatively little attention. This paper introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, and disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. Since the case for fairness is ubiquitous across many tasks, we aim to learn general representations that can be applied to arbitrary downstream use cases. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on two real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.

    Index Coding: Rank-Invariant Extensions

    Full text link
    An index coding (IC) problem consisting of a server and multiple receivers with different side-information and demand sets can be equivalently represented using a fitting matrix. A scalar linear index code for a given IC problem is a matrix representing the transmitted linear combinations of the message symbols. The length of an index code is then the number of transmissions (or equivalently, the number of rows in the index code). An IC problem $\mathcal{I}_{ext}$ is called an extension of another IC problem $\mathcal{I}$ if the fitting matrix of $\mathcal{I}$ is a submatrix of the fitting matrix of $\mathcal{I}_{ext}$. We first present a straightforward $m$-order extension $\mathcal{I}_{ext}$ of an IC problem $\mathcal{I}$ for which an index code is obtained by concatenating $m$ copies of an index code of $\mathcal{I}$. The length of the codes is the same for both $\mathcal{I}$ and $\mathcal{I}_{ext}$, and if the index code for $\mathcal{I}$ has optimal length then so does the extended code for $\mathcal{I}_{ext}$. More generally, an extended IC problem of $\mathcal{I}$ having the same optimal length as $\mathcal{I}$ is said to be a rank-invariant extension of $\mathcal{I}$. We then focus on $2$-order rank-invariant extensions of $\mathcal{I}$, and present constructions of such extensions based on involutory permutation matrices.
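
    For context on the fitting-matrix view, a brute-force sketch of the optimal scalar linear index code length over GF(2) as the minrank of the fitting matrix (diagonal entries fixed to 1 for the demanded symbols, side-information positions free, all other entries zero). This assumes the standard single-unicast convention, is only feasible for tiny instances, and is illustrative rather than the paper's construction.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def minrank_gf2(side_info):
    """Optimal scalar linear index code length over GF(2) for a single-unicast
    IC problem, where side_info[i] is the set of messages known to receiver i.

    Brute force over all completions of the free (side-information) entries
    of the fitting matrix; exponential, so only usable for small examples.
    """
    n = len(side_info)
    free = [(i, j) for i in range(n) for j in side_info[i] if j != i]
    best = n
    for bits in itertools.product([0, 1], repeat=len(free)):
        M = np.eye(n, dtype=int)            # demanded symbol on the diagonal
        for (i, j), b in zip(free, bits):
            M[i, j] = b
        best = min(best, gf2_rank(M))
    return best

# Example: the 3-receiver cyclic side-information pattern needs 2 transmissions.
print(minrank_gf2([{1}, {2}, {0}]))   # -> 2
```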

    Optimal Index Codes via a Duality between Index Coding and Network Coding

    Full text link
    In index coding, the goal is to use a broadcast channel as efficiently as possible to communicate information from a source to multiple receivers, each of which may possess some of the source's information symbols as side-information. In this work, we present a duality relationship between index coding (IC) and multiple-unicast network coding (NC). It is known that the IC problem can be represented using a side-information graph $G$ (whose number of vertices $n$ equals the number of source symbols). The size of the maximum acyclic induced subgraph, denoted by $MAIS$, is a lower bound on the broadcast rate. For IC problems with $MAIS = n-1$ and $MAIS = n-2$, prior work has shown that binary (over $\mathbb{F}_2$) linear index codes achieve the $MAIS$ lower bound on the broadcast rate and are thus optimal. In this work, we use the duality relationship between NC and IC to show that for a class of IC problems with $MAIS = n-3$, binary linear index codes achieve the $MAIS$ lower bound on the broadcast rate. In contrast, it is known that there exist IC problems with $MAIS = n-3$ whose optimal broadcast rate is strictly greater than $MAIS$.
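
    A brute-force sketch of the MAIS lower bound mentioned above: over a side-information digraph, find the largest vertex subset whose induced subgraph is acyclic. It is exponential-time and meant only for small illustrative instances; the edge convention assumed here (an edge from i to j meaning receiver i knows message j) should be checked against the paper.

```python
import itertools
import networkx as nx

def mais(G: nx.DiGraph) -> int:
    """Size of the maximum acyclic induced subgraph of a side-information
    digraph G (edge i -> j: receiver i has message j as side-information).

    Brute force over vertex subsets, largest first; exponential, so only
    intended for small examples.
    """
    nodes = list(G.nodes)
    for size in range(len(nodes), 0, -1):
        for subset in itertools.combinations(nodes, size):
            if nx.is_directed_acyclic_graph(G.subgraph(subset)):
                return size
    return 0

# Example: a directed 3-cycle has MAIS = 2.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
print(mais(G))   # -> 2
```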

    Equity of Attention: Amortizing Individual Fairness in Rankings

    Get PDF
    Rankings of people and items are at the heart of selection-making, match-making, and recommender systems, ranging from employment sites to sharing economy platforms. As ranking positions influence the amount of attention the ranked subjects receive, biases in rankings can lead to unfair distribution of opportunities and resources, such as jobs or income. This paper proposes new measures and mechanisms to quantify and mitigate unfairness from a bias inherent to all rankings, namely, the position bias, which leads to disproportionately less attention being paid to low-ranked subjects. Our approach differs from recent fair ranking approaches in two important ways. First, existing works measure unfairness at the level of subject groups while our measures capture unfairness at the level of individual subjects, and as such subsume group unfairness. Second, as no single ranking can achieve individual attention fairness, we propose a novel mechanism that achieves amortized fairness, where attention accumulated across a series of rankings is proportional to accumulated relevance. We formulate the challenge of achieving amortized individual fairness subject to constraints on ranking quality as an online optimization problem and show that it can be solved as an integer linear program. Our experimental evaluation reveals that unfair attention distribution in rankings can be substantial, and demonstrates that our method can improve individual fairness while retaining high ranking quality. Comment: Accepted to SIGIR 201
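
    A simplified sketch of a single round of such an amortized re-ranking: subjects are assigned to positions so that attention accumulated so far, plus the attention of the assigned position, tracks accumulated relevance, with a quality penalty folded into the assignment cost. The paper formulates this as an integer linear program with explicit ranking-quality constraints; the Hungarian-algorithm formulation and the trade-off parameter below are illustrative simplifications.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rerank_step(acc_attention, acc_relevance, relevance, position_attention,
                theta=0.5):
    """One round of amortized-fairness re-ranking (simplified sketch).

    acc_attention, acc_relevance : numpy arrays of attention / relevance so far
    relevance                    : current relevance scores of the n subjects
    position_attention           : attention received at each of the n positions
    theta                        : fairness vs. quality trade-off
    """
    target = acc_relevance + relevance            # attention each subject 'deserves'
    unfairness = np.abs(
        acc_attention[:, None] + position_attention[None, :] - target[:, None]
    )
    # Quality penalty for placing relevant subjects at low-attention positions.
    quality = -relevance[:, None] * position_attention[None, :]
    cost = theta * unfairness + (1.0 - theta) * quality
    subjects, positions = linear_sum_assignment(cost)
    ranking = np.empty(len(relevance), dtype=int)
    ranking[positions] = subjects                 # subject placed at each position
    return ranking
```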

    Optimizing the Recency-Relevancy Trade-off in Online News Recommendations

    No full text