A Nutritional Label for Rankings
Algorithmic decisions often result in scoring and ranking individuals to
determine creditworthiness, qualifications for college admissions and
employment, and compatibility as dating partners. While automatic and seemingly
objective, ranking algorithms can discriminate against individuals and
protected groups, and exhibit low diversity. Furthermore, ranked results are
often unstable --- small changes in the input data or in the ranking
methodology may lead to drastic changes in the output, making the result
uninformative and easy to manipulate. Similar concerns apply in cases where
items other than individuals are ranked, including colleges, academic
departments, or products.
In this demonstration we present Ranking Facts, a Web-based application that
generates a "nutritional label" for rankings. Ranking Facts is made up of a
collection of visual widgets that implement our latest research results on
fairness, stability, and transparency for rankings, and that communicate
details of the ranking methodology, or of the output, to the end user. We will
showcase Ranking Facts on real datasets from different domains, including
college rankings, criminal risk assessment, and financial services.
Comment: 4 pages, 3 figures, SIGMOD demo, ACM SIGMOD 201
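The stability concern raised above can be made concrete: perturb the input scores slightly and measure how much the induced ranking changes. The sketch below uses Gaussian noise and the Kendall tau distance as illustrative choices; the function names and the perturbation scale are assumptions, not the Ranking Facts widgets' exact methodology.

```python
# Hypothetical sketch: estimate ranking stability by adding small random
# noise to item scores and measuring how often item pairs swap order.
import random

def kendall_tau_distance(rank_a, rank_b):
    """Fraction of item pairs ordered differently by the two rankings."""
    n = len(rank_a)
    pos_b = {item: i for i, item in enumerate(rank_b)}
    discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # rank_a places rank_a[i] before rank_a[j]; check rank_b.
            if pos_b[rank_a[i]] > pos_b[rank_a[j]]:
                discordant += 1
    return discordant / (n * (n - 1) / 2)

def ranking_stability(scores, noise=0.01, trials=100, seed=0):
    """Average disagreement between the original ranking and rankings
    produced after adding Gaussian noise to the scores (0 = stable)."""
    rng = random.Random(seed)
    original = sorted(scores, key=scores.get, reverse=True)
    total = 0.0
    for _ in range(trials):
        noisy = {k: v + rng.gauss(0, noise) for k, v in scores.items()}
        perturbed = sorted(noisy, key=noisy.get, reverse=True)
        total += kendall_tau_distance(original, perturbed)
    return total / trials

# Near-tied items (A, B) make the top of this ranking unstable.
scores = {"A": 0.90, "B": 0.89, "C": 0.60, "D": 0.10}
print(round(ranking_stability(scores), 3))
```

A ranking whose adjacent scores differ by much more than the noise scale yields a value near zero; near-ties drive the value up, signaling that the ranking is uninformative in the sense the abstract describes.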
Human rights and public education
This article offers a contrast to the contribution by Hugh Starkey. Rather than his account of the inexorable rise of human rights discourse and of the implementation of human rights standards, human rights are here presented as always and necessarily scandalous and highly contested. First, I explain why the UK has lagged so far behind its European neighbours in implementing citizenship education. Second, a comparison with France shows that the latest UK reforms bring us up to 1789. Third, the twentieth-century second-generation social and economic rights are still anathema in the UK. Fourth, the failure to come to terms with Empire, and especially the slave trade, means that the UK's attitude to third-generation rights, especially the right of peoples to self-determination, is heavily compromised. Taking into account the points I raise, citizenship education in the UK might look very different.
Does a Fair Model Produce Fair Explanations? Relating Distributive and Procedural Fairness
We consider interactions between fairness and explanations in neural networks. Fair machine learning aims to achieve equitable allocation of resources --- distributive fairness --- by balancing accuracy and error rates across protected groups or among similar individuals. Methods shown to improve distributive fairness can induce different model behavior between majority and minority groups. This divergence in behavior can be perceived as disparate treatment, undermining acceptance of the system. In this paper, we use feature attribution methods to measure the average explanations for a protected group, and show that differences can occur even when the model is fair. We prove a surprising relationship between explanations (via feature attribution) and fairness (in a regression setting), demonstrating that under moderate assumptions, there are circumstances when controlling one can influence the other. We then study this relationship experimentally by designing a novel loss term for explanations called GroupWise Attribution Divergence (GWAD) and comparing its effects with an existing family of loss terms for (distributive) fairness. We show that controlling explanation loss tends to preserve accuracy. We also find that controlling distributive fairness loss tends to also reduce explanation loss empirically, even though it is not guaranteed to do so theoretically. We also show that there are additive improvements by including both loss terms. We conclude by considering the implications for trust and policy of reasoning about fairness as manipulations of explanations
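The abstract does not give the exact definition of GWAD, but the quantity it describes, a divergence between the average feature attributions of a protected group and the rest, can be sketched as follows. The function name, the choice of an L1 distance, and the input layout are all assumptions for illustration.

```python
import numpy as np

def groupwise_attribution_divergence(attributions, group_mask):
    """Hypothetical GWAD-style measure: L1 distance between the mean
    feature-attribution vector of a protected group and that of its
    complement. `attributions` is an (n_samples, n_features) array of
    per-example attributions; `group_mask` is a boolean array marking
    protected-group membership."""
    group_mean = attributions[group_mask].mean(axis=0)
    other_mean = attributions[~group_mask].mean(axis=0)
    return float(np.abs(group_mean - other_mean).sum())

rng = np.random.default_rng(0)
attr = rng.normal(size=(8, 3))          # attributions for 8 examples, 3 features
mask = np.array([True] * 4 + [False] * 4)
print(groupwise_attribution_divergence(attr, mask))
```

Used as a loss term, a quantity like this penalizes models whose explanations differ systematically between groups, which is the "disparate treatment" concern the abstract raises even for models that are distributively fair.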
