The invisible power of fairness. How machine learning shapes democracy
Many machine learning systems make extensive use of large amounts of data
regarding human behaviors. Several researchers have found various
discriminatory practices related to the use of human-related machine learning
systems, for example in the field of criminal justice, credit scoring and
advertising. Fair machine learning is therefore emerging as a new field of
study to mitigate biases that are inadvertently incorporated into algorithms.
Data scientists and computer engineers are making various efforts to provide
definitions of fairness. In this paper, we provide an overview of the most
widespread definitions of fairness in the field of machine learning, arguing
that the ideas underlying each formalization are closely related to different
ideas of justice and to different interpretations of democracy embedded in our
culture. This work analyzes the definitions of fairness proposed to date,
interprets their underlying criteria, and relates them to different ideas of
democracy.
Comment: 12 pages, 1 figure, preprint version, submitted to The 32nd Canadian
Conference on Artificial Intelligence that will take place in Kingston,
Ontario, May 28 to May 31, 201
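One of the formalizations surveyed in work of this kind is demographic parity, which asks that positive decisions be issued at equal rates across groups. The sketch below is purely illustrative (the function name and toy data are ours, not the paper's):

```python
# Illustrative sketch of one widespread fairness definition: demographic
# parity, which compares positive-prediction rates across two groups.
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy data: group "x" is approved 2/3 of the time, group "y" only 1/3.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["x", "x", "x", "y", "y", "y"]
print(demographic_parity_gap(preds, groups))  # ≈ 0.33
```

A gap of zero would indicate parity under this particular definition; as the survey argues, other definitions encode different, sometimes incompatible, ideas of justice.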
Transport Protocol Throughput Fairness
Interest continues to grow in alternative transport protocols to the Transmission Control Protocol (TCP). These alternatives include protocols designed to give greater efficiency in high-speed, high-delay environments (so-called high-speed TCP variants), and protocols that provide congestion control without reliability. For the former category, along with the deployed base of ‘vanilla’ TCP – TCP NewReno – the TCP variants BIC and CUBIC are widely used within Linux; for the latter category, the Datagram Congestion Control Protocol (DCCP) is currently on the IETF Standards Track. It is clear that future traffic patterns will consist of a mix of flows from these protocols (and others), so it is important for users and network operators to be aware of the impact that these protocols may have on users. We measure the throughput fairness of DCCP Congestion Control ID 2 (CCID2) relative to TCP NewReno, and the variants Binary Increase Congestion control (BIC), CUBIC and Compound, all in “out-of-the-box” configurations. We use a testbed and end-to-end measurements to assess overall throughput, and also to assess fairness – how well these protocols respond to each other when operating over the same end-to-end network path. We find that, in our testbed, DCCP CCID2 shows good fairness with NewReno, while BIC, CUBIC and Compound show unfairness above round-trip times of 25 ms.
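A standard way to put a single number on throughput fairness between competing flows is Jain's fairness index. The abstract does not say this is the metric the paper uses; the sketch below is only an illustration of how such fairness can be scored:

```python
# Jain's fairness index over a set of per-flow throughputs:
# 1.0 means perfectly equal shares; the minimum, 1/n, means
# one flow takes everything.
def jains_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Two flows sharing a path: equal shares score 1.0,
# a starved flow drags the index toward 1/n = 0.5.
print(jains_index([10.0, 10.0]))  # 1.0
print(jains_index([19.0, 1.0]))   # ≈ 0.55
```

Applied to measured per-protocol throughputs on a shared path, an index well below 1.0 is the kind of unfairness the paper reports for BIC, CUBIC and Compound at higher round-trip times.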
Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality
As virtually all aspects of our lives are increasingly impacted by
algorithmic decision making systems, it is incumbent upon us as a society to
ensure such systems do not become instruments of unfair discrimination on the
basis of gender, race, ethnicity, religion, etc. We consider the problem of
determining whether the decisions made by such systems are discriminatory,
through the lens of causal models. We introduce two definitions of group
fairness grounded in causality: fair on average causal effect (FACE), and fair
on average causal effect on the treated (FACT). We use the Rubin-Neyman
potential outcomes framework for the analysis of cause-effect relationships to
robustly estimate FACE and FACT. We demonstrate the effectiveness of our
proposed approach on synthetic data. Our analyses of two real-world data sets,
the Adult income data set from the UCI repository (with gender as the protected
attribute), and the NYC Stop and Frisk data set (with race as the protected
attribute), show that the evidence of discrimination obtained by FACE and FACT,
or lack thereof, is often in agreement with the findings from other studies. We
further show that FACT, being somewhat more nuanced compared to FACE, can yield
findings of discrimination that differ from those obtained using FACE.
Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the
International Conference on World Wide Web (WWW), 201
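To illustrate the idea behind an average-causal-effect estimand such as FACE: in the Rubin-Neyman potential outcomes framework, and under a strong (and usually unrealistic) no-confounding assumption, the average causal effect of a protected attribute on an outcome reduces to a difference of group means. The sketch below is a naive illustration only; the names and toy data are ours, and the paper itself uses more robust estimation:

```python
# Naive difference-in-means sketch of an average causal effect, in the
# spirit of FACE. Valid as a causal estimate only under unconfoundedness,
# an assumption real observational data rarely satisfies.
def naive_face_estimate(outcomes, protected):
    """Mean outcome for protected == 1 minus mean outcome for protected == 0."""
    y1 = [y for y, a in zip(outcomes, protected) if a == 1]
    y0 = [y for y, a in zip(outcomes, protected) if a == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

# Toy data: group 1 receives positive decisions 3/4 of the time,
# group 0 only 1/4 of the time.
y = [1, 1, 1, 0, 1, 0, 0, 0]
a = [1, 1, 1, 1, 0, 0, 0, 0]
print(naive_face_estimate(y, a))  # 0.5
```

A nonzero value here is only suggestive; turning it into evidence of discrimination requires the kind of careful causal analysis the paper describes.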
Using self-definition to predict the influence of procedural justice on organizational, interpersonal, and job/task-oriented citizenship behaviors
An integrative self-definition model is proposed to improve our understanding of how procedural justice affects different outcome modalities in organizational behavior. Specifically, it is examined whether the strength of different levels of self-definition (collective, relational, and individual) each uniquely interacts with procedural justice to predict organizational, interpersonal, and job/task-oriented citizenship behaviors, respectively. Results from experimental and (both single and multisource) field data consistently revealed stronger procedural justice effects (1) on organizational-oriented citizenship behavior among those who define themselves strongly in terms of organizational characteristics, (2) on interpersonal-oriented citizenship behavior among those who define themselves strongly in terms of their interpersonal relationships, and (3) on job/task-oriented citizenship behavior among those who define themselves weakly in terms of their distinctiveness or uniqueness. We discuss the relevance of these results with respect to how employees can be motivated most effectively in organizational settings.