    Assessing Justice: Exploring the Role of Values in Defining Fairness

    Assessments of justice can vary because justice has both an objective and a subjective dimension. Objectively, justice can be achieved by establishing a distribution system that follows predetermined standards; subjectively, evaluations of justice are shaped by psychological factors that differ among individuals. This research explores subjective assessments of justice through one such inherent factor, an individual's values, examining the values held by each individual. The results show that personally oriented values such as achievement, power, and hedonism correlate with assessments of procedural and distributive justice as fair, and that collectively oriented values such as virtue, universalism, and conformity likewise correlate with assessments of procedural and distributive justice as fair.
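    The reported relationships are correlational. As a rough illustration of how such an analysis might be run (hypothetical data and variable names, not the study's actual instrument), one could correlate per-respondent value scores with justice ratings:

```python
# Minimal sketch (hypothetical data and column names): correlating
# individual value scores with subjective justice assessments.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical survey data: one row per respondent, Likert-scale scores.
df = pd.DataFrame({
    "achievement":          [4, 5, 3, 2, 4, 5],
    "universalism":         [3, 2, 4, 5, 3, 4],
    "procedural_justice":   [4, 4, 3, 3, 4, 5],
    "distributive_justice": [3, 4, 4, 4, 3, 5],
})

value_cols   = ["achievement", "universalism"]
justice_cols = ["procedural_justice", "distributive_justice"]

for v in value_cols:
    for j in justice_cols:
        r, p = pearsonr(df[v], df[j])  # Pearson correlation and p-value
        print(f"{v} vs {j}: r={r:.2f}, p={p:.3f}")
```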

    Fairness for ABR multipoint-to-point connections

    In multipoint-to-point connections, the traffic at the root (destination) is the combination of all traffic originating at the leaves. A crucial concern in the case of multiple senders is how to define fairness within a multicast group and among groups and point-to-point connections. Fairness definition can be complicated since the multipoint connection can have the same identifier (VPI/VCI) on each link, and senders might not be distinguishable in this case. Many rate allocation algorithms implicitly assume that there is only one sender in each VC, which does not hold for multipoint-to-point cases. We give various possibilities for defining fairness for multipoint connections, and show the tradeoffs involved. In addition, we show that ATM bandwidth allocation algorithms need to be adapted to give fair allocations for multipoint-to-point connections. Comment: Proceedings of SPIE 98, November 1998.
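    To make the tradeoff concrete, here is a minimal sketch (not the paper's allocation algorithm) contrasting two ways a single link's bandwidth could be shared when a multipoint-to-point VC with several indistinguishable senders competes with point-to-point VCs: counting the whole VC as one entity versus counting each sender separately.

```python
# Minimal sketch (not the paper's algorithm): two ways to share one link's
# ABR bandwidth between a multipoint-to-point VC with several senders and
# ordinary point-to-point VCs.
def per_vc_share(link_capacity, n_point_to_point, n_senders_in_multipoint_vc):
    """Treat the whole multipoint VC as one entity; its senders split one share."""
    n_entities = n_point_to_point + 1
    vc_share = link_capacity / n_entities
    return {
        "point_to_point_each": vc_share,
        "multipoint_sender_each": vc_share / n_senders_in_multipoint_vc,
    }

def per_sender_share(link_capacity, n_point_to_point, n_senders_in_multipoint_vc):
    """Treat every sender as its own entity, so the multipoint group can take more."""
    n_entities = n_point_to_point + n_senders_in_multipoint_vc
    share = link_capacity / n_entities
    return {
        "point_to_point_each": share,
        "multipoint_sender_each": share,
    }

# Example: 150 Mbit/s link, 2 point-to-point VCs, one multipoint VC with 3 senders.
print(per_vc_share(150.0, 2, 3))      # the 3 multipoint senders share one 50 Mbit/s slice
print(per_sender_share(150.0, 2, 3))  # every sender, multipoint or not, gets 30 Mbit/s
```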

    Issues of Fairness in International Trade Agreements

    In this paper, we first describe the characteristics of the World Trade Organization (WTO) that are the basis of the framework of the multilateral trading system. We then provide an overview of concepts of fairness in trade agreements. Thereafter, we offer a critique of the efficiency criterion in assessing multilateral trade agreements, taking issue with T.N. Srinivasan’s (2006) analysis, and then elaborate on our conception of fairness as reflected in agreements covering market access. We also address considerations of distributive justice, in contrast with Srinivasan’s contention that distributive justice has no role to play in the design and negotiation of multilateral trade agreements. Finally, we question bilateral trade agreements from the standpoint of fairness, drawing on the example of the U.S. bilateral FTA negotiated in 2005 with Central America and the Dominican Republic. Keywords: Fairness, Equality of Opportunity, Distributive Equity.

    Recommender systems fairness evaluation via generalized cross entropy

    Fairness in recommender systems has been considered with respect to sensitive attributes of users (e.g., gender, race) or items (e.g., revenue in a multistakeholder setting). Regardless, the concept has been commonly interpreted as some form of equality – i.e., the degree to which the system is meeting the information needs of all its users in an equal sense. In this paper, we argue that fairness in recommender systems does not necessarily imply equality, but instead it should consider a distribution of resources based on merits and needs. We present a probabilistic framework based on generalized cross entropy to evaluate fairness of recommender systems under this perspective, where we show that the proposed framework is flexible and explanatory by allowing the incorporation of domain knowledge (through an ideal fair distribution) that can help to understand which item or user aspects a recommendation algorithm is over- or under-representing. Results on two real-world datasets show the merits of the proposed evaluation framework in terms of both user and item fairness. This work was supported in part by the Center for Intelligent Information Retrieval and in part by project TIN2016-80630-P (MINECO).
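    As a hedged illustration of the idea (not necessarily the authors' exact formulation), a divergence from the alpha family can score how far the observed distribution of recommendation benefit across user groups sits from an ideal fair distribution; it is zero when the two match and moves away from zero as the skew grows:

```python
# Minimal sketch (hypothetical numbers; a stand-in for the paper's metric):
# compare the distribution of recommendation "benefit" across user groups
# against an ideal fair distribution using an alpha-family divergence.
import numpy as np

def generalized_cross_entropy(p_fair, p_model, alpha=0.5):
    """Alpha-family divergence between an ideal and an observed distribution."""
    p_fair = np.asarray(p_fair, dtype=float)
    p_model = np.asarray(p_model, dtype=float)
    return (np.sum(p_fair**alpha * p_model**(1 - alpha)) - 1) / (alpha * (1 - alpha))

# Hypothetical example: two user groups, ideal split 50/50 of total benefit
# (e.g., sum of recommendation relevance), observed split 70/30.
ideal    = [0.5, 0.5]
observed = [0.7, 0.3]
print(generalized_cross_entropy(ideal, ideal))     # 0.0 -> observed matches the ideal
print(generalized_cross_entropy(ideal, observed))  # negative; larger magnitude = larger skew
```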

    Like trainer, like bot? Inheritance of bias in algorithmic content moderation

    The internet has become a central medium through which `networked publics' express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated for offence. While such systems could help encourage more civil debate, they must navigate inherently normatively contestable boundaries, and are subject to the idiosyncratic norms of the human raters who provide the training data. An important objective for platforms implementing such measures might be to ensure that they are not unduly biased towards or against particular norms of offence. This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence. We train classifiers on comments labelled by different demographic subsets (men and women) to understand how differences in conceptions of offence between these groups might affect the performance of the resulting models on various test sets. We conclude by discussing some of the ethical choices facing the implementers of algorithmic moderation systems, given various desired levels of diversity of viewpoints amongst discussion participants. Comment: 12 pages, 3 figures, 9th International Conference on Social Informatics (SocInfo 2017), Oxford, UK, 13--15 September 2017 (forthcoming in Springer Lecture Notes in Computer Science).
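    A minimal sketch of the comparison strategy (hypothetical toy data and scikit-learn stand-ins, not the paper's exact pipeline): train one offence classifier per annotator subgroup on the same comments with that subgroup's labels, then compare the models' predictions on held-out text to surface inherited norms.

```python
# Minimal sketch (hypothetical data): one offence classifier per annotator
# subgroup; disagreement between the trained models on the same comments
# hints at differing labelling norms inherited from the raters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical corpora: the same comments, labelled offensive (1) or not (0)
# by two different annotator groups.
labelled_by_group = {
    "group_a": (["you are an idiot", "nice point, thanks", "utter rubbish", "I agree"],
                [1, 0, 1, 0]),
    "group_b": (["you are an idiot", "nice point, thanks", "utter rubbish", "I agree"],
                [1, 0, 0, 0]),
}
test_comments = ["what a stupid take", "thanks for sharing", "total rubbish"]

models = {}
for group, (texts, labels) in labelled_by_group.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    models[group] = model

for group, model in models.items():
    print(group, model.predict(test_comments))
```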

    On Measuring Bias in Online Information

    Bias in online information has recently become a pressing issue, with search engines, social networks and recommendation services being accused of exhibiting some form of bias. In this vision paper, we make the case for a systematic approach towards measuring bias. To this end, we discuss formal measures for quantifying the various types of bias, we outline the system components necessary for realizing them, and we highlight the related research challenges and open problems. Comment: 6 pages, 1 figure.
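    One illustrative measure in this spirit (an assumption for illustration, not a measure proposed in the paper): treat the top-k results as a sample of labelled viewpoints and report the total variation distance between their observed shares and a balanced reference distribution.

```python
# Minimal sketch (illustrative only): quantify result-list bias as the
# statistical distance between the observed share of viewpoints in the
# top-k results and a balanced reference share.
from collections import Counter

def viewpoint_bias(result_viewpoints, reference_share):
    """Total variation distance between observed and reference viewpoint shares."""
    counts = Counter(result_viewpoints)
    total = len(result_viewpoints)
    observed = {v: counts.get(v, 0) / total for v in reference_share}
    return 0.5 * sum(abs(observed[v] - reference_share[v]) for v in reference_share)

# Hypothetical top-10 results, each labelled with the viewpoint of its source.
top_10 = ["pro"] * 8 + ["con"] * 2
print(viewpoint_bias(top_10, {"pro": 0.5, "con": 0.5}))  # 0.3 -> skew towards "pro"
```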