7 research outputs found

    Improving fairness in machine learning systems: What do industry practitioners need?

    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs. Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).

    Computational propaganda: exploring mitigation strategies for political parties in online brand contexts

    Abstract: This research delves into the phenomenon of computational propaganda on social media and draws on social media specialists from some of South Africa’s best-performing brands to explore strategies political parties can employ to mitigate crises that occur as a result of computational propaganda. This research is of importance given that South Africa is entering its first National Elections since the identification of computational propaganda as a threat to electoral processes, and to date no research has explored this phenomenon within the South African context. The research entailed semi-structured interviews with eight social media managers, selected using a purposive non-probability sampling method. In addition, a communications head from South Africa’s largest political party was interviewed in order to assess what strategies are already in place. These two sets of data were consolidated, resulting in four potential strategies to mitigate the risk of computational propaganda. The four mitigation strategies are grouped into two approaches: the first relates to preventative measures political parties can take, namely protecting brand identity and aligning communications; the second relates to defensive measures political party brands can take in the event of a computational propaganda incident, namely online reputation management and integration of communication. The research further uncovered contextual considerations political party brands must take into account before employing strategies to mitigate crises that arise as a result of computational propaganda. M.A. (Communication Studies)

    Algorithmic Decision-making, Discrimination and Disrespect: An Ethical Inquiry

    The increasing use of algorithmic decision-making systems has raised significant legal and ethical concerns in several contexts of application, such as hiring, policing and sentencing. A range of literature in AI ethics shows how predictions and decisions generated on the basis of patterns in historic data may lead to discrimination against different demographic groups – those that are legally protected and/or in positions of vulnerability, in particular. Both in the literature and in public discourse, objectionable algorithmic discrimination is commonly identified as involving discriminatory intent, the use of sensitive or inaccurate information in decision-making, or the unintentional reproduction of systemic inequality. Some claim that algorithmic discrimination is inherently objectifying or unfair, while others take issue with the use of statistical evidence in high-stakes decision-making altogether. As this list of claims illustrates, the discourse exhibits considerable discrepancies regarding two questions: (i) how does discrimination arise in the development and use of algorithmic decision-making systems, and (ii) what makes a given instance of algorithmic discrimination impermissible? Notably, the discussion around biased algorithms seems to have inherited conceptual problems that have long characterized the discussion of discrimination in legal and moral theory. This study approaches the phenomenon of algorithmic discrimination from the point of view of the ethics of discrimination. Through exploring Benjamin Eidelson’s pluralistic, disrespect-based theory of discrimination in particular, this study argues that while some instances may be wrong due to issues with accuracy, unfairness, and algorithmic bias, the wrongness of algorithmic discrimination cannot be exhaustively explained by reference to these issues alone. This study suggests that some prevalent issues with discrimination in algorithmic decision-making can be traced to distinct choices and processes pertaining to the design, development and human-controlled use of algorithmic systems. However, as machine learning algorithms perform statistical discrimination by default, biased design choices and issues with the “human-in-the-loop” enactment of algorithmic outputs cannot offer the full picture as to why algorithmic decision-making may have a morally objectionable disparate impact on different demographic groups. Applying Eidelson’s account – albeit with minor modifications – the wrongness of algorithmic discrimination can be explained by reference to the harm it produces, the demeaning social meaning it expresses, and the disrespectful social conduct it sustains and exacerbates by reinforcing stigma. Depending on context, algorithmic discrimination may produce significant individual and societal harms, and it may reproduce patterns of behavior that go against the moral requirement that we treat each other both as moral equals and as autonomous individuals. The account also explains why formally similar but idiosyncratic instances of algorithmic discrimination, which disadvantage groups that are not specified by socially salient traits such as gender, may not be morally objectionable. A possible problem with this account stems from the lack of transparency in algorithmic decision-making: in constrained cases, algorithmic discrimination may be morally neutral if it is conducted in secret. While this conclusion is striking, the account is both more robust than alternative accounts and defensible if one understands transparency as a precondition for the satisfaction of multiple other ethical principles, such as trust, accountability, and integrity. The study contributes to the discussion of discrimination in data mining and algorithmic decision-making by providing insight both into how discrimination may take place in novel technological contexts and into how we should evaluate the morality of algorithmic decision-making in terms of dignity, respect, and harm. While room is left for further study, this work serves to clarify the conceptual ground necessary for engaging in an adequate moral evaluation of instances of algorithmic discrimination.