    The impact of algorithmic decision-making processes on young people’s well-being

    This study aims to capture the online experiences of young people when interacting with algorithm-mediated systems, and the impact of those systems on their well-being. We draw on qualitative (focus group) and quantitative (survey) data from a total of 260 young people to bring their opinions to the forefront while eliciting discussion. The results of the study revealed young people’s positive as well as negative experiences of using online platforms. Benefits such as convenience, entertainment and personalised search results were identified. However, the data also revealed participants’ concerns about their privacy, safety and trust when online, which can have a significant impact on their well-being. We conclude by recommending that online platforms acknowledge and act on their responsibility to protect the privacy of their young users, recognising the significant developmental milestones that this group experiences during these early years and the impact that algorithm-mediated systems may have on them. We argue that governments need to introduce policies that require technologists and others to embed the safeguarding of users’ well-being within the core of the design of Internet products and services, so as to improve the user experience and psychological well-being of all, but especially of children and young people.

    iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

    In an increasing number of applications, people are rated and ranked for the purposes of algorithmic decision making, typically based on machine learning. Research on how to incorporate fairness into such tasks has prevalently pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness with the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, and disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting. Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
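    A minimal sketch of the idea described above, not the paper’s exact probabilistic formulation: learn a low-rank map that reconstructs only the task-relevant attributes while pulling together the representations of individuals who are similar on those attributes. The relevant/protected column split, the Gaussian similarity kernel and the weight lam are illustrative assumptions.

```python
# Sketch: individually fair low-rank representations via a linear
# encoder/decoder (a stand-in for the paper's probabilistic model).
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 6, 2                     # records, input dims, representation dims
X = rng.normal(size=(n, d))
relevant = [0, 1, 2, 3]                 # assumed: qualification-like columns
protected = [4, 5]                      # assumed: gender-like columns, never used below
R = X[:, relevant]                      # utility target: task-relevant columns only

# Similarity graph over task-relevant attributes; protected columns are ignored.
dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
S = np.exp(-dist ** 2)
np.fill_diagonal(S, 0.0)
Lap = np.diag(S.sum(axis=1)) - S        # graph Laplacian; tr(Z' Lap Z) penalizes
                                        # dissimilar representations for similar people

W = rng.normal(scale=0.1, size=(d, k))              # encoder
V = rng.normal(scale=0.1, size=(k, len(relevant)))  # decoder
lr, lam = 1e-3, 0.1                     # step size and fairness weight (assumed)

for _ in range(500):
    Z = X @ W
    err = Z @ V - R                     # reconstruction error (utility term)
    gW = 2 * X.T @ err @ V.T + lam * 2 * X.T @ Lap @ Z   # grad of utility + fairness
    gV = 2 * Z.T @ err
    W -= lr * gW
    V -= lr * gV

Z_fair = X @ W  # representations for downstream classifiers and rankers
```

    Downstream models are then trained on Z_fair instead of X, so potentially discriminating columns never reach the classifier directly, and applicants with similar qualifications receive similar representations.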

    Gender discrimination in algorithmic decision-making

    Most countries prohibit the use of Gender when deciding whether or not to grant credit to prospective borrowers. The increasing application of automated, algorithm-based decision-making raises a series of questions as to how discrimination may arise and how it can be avoided. In this paper we analyse a unique proprietary dataset on car loans from an EU bank, with the objective of understanding whether the minority status of female borrowers amplifies gender bias, and whether there are ways to mitigate it. The initial results show that Gender is statistically significant and that women have a lower probability of default. However, if Gender is excluded from the model, women have a lower chance of being accepted for credit than when it is included. Women constitute only a quarter of the sample, and we investigate whether this may lead to a representation bias that could amplify the discrimination. We experiment with under- and over-sampling and explore the effect of balancing the training set on mitigating discrimination. Logistic regression is used as a benchmark, with further plans to include random forests. The results are applicable to other situations where predictive models based on historical data are used for decision-making. The presentation will discuss initial results and work in progress. Andreeva, G.; Matuszyk, A. (2018). Gender discrimination in algorithmic decision-making. In: 2nd International Conference on Advanced Research Methods and Analytics (CARMA 2018). Editorial Universitat Politècnica de València. 251-251. https://doi.org/10.4995/CARMA2018.2018.8312
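    The balancing experiment lends itself to a short sketch. The paper’s car-loan data is proprietary, so the snippet below uses synthetic data with the quarter share of women mentioned above; the data-generating process and all numbers are illustrative, not the paper’s results.

```python
# Sketch: over-sample the minority group, then compare logistic regression
# models with and without Gender. Synthetic stand-in for the bank data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
n = 4000
gender = rng.binomial(1, 0.25, n)       # 1 = female, ~25% as in the sample
income = rng.normal(size=n)
# Assumed data-generating process: women default less, as the paper reports.
default = rng.binomial(1, 1.0 / (1.0 + np.exp(income + 0.3 * gender)))

X_full = np.column_stack([income, gender])
X_nogender = income.reshape(-1, 1)

# Random over-sampling: duplicate female records until the groups are balanced.
idx_f, idx_m = np.where(gender == 1)[0], np.where(gender == 0)[0]
idx_f_up = resample(idx_f, replace=True, n_samples=len(idx_m), random_state=0)
balanced = np.concatenate([idx_m, idx_f_up])

for name, X in [("with Gender", X_full), ("without Gender", X_nogender)]:
    model = LogisticRegression().fit(X[balanced], default[balanced])
    p_women = model.predict_proba(X[gender == 1])[:, 1].mean()
    p_men = model.predict_proba(X[gender == 0])[:, 1].mean()
    print(f"{name}: mean P(default) women={p_women:.3f}, men={p_men:.3f}")
```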

    iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

    In an increasing number of applications, people are rated and ranked for the purposes of algorithmic decision making, typically based on machine learning. Research on how to incorporate fairness into such tasks has prevalently pursued the paradigm of group fairness: ensuring that each ethnic or social group receives its fair share in the outcome of classifiers and rankings. In contrast, the alternative paradigm of individual fairness has received relatively little attention. This paper introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, and disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. Since the case for fairness is ubiquitous across many tasks, we aim to learn general representations that can be applied to arbitrary downstream use cases. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on two real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
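    The similarity requirement in this notion of individual fairness is often formalised as a Lipschitz condition, following Dwork et al. (2012); the symbols below are generic rather than the paper’s own notation.

```latex
% Individual fairness as a Lipschitz condition (after Dwork et al., 2012).
% d compares individuals on task-relevant attributes only (protected
% attributes excluded); D compares outcomes or learned representations.
\[
  D\bigl(M(x_i), M(x_j)\bigr) \;\le\; L \cdot d\bigl(x_i, x_j\bigr)
  \qquad \text{for all individuals } x_i, x_j
\]
```

    Read this way, two applicants with identical job qualifications but different genders must receive (nearly) identical outcomes.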

    Highly Accurate, But Still Discriminatory

    The study aims to identify whether algorithmic decision making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision making to save costs and increase efficiency. Moreover, algorithmic decision making is often considered fairer than human decisions, which are subject to social prejudices. Recent publications, however, suggest that the fairness of algorithmic decision making is not guaranteed. To investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that the under-representation of certain genders and ethnicities in the training data set leads to an unpredictable over- and/or underestimation of the likelihood of inviting members of these groups to a job interview. Furthermore, the algorithms replicate the existing inequalities in the data set. Firms therefore have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced.
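    A simple diagnostic in the spirit of this finding is to compare per-group prediction error on held-out data, which exposes group-specific over- or underestimation. The snippet below uses synthetic stand-in data, since the 10,000-clip data set is not reproduced here; the group shares, features and choice of regressor are assumptions.

```python
# Sketch: per-group error check for a scoring model trained on imbalanced data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
group = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])  # imbalance
skill = rng.normal(size=n)
invite_score = skill + rng.normal(scale=0.3, size=n)  # "true" interview-worthiness

X = np.column_stack([skill, (group == "minority").astype(float)])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, invite_score, group, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Systematic per-group mean error reveals over-/underestimation of one group.
for g in ("majority", "minority"):
    mask = g_te == g
    err = pred[mask] - y_te[mask]
    print(f"{g}: share={mask.mean():.2f}, "
          f"mean error={err.mean():+.3f}, rmse={np.sqrt((err ** 2).mean()):.3f}")
```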

    Data Analytics (Ab)Use in Healthcare Fraud Audits

    This study explores how government-adopted audit data analytics tools promote the abuse of power by auditors, enabling politically sensitive processes that encourage industry-wide normalization of behavior. In an audit setting, we investigate how a governmental organization uses algorithmic decision-making to alter power relationships and effect organizational and industry-wide change. While prior research has identified discriminatory threats emanating from the deployment of algorithmic decision-making, the effects of algorithmic decision-making on inherently imbalanced power relationships have received scant attention. Our results provide empirical evidence of how systemic and episodic power relationships strengthen each other, enabling the governmental organization to effect social change that might be too politically prohibitive to enact directly. Overall, the results suggest that the use of algorithmic decision-making, and the power shifts it produces, can have negative effects, and that these effects cast a different light on the level of success purportedly attained through auditors’ use of data analytics.

    A ‘Little Ethics’ for Algorithmic Decision-Making

    In this paper we present a preliminary framework aimed at navigating and motivating the ethical aspects of AI systems. Following Ricoeur’s ethics, we highlight distinct levels of analysis, emphasising the need for personal commitment and intersubjectivity, and suggest connections with existing AI ethics initiatives.