
    How I Would have been Differently Treated: Discrimination Through the Lens of Counterfactual Fairness

    The widespread use of algorithms for prediction-based decisions urges us to consider what it means for a given act or practice to be discriminatory. Building on work by Kusner and colleagues in machine learning, we propose a counterfactual condition as a necessary requirement on discrimination. To demonstrate the philosophical relevance of the proposed condition, we consider two prominent accounts of discrimination in the recent literature, by Lippert-Rasmussen and Hellman respectively, that do not logically imply our condition, and show that they face important objections. Specifically, Lippert-Rasmussen’s definition proves to be over-inclusive, classifying some acts or practices as discriminatory when they are not, whereas Hellman’s account lacks explanatory power precisely insofar as it does not countenance a counterfactual condition on discrimination. By defending the necessity of our counterfactual condition, we set the conceptual limits for justified claims about the occurrence of discriminatory acts or practices in society, with immediate applications to the ethics of algorithmic decision-making.
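
    For concreteness, the notion of counterfactual fairness due to Kusner and colleagues, on which the proposed condition builds, is standardly stated as follows (a sketch of their formulation; $U$ denotes the latent background variables of an assumed causal model, $A$ the protected attribute, and $X$ the remaining attributes): a predictor $\hat{Y}$ is counterfactually fair if, for every individual with $X = x$ and $A = a$, and for all outcomes $y$ and counterfactual values $a'$,

    \[ P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) \;=\; P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big), \]

    that is, intervening on the protected attribute alone should leave the distribution of the prediction unchanged.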

    Investigating Employees’ Concerns and Wishes Regarding Digital Stress Management Interventions With Value Sensitive Design: Mixed Methods Study

    Background: Work stress places a heavy economic and disease burden on society. Recent technological advances include digital health interventions that help employees prevent and manage their stress at work effectively. Although such digital solutions come with an array of ethical risks, especially if they involve biomedical big data, the incorporation of employees' values in their design and deployment has been widely overlooked.
    Objective: To bridge this gap, we used the value sensitive design (VSD) framework to identify relevant values concerning a digital stress management intervention (dSMI) at the workplace, assess how users comprehend these values, and derive specific requirements for an ethics-informed design of dSMIs. VSD is a theoretically grounded framework that front-loads ethics by accounting for values throughout the design process of a technology.
    Methods: We conducted a literature search to identify relevant values of dSMIs at the workplace. To understand how potential users comprehend these values and to derive design requirements, we conducted a web-based study with employees of a Swiss company that contained both closed- and open-ended questions, allowing quantitative and qualitative analyses.
    Results: The literature search identified the values health and well-being, privacy, autonomy, accountability, and identity. Statistical analysis of 170 responses from the web-based study revealed that the intention to use and the perceived usefulness of a dSMI were moderate to high. Employees' moderate to high health and well-being concerns included worries that a dSMI would not be effective or would even amplify their stress levels. Privacy concerns were also rated at the higher end of the score range, whereas concerns regarding autonomy, accountability, and identity were rated lower. Moreover, a personalized dSMI with a monitoring system involving machine learning-based analysis of data led to significantly higher privacy (P=.009) and accountability concerns (P=.04) than a dSMI without a monitoring system. In addition, integrability, user-friendliness, and digital independence emerged as novel values from the qualitative analysis of 85 text responses.
    Conclusions: Although most surveyed employees were willing to use a dSMI at the workplace, there were considerable health and well-being concerns regarding effectiveness and problem perpetuation. For the minority of employees who value digital independence, a nondigital offer might be more suitable. In terms of the type of dSMI, privacy and accountability concerns must be particularly well addressed if a machine learning-based monitoring component is included. To help mitigate these concerns, we propose specific requirements to support the VSD of a dSMI at the workplace. The results of this work and our research protocol will inform future research on VSD-based interventions and further advance the integration of ethics in digital health.
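
    The abstract does not specify which statistical test produced the reported P values; the following is a minimal sketch of one plausible analysis, assuming independent concern ratings for the two dSMI scenarios and a nonparametric comparison (all data and variable names below are hypothetical):

    from scipy.stats import mannwhitneyu

    # Hypothetical 1-5 concern ratings for the two scenarios; the study's
    # actual data layout and choice of test are not given in the abstract.
    privacy_with_monitoring = [4, 5, 3, 4, 4, 5, 3]
    privacy_without_monitoring = [2, 3, 3, 2, 4, 2, 3]

    # One-sided test: are concern ratings higher with ML-based monitoring?
    stat, p = mannwhitneyu(privacy_with_monitoring, privacy_without_monitoring,
                           alternative="greater")
    print(f"U = {stat}, P = {p:.3f}")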

    Ethical, Legal and Social Issues of Big Data - A Comprehensive Overview

    The ELSI White Paper is the final achievement of the ELSI Task Force of the National Research Programme “Big Data” (NRP 75). It is an informational document that provides an overview of the key ethical, legal, and social challenges of big data and offers guidance for the collection, use, and sharing of big data. The document aims to bring together the expertise of the ELSI Task Force members rather than exhaustively covering all topics in big data relating to ethical, legal, and social issues (ELSI). The white paper comprises two parts: main articles and commentaries on them. The main articles give an overview of the major concerns associated with the use of big data, based on the assessment of the participating researchers. The commentary articles either examine in depth one or more of the issues presented in the main articles or highlight other issues that their authors consider relevant but that are not covered in the main articles. The main articles are divided into three sections corresponding to the three ELSI levels of analysis. In the section on ethics, Marcello Ienca explores the threat of big data to ethics commissions, privacy rights, personal autonomy, and equality in the healthcare sector and biomedical research. Bernice Elger focuses on the need to address informed consent differently and to complement it with additional mechanisms in the big data context. In the legal section, Christophe Schneble explores whether current Swiss data protection laws adequately regulate and protect individuals’ data. Eleonora Viganò analyses the threat of big data to state sovereignty and explores the two contrasting meanings of the term “digital sovereignty” in the context of big data. In the section on social issues, Markus Christen addresses the big data divide, namely the uneven distribution of benefits and harms from big data, and the connected issue of the transparency asymmetry between data givers and data owners. Michele Loi delves into the debate on fair algorithms, presenting the risks of discriminating against certain groups when adopting big data-based predictive algorithms, such as those for predicting inmates’ recidivism. The second part of the ELSI White Paper contains three commentaries. In the first, Mira Burri focuses on the viability of new approaches to global trade governance that seek to address big data issues and makes recommendations for a better-informed and more proactive Swiss approach. In the second commentary, David Shaw explores the lack of protection for vulnerable groups in big data research and the temporospatial and moral distance between researchers and participants that increases the risk of exploitation. In the third commentary, Christian Hauser tackles big data from the perspective of business ethics and provides guidance to companies employing big data.
    Keywords: big data, informed consent, data protection law, big data divide, digital sovereignty, health data, discrimination, big data research, big data in industry
    JEL Classification: O3, F1

    Il corretto amor di sé come superamento dell'individualismo e dell'egoismo nel soggetto di Adam Smith [Proper self-love as the overcoming of individualism and egoism in Adam Smith's subject]

    Contrary to the traditional interpretation of Adam Smith, moral values and economic interests coexist coherently in the Smithian subject. The pivot of this subject's unity lies in the virtue of prudence. This virtue, conceived as the pursuit of those personal interests that the impartial spectator has approved, and considered jointly with the individual's process of moral maturation, is the guide the subject adopts in determining what it owes to itself and what it owes to others within its own personal life plan.

    The individual’s later self is less autonomous and a stranger: the impact of time in advance directives

    In intertemporal choices, time alters the distribution of decisional power over one’s life and detaches the individual from her later self. These two effects of time increase the later self’s vulnerability when its will conflicts with the earlier self’s will as expressed in advance directives (ADs). For this reason, ADs should take into account the impact of time on the future self by including strategies that protect the latter.

    The societal and ethical relevance of computational creativity

    In this paper, we provide a philosophical account of the value of creative systems for individuals and society. We characterize creativity in very broad philosophical terms, encompassing natural, existential, and social creative processes such as natural evolution and entrepreneurship, and explain why creativity understood in this way is instrumental to advancing human well-being in the long term. We then explain why current mainstream AI tends to be anti-creative, which means that there are moral costs to employing this type of AI in human endeavors, although computational systems that involve creativity are on the rise. In conclusion, there is an argument for ethics to be more hospitable to creativity-enabling AI, even though such AI can be in a trade-off with other values promoted in AI ethics, such as explainability and accuracy.

    In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions

    Machine learning (ML) models and algorithms, the real engines of the artificial intelligence (AI) revolution, are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AI, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human-human and human-AI interactions. Starting with a quick overview of the existing accounts of trust, with special attention to Taddeo’s concept of “e-trust,” we discuss all the components of the proposed model and the reasons to trust in human-AI interactions in an example of relevance for business organizations. We end this contribution with an analysis of the epistemic and pragmatic reasons for trust in human-AI interactions and with a discussion of the kinds of normativity involved in the trustworthiness of AIs.

    Cybersecurity of critical infrastructure

    This chapter provides a political and philosophical analysis of the values at stake in ensuring cybersecurity for critical infrastructures. It presents a review of the boundaries of cybersecurity in national security, with a focus on the ethics of surveillance for protecting critical infrastructures and the use of AI. A bibliographic analysis of the literature up to 2016 is used to identify and discuss the cybersecurity value conflicts and ethical issues in national security. This is integrated with an analysis of the most recent literature on cyber-threats to national infrastructure and the role of AI. The chapter demonstrates that the increased connectedness of digital and non-digital infrastructure intensifies the trade-offs between the values identified in the earlier literature, and supports this thesis with the analysis of four case studies.

    People are not coins. Morally distinct types of predictions necessitate different fairness constraints

    A recent paper (Hedden 2021) has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. Hedden proves this by discussing a special case of a fair prediction. In our paper, we show that Hedden's argument does not hold for the most common kind of predictions used in data science, which are about people and based on data from similar people; we call these human-group-based practices. We argue that there is a morally salient distinction between human-group-based practices and those that are based on data of only one person, which we call human-individual-based practices. Thus, what may be a necessary condition for the fairness of human-group-based practices may not be a necessary condition for the fairness of human-individual-based practices, on which Hedden's argument is based. Accordingly, the group fairness metrics discussed in the machine learning literature may still be relevant for most applications of prediction-based decision making.
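
    The paper does not endorse one particular metric; as a minimal illustration of the kind of group fairness metrics at issue, the sketch below computes two common ones, statistical parity difference and equal opportunity difference, on made-up predictions (all names and data are hypothetical):

    import numpy as np

    def statistical_parity_diff(y_pred, group):
        """Difference in positive-prediction rates between the two groups."""
        g = np.asarray(group, dtype=bool)
        y = np.asarray(y_pred)
        return y[g].mean() - y[~g].mean()

    def equal_opportunity_diff(y_true, y_pred, group):
        """Difference in true-positive rates between the two groups."""
        g = np.asarray(group, dtype=bool)
        yt, yp = np.asarray(y_true), np.asarray(y_pred)
        return yp[g & (yt == 1)].mean() - yp[~g & (yt == 1)].mean()

    # Made-up binary labels, predictions, and group membership
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    group  = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = group A, 0 = group B
    print(statistical_parity_diff(y_pred, group))
    print(equal_opportunity_diff(y_true, y_pred, group))

    A group fairness constraint then requires such differences to be (approximately) zero; Hedden's target is precisely the claim that constraints of this form are necessary for fairness.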