
    Designing the Health-related Internet of Things: Ethical Principles and Guidelines

    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, it also raises a host of ethical concerns that must be addressed. The inherent sensitivity of the health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task in retaining control over their data due to the scale, scope and complexity of the systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for the ethical design of H-IoT devices and data protocols.

    The Ethical Implications of Personal Health Monitoring

    Personal Health Monitoring (PHM) uses electronic devices which monitor and record health-related data outside a hospital, usually within the home. This paper examines the ethical issues raised by PHM. Eight themes describing the ethical implications of PHM are identified through a review of 68 academic articles concerning PHM: privacy, autonomy, obtrusiveness and visibility, stigma and identity, medicalisation, social isolation, delivery of care, and safety and technological need. The issues around each of these are discussed. Habermas' system/lifeworld perspective is applied to develop an understanding of the role of PHMs as mediators of communication between the institutional and the domestic environment. Furthermore, links are established between the ethical issues to demonstrate that the ethics of PHM involves a complex network of ethical interactions. The paper extends the discussion of the critical effect PHMs have on the patient’s identity and concludes that a holistic understanding of the ethical issues surrounding PHMs will help both researchers and practitioners in developing effective PHM implementations.

    Explaining Explanations in AI

    Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do-it-yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
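
    As a rough illustration of the surrogate-model idea described above, the following Python sketch trains a small decision tree to mimic a more complex classifier, yielding human-readable rules that can be probed with "what if" questions. The dataset, model choices, and tree depth are illustrative assumptions, not anything specified in the paper.

        # Illustrative sketch: a simplified surrogate that approximates a black-box classifier.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

        # The "complex system" whose true decision criteria are hard to inspect directly.
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # Surrogate: a shallow tree trained to reproduce the black box's outputs,
        # not the original labels ("all models are wrong but some are useful").
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        # The surrogate's rules are readable and can be used to anticipate behaviour.
        print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))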

    Principles alone cannot guarantee ethical AI

    AI Ethics is now a global topic of discussion in academic and policy circles. At least 84 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement. (A previous, pre-print version of this paper was entitled 'AI Ethics - Too Principled to Fail?'.)

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and of its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
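
    The notion of a counterfactual explanation as the smallest change that flips an outcome can be sketched in a few lines of Python. This is a toy illustration under stated assumptions (a logistic-regression model on synthetic data, single-feature perturbations, and absolute change as the distance measure); it is not the authors' implementation.

        # Toy sketch: find the smallest single-feature change that flips a model's prediction.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=500, n_features=4, random_state=0)
        model = LogisticRegression().fit(X, y)

        def counterfactual(x, model, step=0.05, max_radius=3.0):
            """Grid-search each feature for the smallest shift that changes the predicted class."""
            original = model.predict(x.reshape(1, -1))[0]
            best, best_dist = None, np.inf
            for i in range(len(x)):
                for delta in np.arange(-max_radius, max_radius + step, step):
                    candidate = x.copy()
                    candidate[i] += delta
                    if model.predict(candidate.reshape(1, -1))[0] != original and abs(delta) < best_dist:
                        best, best_dist = candidate, abs(delta)
            return best, best_dist

        cf, dist = counterfactual(X[0], model)
        if cf is not None:
            print("original class:", model.predict(X[0].reshape(1, -1))[0])
            print("counterfactual class:", model.predict(cf.reshape(1, -1))[0])
            print("size of the smallest change found:", round(dist, 2))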

    Transparent, explainable, and accountable AI for robotics

    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.

    The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices

    The technical progression of artificial intelligence (AI) research has been built on breakthroughs in fields such as computer science, statistics, and mathematics. However, in the past decade AI researchers have increasingly looked to the social sciences, turning to human interactions to solve the challenges of model development. Paying crowdsourcing workers to generate or curate data, or data enrichment, has become indispensable for many areas of AI research, from natural language processing to reinforcement learning from human feedback (RLHF). Other fields that routinely interact with crowdsourcing workers, such as Psychology, have developed common governance requirements and norms to ensure research is undertaken ethically. This study explores how, and to what extent, comparable research ethics requirements and norms have developed for AI research and data enrichment. We focus on the approach taken by two leading conferences, ICLR and NeurIPS, and the journal publisher Springer. In a longitudinal study of accepted papers, and via a comparison with Psychology and CHI papers, this work finds that leading AI venues have begun to establish protocols for human data collection, but these are inconsistently followed by authors. Whilst Psychology papers engaging with crowdsourcing workers frequently disclose ethics reviews, payment data, demographic data and other information, similar disclosures are far less common in leading AI venues despite similar guidance. The work concludes with hypotheses to explain these gaps in research ethics practices and with considerations of their implications.

    On the Ethical Implications of Personal Health Monitoring

    Recent years have seen an influx of medical technologies capable of remotely monitoring the health and behaviours of individuals to detect, manage and prevent health problems. Known collectively as personal health monitoring (PHM), these systems are intended to supplement medical care with health monitoring outside traditional care environments such as hospitals, ranging in complexity from mobile devices to complex networks of sensors measuring physiological parameters and behaviours. This research project assesses the potential ethical implications of PHM as an emerging medical technology, amenable to anticipatory action intended to prevent or mitigate problematic ethical issues in the future. PHM fundamentally changes how medical care can be delivered: patients can be monitored and consulted at a distance, eliminating opportunities for face-to-face interaction and potentially undermining the importance of social, emotional and psychological aspects of medical care. The norms evident in this movement may clash with existing standards of ‘good’ medical practice from the perspective of patients, clinicians and institutions. By relating utilitarianism, virtue ethics and theories of surveillance to Habermas’ concept of colonisation of the lifeworld, a conceptual framework is created which can explain how PHM may be allowed to change medicine as a practice in an ethically problematic way. The framework relates the inhibition of virtuous behaviour among practitioners of medicine, understood as a moral practice, to the movement in medicine towards remote monitoring. To assess the explanatory power of the conceptual framework and expand its borders, an empirical study based on qualitative interviews with potential users of PHM in England is carried out. Recognising that the inherent uncertainty of the future undermines the validity of empirical research, a novel epistemological framework based in Habermas’ discourse ethics is created to justify the empirical study. By developing Habermas’ concept of translation into a procedure for assessing the credibility of uncertain normative claims about the future, a novel methodology for empirical ethical assessment of emerging technologies is created and tested. Various methods of analysis are employed, including a review of academic discourses and empirical and theoretical analyses of the moral potential of PHM. Recommendations are made concerning ethical issues in the deployment and design of PHM systems, the analysis and application of PHM data, and the shortcomings of existing research and protection mechanisms in responding to potential ethical implications of the technology. The research described in this thesis was sponsored and funded by the Centre for Computing and Social Responsibility of De Montfort University, and was linked to the research carried out in the FP7 research projects PHM-Ethics (GA 230602) and ETICA (Ethical Issues of Emerging ICT Applications, GA 230318).

    The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default

    In recent years fairness in machine learning (ML) has emerged as a highly active area of research and development. Most approaches define fairness in simple terms, where fairness means reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms, through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through the enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and to push future discussion more towards opportunities for substantive equality and away from strict egalitarianism by default. N.B. Shortened abstract; see the paper for the full abstract.
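
    The "minimum rate constraint" idea can be illustrated with a small per-group check against an absolute floor, rather than only comparing groups to each other. The Python below is a hedged sketch: the recall metric, the 0.8 floor, and the synthetic data are assumptions made for illustration, not the paper's specification.

        # Sketch: "levelling up" by enforcing a minimum acceptable performance per group.
        import numpy as np

        def recall(y_true, y_pred):
            positives = y_true == 1
            return float((y_pred[positives] == 1).mean()) if positives.any() else 1.0

        def meets_minimum_rate(y_true, y_pred, groups, metric, floor=0.8):
            """Return per-group scores and whether every group clears the floor."""
            scores = {g: metric(y_true[groups == g], y_pred[groups == g])
                      for g in np.unique(groups)}
            return scores, all(score >= floor for score in scores.values())

        rng = np.random.default_rng(0)
        y_true = rng.integers(0, 2, 200)
        y_pred = rng.integers(0, 2, 200)
        groups = rng.choice(["A", "B"], 200)

        scores, ok = meets_minimum_rate(y_true, y_pred, groups, recall, floor=0.8)
        print(scores, "meets the minimum rate constraint:", ok)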

    Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

    This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminates. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement that aligns with the European Court of Justice's "gold standard." Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law. N.B. Abridged abstract.
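
    Conditional demographic disparity lends itself to a short computation: measure demographic disparity within each stratum of a legitimate conditioning attribute, then average across strata weighted by stratum size. The Python below is a sketch under assumed column names and toy data, not the article's reference implementation.

        # Sketch: conditional demographic disparity (CDD), computed per stratum and
        # averaged with weights proportional to stratum size.
        import pandas as pd

        def demographic_disparity(part, outcome, group, protected):
            """Share of the protected group among the rejected minus among the accepted."""
            rejected, accepted = part[part[outcome] == 0], part[part[outcome] == 1]
            if rejected.empty or accepted.empty:
                return None
            return (rejected[group] == protected).mean() - (accepted[group] == protected).mean()

        def conditional_demographic_disparity(df, outcome, group, protected, stratum):
            weighted, total = 0.0, 0
            for _, part in df.groupby(stratum):
                dd = demographic_disparity(part, outcome, group, protected)
                if dd is None:
                    continue
                weighted += dd * len(part)
                total += len(part)
            return weighted / total if total else None

        # Toy data: loan outcomes conditioned on a qualification stratum.
        df = pd.DataFrame({
            "approved":  [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
            "group":     ["a", "a", "b", "b", "a", "b", "a", "b", "a", "b"],
            "qualified": [1, 1, 1, 1, 0, 0, 0, 0, 1, 0],
        })
        print(conditional_demographic_disparity(df, "approved", "group", "a", "qualified"))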