866 research outputs found

    Designing the Health-related Internet of Things: Ethical Principles and Guidelines

    Get PDF
    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, a host of ethical concerns are also raised that must be addressed. The inherent sensitivity of health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task to retain control over their data due to the scale, scope and complexity of the systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for the ethical design of H-IoT devices and data protocols.

    The Ethical Implications of Personal Health Monitoring

    Get PDF
    Personal Health Monitoring (PHM) uses electronic devices which monitor and record health-related data outside a hospital, usually within the home. This paper examines the ethical issues raised by PHM. Eight themes describing the ethical implications of PHM are identified through a review of 68 academic articles concerning PHM. The identified themes include privacy, autonomy, obtrusiveness and visibility, stigma and identity, medicalisation, social isolation, delivery of care, and safety and technological need. The issues around each of these are discussed. The system/lifeworld perspective of Habermas is applied to develop an understanding of the role of PHMs as mediators of communication between the institutional and the domestic environment. Furthermore, links are established between the ethical issues to demonstrate that the ethics of PHM involves a complex network of ethical interactions. The paper extends the discussion of the critical effect PHMs have on the patient’s identity and concludes that a holistic understanding of the ethical issues surrounding PHMs will help both researchers and practitioners in developing effective PHM implementations.

    Explaining Explanations in AI

    Get PDF
    Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box’s maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
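    A minimal sketch of the kind of simplified approximation model the abstract refers to: a shallow decision tree trained to mimic a more complex classifier's outputs, which a practitioner can then read directly. The synthetic dataset, the choice of a random forest as the "true" system, and the tree depth are illustrative assumptions, not details taken from the paper.

        # Surrogate-model sketch: fit an interpretable tree to a black box's predictions.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

        # The complex decision-maker whose behaviour we want to approximate.
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # The surrogate is trained on the black box's predictions, not the true labels,
        # so its fidelity measures how well it imitates the system, not the ground truth.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
        print(f"fidelity to the black box: {fidelity:.1%}")
        print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

    Such a tree can answer "what if" questions by inspection, in the spirit of the "do it yourself kit" role the abstract describes, but how faithful it is to the original system remains an empirical question.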

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    Get PDF
    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and of its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
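    The core idea is concrete enough to sketch in code. Below is a minimal, illustrative search for a counterfactual in the spirit of the unconditional counterfactual explanations the abstract proposes: find a nearby input for which a black-box model returns the desired outcome, and report the difference. The toy loan-approval model, the feature names, and the random-search procedure are assumptions made for illustration; they are not the paper's method or data.

        import numpy as np

        rng = np.random.default_rng(0)

        def black_box_approve(x):
            """Toy 'loan approval' model: True means the application is approved."""
            w = np.array([0.04, 1.5, -0.8])  # weights for income (k), years employed, debt ratio
            return 1.0 / (1.0 + np.exp(-(x @ w - 3.0))) > 0.5

        def counterfactual(x, predict, n_samples=20000, scale=1.0):
            """Keep the closest (L1 distance) randomly perturbed point that flips the decision."""
            best, best_dist = None, np.inf
            for _ in range(n_samples):
                candidate = x + rng.normal(0.0, scale, size=x.shape)
                dist = np.abs(candidate - x).sum()
                if dist < best_dist and predict(candidate):
                    best, best_dist = candidate, dist
            return best

        applicant = np.array([40.0, 1.0, 0.9])  # income (k), years employed, debt ratio
        print("approved now:", bool(black_box_approve(applicant)))
        cf = counterfactual(applicant, black_box_approve)
        print("counterfactual:", np.round(cf, 2))
        print("smallest change found:", np.round(cf - applicant, 2))

    The printed difference plays the role of the explanation (for example, how much longer the applicant would have needed to be employed) without exposing the model's internal logic.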

    Transparent, explainable, and accountable AI for robotics

    Get PDF
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.

    Measuring gaps in customer service at spas: Are we offering our customers what they want?

    Full text link
    Currently, there is no academic standard or format that spas use to rate themselves against other spas, nor is there one for spas to rate themselves on improvements based on previous surveys. This paper will offer a format for individual spas and for groups of spas to find what their clientele’s expectations are. It will also determine whether the spa is meeting or exceeding these expectations, and will create a format for all spas to use, based on the SERVQUAL analysis developed by Parasuraman, Berry, and Zeithaml (1991). This tool will allow spas to survey their customers to see whether their offerings are what is expected, and where they need to improve. Once completed, a group of spas could analyze and compare results.
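    A minimal sketch of the gap measure that a SERVQUAL-style instrument produces: for each service dimension, the gap is the customer's perception rating minus their expectation rating, with negative values flagging where the spa falls short. The dimensions and ratings below are made-up illustrative numbers, not survey data from the paper.

        # SERVQUAL-style gap scores: perception minus expectation per dimension (1-7 scale).
        expectations = {"tangibles": 6.5, "reliability": 6.8, "responsiveness": 6.2,
                        "assurance": 6.6, "empathy": 6.0}
        perceptions = {"tangibles": 6.1, "reliability": 6.0, "responsiveness": 6.4,
                       "assurance": 6.5, "empathy": 5.4}

        for dim, expected in expectations.items():
            gap = perceptions[dim] - expected
            status = "meets/exceeds" if gap >= 0 else "falls short"
            print(f"{dim:>14}: gap {gap:+.1f} ({status})")

    Averaging these gaps across respondents, and tracking them across successive surveys, is what would let an individual spa or a group of spas compare results in the way the abstract envisions.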

    Ethics of the health-related internet of things: a narrative review

    Get PDF
    The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many benefits for health and healthcare. However, it also raises a host of ethical problems stemming from the inherent risks of Internet-enabled devices, the sensitivity of health-related data, and their impact on the delivery of healthcare. This paper maps the main ethical problems that have been identified by the relevant literature and identifies key themes in the on-going debate on ethical problems concerning H-IoT.

    Principles alone cannot guarantee ethical AI

    Full text link
    AI Ethics is now a global topic of discussion in academic and policy circles. At least 84 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
    Comment: A previous, pre-print version of this paper was entitled 'AI Ethics - Too Principled to Fail?'

    Academic and Pentecostal: An Appreciation of Roger Stronstad

    Get PDF
    1) I offer a brief narration of Stronstad’s journey toward personal ownership of a rich Pentecostal heritage; 2) I provide a summary of Stronstad’s magnum opus, "The Charismatic Theology of St. Luke"; 3) I move to Stronstad’s later development of Christian vocation as the "Prophethood of All Believers"; 4) I address the structural design of a one-volume commentary co-edited with French Arrington; and 5) I offer a select review of Stronstad’s participation in an ongoing Pentecostal debate concerning biblical interpretation.
    • …