    Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.

    Think Tank Review Issue 68 June 2019

    Governance fix? Power and politics in controversies about governing generative AI

    The launch of ChatGPT in late 2022 led to major controversies about the governance of generative artificial intelligence (AI). This article examines the first international governance and policy initiatives dedicated specifically to generative AI: the G7 Hiroshima process, the Organisation for Economic Co-operation and Development reports, and the UK AI Safety Summit. The analysis is informed by the policy framing and governance literatures, in particular work on technology governance and Responsible Innovation. Emerging governance of generative AI exhibits characteristics of polycentric governance, in which multiple, overlapping centers of decision-making stand in collaborative relationships. However, it is dominated by a limited number of developed countries. The governance of generative AI is mostly framed in terms of risk management, largely neglecting questions about the purpose and direction of innovation and assigning rather limited roles to the public. A "paradox of generative AI governance" is emerging: while this technology is widely used by the public, its governance remains rather narrow. This article coins the term "governance fix" to capture this narrow and technocratic approach to governing generative AI. As an alternative, it suggests embracing the politics of polycentric governance and Responsible Innovation, which highlight democratic and participatory co-shaping of technology for social benefit. Given the highly unequal distribution of power in generative AI, characterized by a concentration of power in a small number of large tech companies, governments have a special role in redressing these power imbalances by enabling wide-ranging public participation in the governance of generative AI.

    Investigating Responsible AI for Scientific Research: An Empirical Study

    Scientific research organizations that are developing and deploying Artificial Intelligence (AI) systems sit at the intersection of technological progress and ethical considerations. The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development, championing core values like fairness, accountability, and transparency. For scientific research organizations, prioritizing these practices is paramount not just for mitigating biases and ensuring inclusivity, but also for fostering trust in AI systems among users and broader stakeholders. In this paper, we explore RAI practices at a scientific research organization, aiming to assess awareness of, and preparedness for, the ethical risks inherent in AI design and development. We adopted a mixed-method research approach, combining a comprehensive survey with follow-up in-depth interviews with selected participants from AI-related projects. Our results reveal knowledge gaps concerning ethical, responsible, and inclusive AI, with limited awareness of the available AI ethics frameworks. They also point to an overarching underestimation of the ethical risks that AI technologies can present, especially when implemented without proper guidelines and governance. Our findings indicate the need for a holistic, multi-tiered strategy to build capability and better support science research teams in responsible, ethical, and inclusive AI development and deployment.