Canaries in Technology Mines: Warning Signs of Transformative Progress in AI
In this paper we introduce a methodology for identifying early warning signs of transformative progress in AI, to aid anticipatory governance and research prioritisation. We propose using expert elicitation methods to identify milestones in AI progress, followed by collaborative causal mapping to identify key milestones which underpin several others. We call these key milestones ‘canaries’, based on the colloquial phrase ‘canary in a coal mine’ used to describe advance warning of an extreme event: in this case, advance warning of transformative AI. After describing and motivating our proposed methodology, we present results from an initial implementation to identify canaries for progress towards high-level machine intelligence (HLMI). We conclude by discussing the limitations of this method, possible future improvements, and how we hope it can be used to improve monitoring of future risks from AI progress.
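As a purely illustrative sketch (not the paper's implementation), the idea of key milestones that underpin several others can be pictured as a reachability question on a directed causal map: a milestone is a stronger 'canary' candidate the more downstream milestones depend on it. The milestone names, edges and the networkx-based ranking below are all invented for illustration.

import networkx as nx

# Toy causal map, written by hand for illustration only.
# A directed edge A -> B means "progress on A underpins progress on B".
causal_map = nx.DiGraph([
    ("abstract reasoning", "flexible task transfer"),
    ("abstract reasoning", "long-horizon planning"),
    ("few-shot learning", "flexible task transfer"),
    ("flexible task transfer", "general-purpose assistants"),
    ("long-horizon planning", "general-purpose assistants"),
])

# Rank milestones by how many downstream milestones they underpin
# (their descendants in the causal map); high counts suggest canary candidates.
for milestone in sorted(causal_map.nodes,
                        key=lambda n: len(nx.descendants(causal_map, n)),
                        reverse=True):
    count = len(nx.descendants(causal_map, milestone))
    print(f"{milestone}: underpins {count} other milestones")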
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them. Work supported by the Nuffield Foundation and the Leverhulme Trust.
AI and Data-driven Targeting
Thinkpiece on 'AI and Data-driven Targeting' for the UK Government's Centre for Data Ethics and Innovation. Leverhulme Trust.
The tension between openness and prudence in AI research
This paper explores the tension between openness and prudence in AI research, evident in two core principles of the Montréal Declaration for Responsible AI. While the AI community has strong norms around open sharing of research, concerns about the potential harms arising from misuse of research are growing, prompting some to consider whether the field of AI needs to reconsider publication norms. We discuss how different beliefs and values can lead to differing perspectives on how the AI community should manage this tension, and explore implications for what responsible publication norms in AI research might look like in practice.
Privacy, Autonomy, and Personalised Targeting: rethinking how personal data is used
Technological advances are bringing new light to privacy issues and changing the reasons why privacy is important. These advances have changed not only the kind of personal data that is available to be collected, but also how that personal data can be used by those who have access to it. We are particularly concerned with how information about personal attributes inferred from collected data (such as online behaviour) can be used to tailor messages and services to specific individuals or groups. This kind of ‘personalised targeting’ has the potential to influence individuals’ perceptions, attitudes, and choices in unprecedented ways. In this paper, we argue that because it is becoming easier for companies to use collected data for influence, threats to privacy are increasingly also threats to personal autonomy—an individual’s ability to reflect on and decide freely about their values, actions, and behaviour, and to act on those choices. While increasing attention is directed to the ethics of how personal data is collected, we make the case that a new ethics of privacy needs also to think more rigorously about how personal data may be used, and about its potential impact on personal autonomy.
We begin by briefly reviewing existing work on the value of privacy and its link to autonomy, before outlining how recent technological advances are changing this relationship by changing the ways that personal information can be used to influence behaviour. We introduce the idea of ‘personalised targeting’ and discuss its implications for autonomy, before presenting some considerations for determining when this kind of targeting is acceptable and when it is not. Finally, we conclude with some practical implications for thinking about the ethics of how data is used.
Why Value Judgements Should Not Be Automated
AI technologies are already being used for a number of purposes in public services, including to automate (parts of) decision processes and to make recommendations and predictions in support of human decisions. Increasing application of AI in public services therefore has the potential to impact several of the Seven Principles of Public Life, presenting new challenges for public servants in upholding those values. We believe AI is particularly likely to impact the principles of Objectivity, Openness, Accountability and Leadership. Algorithmic bias has the potential to threaten the objectivity of public sector decisions, while several forms of opacity in AI systems raise challenges for openness in public services; in turn, this impacts the ability of public servants to be accountable and exercise proper leadership. Leverhulme Trust Centre Grant (Leverhulme Centre for the Future of Intelligence).
Submission of Evidence to The House of Lords Select Committee on Risk Assessment and Risk Planning
In this submission we:
(1) introduce our approach to defining and classifying extreme risks, including global catastrophic risks and existential risks (addressing question 1: "What do you understand the term ‘extreme risk’ to mean?");
(2) highlight two important cross-cutting themes emerging from our research, which are key to grounding risk assessment and management: (a) the systemic nature and political, technological and environmental context of extreme risks, and (b) the relationship between extreme risk and questions of global justice (addressing question 2: "Are there types of risks to which the UK is particularly vulnerable or for which it is poorly prepared?" and question 6: "How effectively do current ways of characterising risks support evidence-based policy decisions?"); and
(3) summarise our research findings on key tools for appropriately responding to extreme risks, including tools for foresight (addressing question 5: "How can the Government ensure that it identifies and considers as wide a range of risks as possible?"); tools for crafting intervention policies and ensuring their effective implementation (addressing question 7: "How effectively do Departments mitigate risks?"); tools for resilience (addressing question 10: "What challenges are there in developing resilience capability?"); and tools for governance (which go beyond the questions posed by the committee to consider the UK's role in the global governance required to address extreme risks).
AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues
AI safety often analyses a risk or safety issue, such as interruptibility, under a particular AI paradigm, such as reinforcement learning. But what is an AI paradigm, and how does it affect the understanding and implications of the safety issue? Is AI safety research covering the most representative paradigms and the right combinations of paradigms with safety issues? Will current research directions in AI safety be able to anticipate more capable and powerful systems yet to come? In this paper we analyse these questions, introducing a distinction between two types of paradigms in AI: artefacts and techniques. We then use data on research and media documents from AI Topics, an official publication of the AAAI, to examine how safety research is distributed across artefacts and techniques. We observe that AI safety research is not sufficiently anticipatory and is heavily weighted towards certain research paradigms. We identify a need for AI safety to be more explicit about the artefacts and techniques to which a particular issue may be applicable, in order to identify gaps and cover a broader range of issues.
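The gap-finding step described here can be thought of as cross-tabulating safety documents by artefact and technique and looking for sparse or empty cells. A minimal sketch under invented data (the labels and documents below are hypothetical and not drawn from AI Topics):

import pandas as pd

# Hypothetical corpus of safety documents, each tagged with the AI artefact and
# technique it assumes (labels invented for illustration).
docs = pd.DataFrame({
    "artefact":  ["agent", "agent", "classifier", "agent", "robot"],
    "technique": ["reinforcement learning", "reinforcement learning",
                  "supervised learning", "planning", "reinforcement learning"],
    "issue":     ["interruptibility", "reward hacking", "distributional shift",
                  "interruptibility", "safe exploration"],
})

# Coverage matrix over artefact x technique; zero or missing cells mark paradigm
# combinations with no safety research in the (toy) corpus, i.e. potential gaps.
coverage = pd.crosstab(docs["artefact"], docs["technique"])
print(coverage)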
Real-time monitoring and feedback to improve shared decision-making for surgery (the ALPACA Study): protocol for a mixed-methods study to inform co-development of an inclusive intervention
Introduction: High-quality shared decision-making (SDM) is a priority of health services, but only achieved in a minority of surgical consultations. Improving SDM for surgical patients may lead to more effective care and moderate the impact of treatment consequences. There is a need to establish effective ways to achieve sustained and large-scale improvements in SDM for all patients whatever their background. The ALPACA Study aims to develop, pilot and evaluate a decision support intervention that uses real-time feedback of patients’ experience of SDM to change patients’ and healthcare professionals’ decision-making processes before adult elective surgery and to improve patient and health service outcomes.
Methods and analysis: This protocol outlines a mixed-methods study, involving diverse stakeholders (adult patients, healthcare professionals, members of the community) and three National Health Service (NHS) trusts in England. Detailed methods for the assessment of the feasibility, usability and stakeholder views of implementing a novel system to monitor the SDM process for surgery automatically and in real time are described. The study will measure the SDM process using validated instruments (CollaboRATE, SDM-Q-9, SHARED-Q10) and will conduct semi-structured interviews and focus groups to examine (1) the feasibility of automated data collection, (2) the usability of the novel system and (3) the views of diverse stakeholders, to inform the use of the system to improve SDM. Future phases of this work will complete the development and evaluation of the intervention.
Ethics and dissemination: Ethical approval was granted by the NHS Health Research Authority North West-Liverpool Central Research Ethics Committee (reference: 21/PR/0345). Approval was also granted by North Bristol NHS Trust to undertake quality improvement work (reference: Q80008), overseen by the Consent and SDM Programme Board and reporting to an Executive Assurance Committee.
Trial registration number: ISRCTN17951423; pre-results.