
    Rejecting acceptance: learning from public dialogue on self-driving vehicles

    The investment and excitement surrounding self-driving vehicles are huge. We know from earlier transport innovations that technological transitions can reshape lives, livelihoods, and places in profound ways. There is therefore a case for wide democratic debate, but how should this take place? In this paper, we explore the tensions between democratic experiments and technological ones, with a focus on policy for nascent self-driving/automated vehicles. We describe a dominant model of public engagement that imagines increased public awareness leading to acceptance and then adoption of the technology. We explore the flaws in this model, particularly in how it treats members of the public as users rather than citizens and in its presumption that the technology is well-defined. Analysing two large public dialogue exercises in which we were involved, we conclude that public dialogue can contribute to shifting established ideas about both technologies and the public, but that this reframing demands openness on the part of policymakers and other stakeholders. Rather than seeing public dialogues as individual exercises, it would be better to evaluate the governance of emerging technologies in terms of whether it takes place ‘in dialogue’.

    We need a Weizenbaum test for AI

    Alan Turing introduced his 1950 paper “Computing Machinery and Intelligence” with the question “Can machines think?” But rather than engaging in what he regarded as never-ending subjective debate about definitions of intelligence, he instead proposed a thought experiment. His “imitation game” offered a test in which an evaluator held conversations with a human and a computer. If the evaluator failed to tell them apart, the computer could be said to have exhibited artificial intelligence (AI).

    The (co-)production of public uncertainty: UK scientific advice on mobile phone health risks

    UK scientific advice on the possible health risks of mobile phones has embraced (or seems to be embracing) broader engagement with interested non-experts. This paper explains the context of lost credibility that made such a development necessary, and the implications of greater engagement for the construction (and expert control) of “public concern.” I narrate how scientific advice matured from an approach based on compliance with guidelines to a style of “public science” in which issues such as trust and democracy were intertwined with scientific risk assessment. This paper develops existing conceptions of the “public understanding of science” with an explanation based around the co-production of scientific and social order. Using a narrative drawn from a series of in-depth interviews with scientists and policymakers, I explain how expert reformulation of the state of scientific uncertainty within a public controversy reveals constructions of “The Public,” and the desired extent of their engagement. Constructions of the public changed at the same time as a construction of uncertainty as solely an expert concern was molded into a state of politically workable public uncertainty. This paper demonstrates how publics can be constructed as instruments of credible policymaking, and suggests the potential for public alienation if non-experts feel they have not been fairly represented.

    Public anticipations of self-driving vehicles in the UK and US

    Developers of self-driving vehicles (SDVs) work with a particular idea of a possible and desirable future. Members of the public may not share the assumptions on which this is based. In this paper we analyse free-text responses from surveys of UK (n = 4,860) and US (n = 1,890) publics, which ask respondents what springs to mind when they think of SDVs, and why they should or should not be developed. Responses (averaging a total of 27 words per participant) tend to foreground safety hopes and, more regularly, concerns. Many respondents present alternative representations of relationships between the technology, other road users and the future. Rather than accepting a dominant approach to public engagement, which seeks to educate members of the public away from these views, we instead propose that these views should be seen as a source of social intelligence, with potential constructive contributions to building better transport systems. Anticipatory governance, if it is to be inclusive, should seek to understand and integrate public views rather than reject them as irrational or mutable.

    Towards Principled Responsible Research and Innovation: Employing the Difference Principle in Funding Decisions

    Responsible Research and Innovation (RRI) has emerged as a science policy framework that attempts to import broad social values into technological innovation processes whilst supporting institutional decision-making under conditions of uncertainty and ambiguity. When looking at RRI from a ‘principled’ perspective, we consider responsibility and justice to be important cornerstones of the framework. The main aim of this article is to suggest a method of realising these principles through the application of a limited Rawlsian Difference Principle in the distribution of public funds for research and innovation.

    “There are reasons why the world's combined innovative capacity has spewed forth iPhones and space shuttles but not yet managed to produce clean energy or universal access to clean water.” (Stilgoe 2013, xii)

    “I derive great optimism from empathy's evolutionary antiquity. It makes it a robust trait that will develop in virtually every human being so that society can count on it and try to foster and grow it. It is a human universal.” (de Waal 2009, 209)

    In this respect, RRI re-focuses technological governance from standard debates on risks to discussions about the ethical stewardship of innovation. This is a radical step in Science & Technology (S&T) policy, as it lifts the non-quantifiable concept of values into the driving seat of decision-making. The focus of innovation then goes beyond product considerations to include the processes and, importantly, the purposes of innovation (Owen et al. 2013, 34). Shared public values are seen as the cornerstone of the new RRI framework, while market mechanisms and risk-based regulations are of a secondary order.

    What are the values that could drive RRI? There are different approaches to the identification of public values. They can be located in democratically agreed processes and commitments (such as European Union treaties and policy statements), or they can be developed organically via public engagement processes. Both approaches have advantages and disadvantages. For instance, although constitutional values can be regarded as democratically legitimate, their application to specific technological fields can be difficult or ambiguous (Schroeder and Rerimassie 2015). On the other hand, public engagement can accurately reflect stakeholder values but is not necessarily free from bias and lobbyist agenda setting. We argue that if RRI is to be more successful in resolving policy dilemmas arising from poorly described and uncertain technological impacts, basic universal principles need to be evoked and applied. These could be described in the following manner: research and innovation should be conducted responsibly; publicly funded research and innovation should be focused fairly on socially beneficial targets; and research and innovation should promote and not hinder social justice.

    This paper is in three parts. The first part discusses the above principles and introduces the Rawlsian Difference Principle. The second part identifies how RRI is currently applied by public funding bodies. The third part discusses the operationalisation of the Rawlsian Difference Principle in responsible funding decisions.

    Europe's plans for responsible science

    In the past, European framework programs for research and innovation have included funding for the integration of science and society (1). Collaborative projects have brought together diverse sets of actors to co-create and implement common agendas through citizen science, science communication, public engagement, and responsible research and innovation (RRI) and have built an evidence base about science-society interaction (2, 3). In the proposal for the upcoming Horizon Europe program, however, there is no sustained support for RRI, nor is there a program line dedicated to co-creating knowledge and agendas with civil society (4). These serious oversights must be corrected before the Horizon Europe program is adopted by the Council and the European Parliament.

    Assessing responsible innovation training

    There is broad agreement that one important aspect of responsible innovation (RI) is to provide training on its principles and practices to current and future researchers and innovators, notably including doctoral students. Much less agreement can be observed concerning the question of what this training should consist of, how it should be delivered and how it could be assessed. The increasing institutional embedding of RI leads to calls for the alignment of RI training with training in other subjects. One can therefore observe a push towards the official assessment of RI training, for example in the recent call for proposals for centres for doctoral training by UK Research and Innovation. This editorial article takes its point of departure from the recognition that the RI community will need to react to the call for assessment of RI training. It provides an overview of the background and open questions around RI training and assessment, as context for examples of RI training assessment at the doctoral level. There is unlikely to be one right way of assessing RI training across institutions and disciplines, but we expect that the examples provided in this article can help RI scholars and practitioners orient their training and its assessment in ways that are academically viable as well as supportive of the overall aims of RI.