Governing the Median Estate: hyper-truth and post-truth in the regulation of digital innovations
This chapter focuses on the governance of digital innovation, making two claims: first, that post-truth is not a mere surface phenomenon, but rather is grounded in the general production of knowledge and ignorance. Second, it connects post-truth discourse to the "hyper-truth" status of digital innovation agendas. The significant issue is one much commented on in STS (and related) scholarship, namely the intentional blurring and merger of boundaries (hybridisation) in technoscientific and digital innovation. The chapter points to two cases where such hybridisation becomes problematic: the design of privacy into ICT technologies, and a debate over personhood for robots. Both are "post-truth" insofar as they intentionally blur the normative with the factual and technological. Hence hybridisation itself has become part of mainstream legitimation and cannot therefore be relied upon by scholars as a critical corrective to idealised and simplified legitimations based in science or law. The authors propose the concept of "boundary fusion", according to which sources of authority are merged together, as an extension of traditional ideas of "boundary work", according to which authority is made by the separation of sources, such as science and law.
Make Way for the Robots! Human- and Machine-Centricity in Constituting a European Public-Private Partnership
This article is an analytic register of recent European efforts in the making of "autonomous" robots to address what is imagined as Europe's societal challenges. The paper describes how an emerging techno-epistemic network stretches across industry, science, policy and law to legitimize and enact a robotics innovation agenda. The roadmap is the main metaphor and organizing tool for working across disciplines and sectors, and for aligning these heterogeneous actors with a machine-centric vision along a path to make way for "new kinds" of robots. We describe what happens as this industry-dominated project docks in a public-private partnership with pan-European institutions and a legislative initiative on robolaw. Emphasizing the co-production of robotics and European innovation politics, we observe how well-known uncertainties and scholarly debates about machine capabilities and human-machine configurations are unexpectedly played out in legal scholarship and institutions as a controversy and a significant problem for human-centered legal frameworks. European robotics are indeed driving an increase in speculative ethics and a new-found weight of possible futures in legislative practice.
Legal framework for small autonomous agricultural robots
Legal structures may form barriers to, or enablers of, the adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots from a practical, self-contained engineering-guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather the lack of it, for agricultural robotics in the EU, and its transposition into UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may mitigate the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation, such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are grey areas in current law, it argues that new proposals could be developed to reform them and so promote further innovation and investment in agricultural robots.
Robotics and the Lessons of Cyberlaw
Two decades of analysis have produced a rich set of insights as to how the law should apply to the Internet's peculiar characteristics. But, in the meantime, technology has not stood still. The same public and private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward developing robotics and artificial intelligence.
This Article is the first to examine what the introduction of a new, equally transformative technology means for cyberlaw and policy. Robotics has a different set of essential qualities than the Internet and accordingly will raise distinct legal issues. Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated; and robots increasingly blur the line between person and instrument.
Robotics will prove "exceptional" in the sense of occasioning systematic changes to law, institutions, and the legal academy. But we will not be writing on a clean slate: many of the core insights and methods of cyberlaw will prove crucial in integrating robotics, and perhaps whatever technology follows.
Towards a legal definition of machine intelligence: the argument for artificial personhood in the age of deep learning.
The paper dissects the intricacies of Automated Decision Making (ADM) and urges a refinement of the current legal definition of AI, pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. ADM relies upon a plethora of algorithmic approaches and has already found a wide range of applications in marketing automation, social networks, computational neuroscience, robotics, and other fields. Our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm; this can take various shapes and thus yield different answers to key issues regarding agency. The paper offers a fresh look at the concept of "Machine Intelligence", which exposes certain vulnerabilities in its current legal interpretation. Most importantly, it further helps us to explore whether the argument for "artificial personhood" holds any water. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human-machine interaction and can thus serve as a point of reference for outlining distinct rights and obligations of the programmer and the consumer; driverless cars are used as a case study to explore the several layers of human and machine interaction. These different degrees of automation reflect various levels of complexity in the underlying algorithms, and pose very interesting questions in terms of agency and dynamic tasks carried out by software agents. Part 2 further discusses the intricate nature of the underlying algorithms and the artificial neural networks (ANN) that implement them, and considers how one can interpret and utilize observed patterns in acquired data.
Is "artificial personhood" a sufficient legal response to highly sophisticated machine learning techniques employed in decision making that successfully emulate or even enhance human cognitive abilities?
Autonomous Corporate Personhood
Several states have recently changed their business organization law to accommodate autonomous businesses: businesses operated entirely through computer code. A variety of international civil society groups are also actively developing new frameworks, and a model law, for enabling decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood to respond to AI's increasing presence in society. Despite the fairly obvious overlap between the two sets of inquiries, the legal and policy discussions only rarely intersect. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays in law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights, that is, rights enjoyed by artificial people, whether corporate entity, machine, or otherwise, requires approaching the issue through a systems lens to ensure that the legal system adequately considers the varied socio-technical contexts in which artificial people exist.
To make these claims, this Article begins by establishing a terminology baseline and emphasizing the importance of viewing AI as part of a socio-technical system. Part I then concludes by reviewing the existing ecosystem of autonomous corporations. Parts II and III examine the existing debates around artificially intelligent persons and corporate personhood, arguing that the socio-legal needs driving artificial personhood debates in both contexts include protecting the rights of natural people, upholding social values, and creating a fiction for legal convenience. Parts II and III also explore the extent to which the theories from either set of literature fit the reality of autonomous businesses, illuminating gaps and using them to demonstrate that the law must consider the socio-technical context of AI systems and the socio-legal complexity of corporations to decide how autonomous businesses will interact with the world. Ultimately, the Article identifies and leverages links between both areas of legal personhood to demonstrate its core claim: developing law for artificial systems in any context should use the systems nature of the technical artifact to tie its legal treatment directly to the system's socio-technical reality.
Making robotic autonomy through science and law?
This document reports on the Epinet workshop on the making of robot autonomy, held in Utrecht on 16-17 February 2014. The workshop was part of a case study focused on developments in this area, in particular autonomy for assistive robots in care and companionship roles. Our participants had relevant expertise and professional experience: law and ethics, academic and industry robotics, vision assessment, and science and technology studies (STS). The workshop was intended to explore the expectations of robot autonomy amongst our participants, against a backdrop of recent policy views and research trends that are openly pushing an agenda of "smarter", more dynamic and more autonomous systems (e.g. European Commission, 2008; EUROP, 2009; Robot Companions for Citizens, 2012). Robotics development is intimately connected with visions of robot autonomy; however, as a practical achievement, robot autonomy remains to this day part real, part promise. Ideas of robot autonomy are nevertheless powerful societally and culturally specific visions, even if the very notion of "autonomy" is vague and inconsistent in recent accounts of future robots. These accounts still come together with considerable force in directing the efforts of researchers and experimenters, for example in establishing funding priorities. They have a function in strategic planning for future developments. Accounts of future robots also inform and shape the efforts of legislators, ethicists and lawyers. To that effect, one can say that there is an official vision of future robots, a yardstick against which everyone implicated in robotics development has to measure their expectations.
A study into the layers of automated decision-making: emergent normative and legal aspects of deep learning
The paper dissects the intricacies of automated decision making (ADM) and urges a refinement of the current legal definition of artificial intelligence (AI), pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. Whilst coming up with a toolkit to measure algorithmic determination in automated and semi-automated tasks may prove a tedious task for the legislator, our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm. The paper offers a fresh look at AI, which exposes certain vulnerabilities in its current legal interpretation. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human-machine interaction. Part 2 further discusses the intricate nature of AI algorithms and considers how one can utilize observed patterns in acquired data. Finally, the paper explores the legal challenges that result from user empowerment and the requirement for data transparency.