Robotics and the law: exploring the relationship between law and technology adoption challenges in the case of collaborative industrial embodied autonomous systems (Cobots)
The introduction of emerging robotic technology in manufacturing poses legal issues different from those of its predecessors, industrial robotics and automation, where the separation between humans and machines is clearly visible. This new generation of industrial robots allows for leaner processes and the maximisation of efficiency at work. With human-robot collaboration, the advantage is the combination of high levels of accuracy, strength, precision, speed, endurance, and repeatability from the robot with the flexibility, sensitivity, creativity, and cognitive skills of the human.
To paint a picture, the emerging collaborative industrial embodied autonomous system (hereinafter referred to as "Cobot") explored in this thesis is often referred to as a robotic ‘co-worker’ in popular culture. This notion is particularly important: despite being a potentially illegitimate claim, it establishes a position where this technology might find itself in the future industrial workplace, considered as another worker. Although the manufacturing industry is no stranger to robotics, this emerging type of industrial robotics poses new challenges; identifying the relevant regulations is a challenge in itself.
This multidisciplinary thesis integrates technology law, business management, and human-computer interaction (HCI) studies to explore Cobot adoption challenges and the role of law in addressing them. The thesis's approach to a socio-legal investigation of Cobot adoption is twofold: 1. establishing the challenges through exploratory research; 2. tackling the legal challenges through doctrinal research. It is vital for the exploratory research to come first in order to surface concerns from different stakeholders. We therefore interviewed 15 experts in sectors relevant to Cobot adoption and identified adoption challenges under 10 themes: adoption of new technology, trust, risk, safety, due diligence, regulatory, ethics and social challenges, data & privacy, design, and insurance. In the doctrinal research, we investigated different legal doctrines addressing the safety, liability, data and privacy challenges found in the empirical studies and concluded that the current regulatory frameworks are sufficient in responding to such challenges.
The novelty of this thesis lies in the findings of a study orchestrated to identify the challenges of Cobot adoption from a multi-stakeholder perspective, and in the synthesis of interdisciplinary material to present an elaborated landscape of Cobot adoption pain points. The thesis provides breadth of subject matter that has not been gathered before and depth in specific regulatory responses to liability, safety, data protection and privacy challenges.
In this thesis, we made five contributions:
• Identified a gap in research and established a new working term, Cobot (Chapter 2).
• Identified 10 adoption challenges based on empirical studies (Chapter 3).
• Created a framework of responsible Cobot adoption principles from multi-stakeholder perspectives (Chapter 3).
• Presented a new perspective on Cobot regulations as a symbiotic relationship of safety, liability and data protection (Chapter 4).
• Provided recommendations and future research directions towards responsible Cobot adoption (Chapter 3, Chapter 4, Section 5.1.4 and Section 5.2).
Workplace 4.0: exploring the implications of technology adoption in digital manufacturing on a sustainable workforce
As part of the Industry 4.0 movement, the introduction of digital manufacturing technologies (DMTs) poses various concerns, particularly about the impact of technology adoption on the workforce. In consideration of adoption challenges and implications, various studies explore the topic from the perspectives of safety, socio-economic impact, technical readiness, and risk assessment. This paper presents mixed methods research exploring the challenges and acceptance factors of adopting human-robot collaboration (HRC) applications and other digital manufacturing technologies from the perspective of different stakeholders: from manufacturing employees at all levels to legal experts, consultants, and ethicists. We found that some of the prominent challenges and tensions inherent in technology adoption are job displacement, employee acceptance, trust, and privacy. This paper argues that it is crucial to understand the wider human factors implications in order to better strategise technology adoption; it therefore recommends interventions targeted at individual employees and at the organisational level. This paper contributes to the roadmap of responsible DMT and HRC implementation to encourage a sustainable workforce in digital manufacturing.
Engineering and Physical Sciences Research Council (EPSRC): EP/L015463/1 and EP/R032718/1
Responsible domestic robotics: exploring ethical implications of robots in the home
Purpose: The vision of robotics in the home promises increased convenience, comfort, companionship, and greater security for users. The robot industry risks causing harm to users, being rejected by society at large, or being regulated in overly prescriptive ways if robots are not developed in a socially responsible manner. The purpose of this paper is to explore some of the challenges and requirements for designing responsible domestic robots.
Design/methodology/approach: The paper examines definitions of robotics and the current commercial state of the art. In particular, it considers emerging technological trends, such as smart homes, that are already embedding computational agents in the fabric of everyday life. The paper then explores the role of values in design, aligning with human-computer interaction, and considers the importance of the home as a deployment setting for robots. The paper examines what responsibility in robotics means and draws lessons from past home information technologies. An exploratory pilot survey was conducted to understand user concerns about different aspects of domestic robots, such as form, privacy and trust. The paper presents these findings, married with literature analysis from across technology law, computer ethics and computer science.
Findings: By drawing together both empirical observations and conceptual analysis, this paper concludes that user-centric design is needed to create responsible domestic robotics in the future.
Originality/value: This multidisciplinary paper provides conceptual and empirical research from different domains to unpack the challenges of designing responsible domestic robotics.
A survey of lay people’s willingness to generate legal advice using Large Language Models (LLMs)
Since November 2022, following the release of OpenAI's ChatGPT, the general public's awareness of generative AI, and specifically Large Language Models (LLMs), has increased. LLMs such as ChatGPT can now generate text indistinguishable from human-authored text, which comes with numerous risks. In this paper, we investigate public perception of, and willingness to use, LLMs as a substitute for legal advice from legal professionals. Our findings show that while few people have used them for this purpose, the willingness to rely on LLMs in the future is growing. Interestingly, this depends on the specific area of law: while LLMs are perceived to be highly valuable in relation to topics such as tenancy and tax law, they seem to be perceived as less valuable in contexts such as divorce or civil disputes.
Objection overruled! Lay people can distinguish large language models from lawyers, but still favour advice from an LLM
Large Language Models (LLMs) are seemingly infiltrating every domain, and the legal context is no exception. In this paper, we present the results of three experiments (total N = 288) that investigated lay people's willingness to act upon, and their ability to discriminate between, LLM- and lawyer-generated legal advice. In Experiment 1, participants judged their willingness to act on legal advice when the source of the advice was either known or unknown. When the advice source was unknown, participants indicated that they were significantly more willing to act on the LLM-generated advice. This result was replicated in Experiment 2. Intriguingly, despite participants indicating higher willingness to act on LLM-generated advice in Experiments 1 and 2, participants discriminated between the LLM- and lawyer-generated texts significantly above chance level in Experiment 3. Lastly, we discuss potential explanations and risks of our findings, limitations and future work, and the importance of language complexity and real-world comparability.