
    Encoding the Enforcement of Safety Standards into Smart Robots to Harness Their Computing Sophistication and Collaborative Potential: A Legal Risk Assessment for European Union Policymakers

    Until robots and humans mostly worked in fast-paced and yet separate environments, occupational health and safety (OHS) rules could address workers’ safety largely independently from robotic conduct. This is no longer the case: collaborative robots (cobots) working alongside humans warrant the design of policies ensuring the safety of both humans and robots at once, within shared spaces and upon delivery of cooperative workflows. Within the European Union (EU), the applicable regulatory framework stands at the intersection between international industry standards and legislation at the EU as well as Member State level. Not only do current standards and laws fail to satisfactorily attend to the physical and mental health challenges prompted by human–robot interaction (HRI), but they exhibit important gaps in relation to smart cobots (“SmaCobs”) more specifically. In fact, SmaCobs combine the black-box unforeseeability afforded by machine learning with more general HRI-associated risks, towards increasingly complex, mobile and interconnected operational interfaces and production chains. Against this backdrop, based on productivity and health motivations, we urge the encoding of the enforcement of OHS policies directly into SmaCobs. First, SmaCobs could harness the sophistication of quantum computing to adapt a tangled normative architecture in a responsive manner to the contingent needs of each situation. Second, entrusting them with OHS enforcement vis-à-vis both themselves and humans may paradoxically prove safer as well as more cost-effective than for humans to do so. This scenario raises profound legal, ethical and somewhat philosophical concerns around SmaCobs’ legal personality, the apportionment of liability and algorithmic explainability. The first systematic proposal to tackle such questions is formulated here. For the EU, we propose that this is achieved through a new binding OHS Regulation aimed at the SmaCobs age.

    Ethics in the digital workplace

    This publication draws on the contributions of each of the national members of the Network of Eurofound Correspondents. For Spain, the contribution was prepared by Alejandro Godino (see the annex on the Network of Eurofound Correspondents). Alternative address: https://www.eurofound.europa.eu/sites/default/files/ef_publication/field_ef_document/ef22038en.pdf
    Digitisation and automation technologies, including artificial intelligence (AI), can affect working conditions in a variety of ways, and their use in the workplace raises a host of new ethical concerns. Recently, the policy debate surrounding these concerns has become more prominent and has increasingly focused on AI. This report maps relevant European and national policy and regulatory initiatives. It explores the positions and views of social partners in the policy debate on the implications of technological change for work and employment. It also reviews a growing body of research on the topic showing that ethical implications go well beyond legal and compliance questions, extending to issues relating to quality of work. The report aims to provide a good understanding of the ethical implications of digitisation and automation, which is grounded in evidence-based research.

    The Responsibility Quantification (ResQu) Model of Human Interaction with Automation

    Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, with autonomous weapon systems (AWS). Using Information Theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that the human's comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. The current model is an initial step in the complex goal of creating a comprehensive responsibility model that will enable quantification of human causal responsibility. It assumes stationarity and full knowledge regarding the characteristics of the human and the automation, and it ignores temporal aspects. Despite these limitations, it can aid in the analysis of system design alternatives and policy decisions regarding human responsibility in intelligent systems and advanced automation.

    Healthcare Robotics

    Robots have the potential to be a game changer in healthcare: improving health and well-being, filling care gaps, supporting caregivers, and aiding healthcare workers. However, before robots can be widely deployed, it is crucial that both the research and industrial communities work together to establish a strong evidence base for healthcare robotics and surmount likely adoption barriers. This article presents a broad contextualization of robots in healthcare by identifying key stakeholders, care settings, and tasks; reviewing recent advances in healthcare robotics; and outlining major challenges and opportunities to their adoption.
    Comment: 8 pages, Communications of the ACM, 201

    Ethical Considerations and Trustworthy Industrial AI Systems

    The ethics of AI in industrial environments is a new field within applied ethics, with notable dynamics but no well-established issues and no standard overviews. It poses many more challenges than similar consumer and general business applications, and the digital transformation of industrial sectors has brought even more considerations into the ethical picture. This relates to integrating AI and autonomous learning machines, based on neural networks, genetic algorithms, and agent architectures, into manufacturing processes. This article presents the ethical challenges in industrial environments and the implications of developing, implementing, and deploying AI technologies and applications in industrial sectors in terms of complexity, energy demands, and environmental and climate change. It also gives an overview of the ethical considerations concerning the digitisation of industry and ways of addressing them, such as the potential impacts of AI on economic growth and productivity, the workforce, the digital divide, and alignment with trustworthiness, transparency, and fairness. Additionally, potential issues concerning the concentration of AI technology within only a few companies, human-machine relationships, and behavioural and operational misconduct involving AI are examined. Manufacturers, designers, owners, and operators of AI—as part of autonomy and autonomous industrial systems—can be held responsible if harm is caused. Therefore, the need for accountability is also addressed, particularly in relation to industrial applications with non-functional requirements such as safety, security, reliability, and maintainability, so that AI-based technologies and applications can be audited through an assessment conducted either internally or by a third party. This requires new standards and certification schemes that allow AI systems to be assessed objectively for compliance and results to be repeatable and reproducible.
    This article is based on work, findings, and many discussions within the context of the AI4DI project.

    Technology for an intelligent, free-flying robot for crew and equipment retrieval in space

    Crew rescue and equipment retrieval is a Space Station Freedom requirement. During Freedom's lifetime, there is a high probability that a number of objects will accidentally become separated. Members of the crew, replacement units, and key tools are examples. Retrieval of these objects within a short time is essential. Systems engineering studies were conducted to identify system requirements and candidate approaches. One such approach, based on a voice-supervised, intelligent, free-flying robot, was selected for further analysis. A ground-based technology demonstration, now in its second phase, was designed to provide an integrated robotic hardware and software testbed supporting the design of a space-borne system. The ground system, known as the EVA Retriever, is examining the problem of autonomously planning and executing a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles. The current prototype is an anthropomorphic manipulator unit with dexterous arms and hands attached to a robot body and latched in a manned maneuvering unit. A precision air-bearing floor is used to simulate space. Sensor data include two vision systems and force/proximity/tactile sensors on the hands and arms. Planning for a shuttle flight experiment is underway. A set of scenarios and strawman requirements were defined to support conceptual development. Initial design activities are expected to begin in late 1989, with the flight occurring in 1994. The flight hardware and software will be based on lessons learned from both the ground prototype and computer simulations.