71 research outputs found

    European regulatory framework for person carrier robots

    The aim of this paper is to establish the grounds for a future regulatory framework for Person Carrier Robots, covering both legal and ethical aspects. Current industrial standards focus on physical human–robot interaction, i.e. on the prevention of harm, but current robot technology raises challenges in other areas of the legal domain as well. The main issues comprise privacy, data protection, liability, autonomy, dignity, and ethics. The paper first discusses the need to take these interdisciplinary aspects of robot technology into account in order to offer complete legal coverage to citizens. Since the European Union has begun using impact assessment methodology when preparing regulations for new technologies, a methodology based on it for approaching the introduction of personal care robots is then discussed. After framing the discussion with a use case, the legal challenges involved are analysed, with concrete scenarios easing the explanatory analysis.

    “I’ll take care of you,” said the robot: Reflecting upon the Legal and Ethical Aspects of the Use and Development of Social Robots for Therapy

    The introduction of robotic and artificial intelligence (AI) systems in therapeutic settings is accelerating. In this paper, we investigate the legal and ethical challenges of the growing inclusion of social robots in therapy. Typical examples of such systems are Kaspar, Hookie, Pleo, Tito, Robota, Nao, Leka or Keepon. Although recent studies support the adoption of robotic technologies for therapy and education, these systems interact socially with children, the elderly, or persons with disabilities, and may raise concerns ranging from physical to cognitive safety, including data protection. Research in other fields also suggests that technology has a profound and alarming impact on us and our human nature. This article brings these findings into the debate on whether the adoption of therapeutic AI and robot technologies is adequate, not only to raise awareness of the possible impacts of this technology but also to help steer the development and use of AI and robot technologies in therapeutic settings in the appropriate direction. Our contribution seeks to provide a thoughtful analysis of some issues concerning the use and development of social robots in therapy, in the hope that this can inform the policy debate and set the scene for further research.
    Horizon 2020 (H2020) 707404; Article / Letter to editor; Instituut voor Metajuridica

    H2020 COVR FSTP LIAISON – D2.6 MS2 COVR presentation describing MS2 results and achievements.

    Horizon 2020 (H2020) 779966; Effective Protection of Fundamental Rights in a pluralist world

    H2020 COVR FSTP LIAISON – D2.1 Recommendations for the COVR Toolkit update

    Horizon 2020 (H2020) 779966; Effective Protection of Fundamental Rights in a pluralist world

    H2020 COVR FSTP LIAISON – D2.2 Lecture on the ‘future of law’

    Horizon 2020 (H2020) 779966; Effective Protection of Fundamental Rights in a pluralist world

    H2020 COVR FSTP LIAISON – D2.3 Academic publication featuring the future of robot governance.

    Horizon 2020 (H2020) 779966; Effective Protection of Fundamental Rights in a pluralist world

    H2020 COVR FSTP LIAISON – D2.4 Policy brief for standard and policymakers (EU & NEN)

    Horizon 2020 (H2020) 779966; Effective Protection of Fundamental Rights in a pluralist world

    H2020 COVR FSTP LIAISON – D2.5 LIAISON Lessons learned and evaluation report.

    Horizon 2020 (H2020) 779966; Effective Protection of Fundamental Rights in a pluralist world

    Expert considerations for the regulation of assistive robotics. A European Robotics Forum Echo

    Horizon 2020 (H2020) 707404; Effective Protection of Fundamental Rights in a pluralist world

    Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns

    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human–computer interaction and human–robot interaction literature does not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies, due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.
    Horizon 2020 (H2020) 707404; Article / Letter to editor; Instituut voor Metajuridica