171 research outputs found
Towards affective computing that works for everyone
Missing diversity, equity, and inclusion elements in affective computing
datasets directly affect the accuracy and fairness of emotion recognition
algorithms across different groups. A literature review reveals how affective
computing systems may work differently for different groups due to, for
instance, mental health conditions impacting facial expressions and speech or
age-related changes in facial appearance and health. Our work analyzes existing
affective computing datasets and highlights a disconcerting lack of diversity
in them regarding race, sex/gender, age, and
(mental) health representation. By emphasizing the need for more inclusive
sampling strategies and standardized documentation of demographic factors in
datasets, this paper provides recommendations and calls for greater attention
to inclusivity and consideration of societal consequences in affective
computing research to promote ethical and accurate outcomes in this emerging
field.
Comment: 8 pages, 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII)
European regulatory framework for person carrier robots
The aim of this paper is to establish the grounds for a future regulatory framework for Person Carrier Robots, covering both legal and ethical aspects. Current industrial standards focus on physical human–robot interaction, i.e. on the prevention of harm. Current robot technology nonetheless raises challenges in other areas of the legal domain. The main issues comprise privacy, data protection, liability, autonomy, dignity, and ethics. The paper first discusses the need to take into account other interdisciplinary aspects of robot technology in order to offer complete legal coverage to citizens. As the European Union begins to use impact assessment methodology when drafting regulations for new technologies, a new methodology based on it for approaching the insertion of personal care robots will be discussed. Then, after framing the discussion with a use case, an analysis of the legal challenges involved will be conducted. Some concrete scenarios will contribute to easing the explanatory analysis.
Towards Experimental Standardization for AI governance in the EU
The EU has adopted a hybrid governance approach to address the challenges posed by Artificial Intelligence (AI), emphasizing the role of harmonized European standards (HES). Despite advantages in expertise and flexibility, HES processes face legitimacy problems and struggle with epistemic gaps in the context of AI. This article addresses the problems that characterize HES processes by outlining the conceptual need, theoretical basis, and practical application of experimental standardization, which is defined as an ex-ante evaluation method that can be used to test standards for their effects and effectiveness. Experimental standardization is based on theoretical and practical developments in experimental governance, legislation, and innovation. Aligned with ideas and frameworks like Science for Policy and evidence-based policymaking, it enables co-creation between science and policymaking. We apply the proposed concept in the context of HES processes, where we submit that experimental standardization contributes to increasing throughput and output legitimacy, addressing epistemic gaps, and generating new regulatory knowledge.
Innovation Letter: Experimenting with Competing Techno-Legal Standards for Robotics
There are legitimacy and discriminatory issues relating to overreliance on private standards to regulate new technologies. On the legitimacy plane, standards shift the centre of regulation from public democratic processes to private ones that are not subject to rule-of-law guarantees, reviving the discussion on balancing the legitimacy and effectiveness of techno-legal solutions and further aggravating this complex panorama. On the discriminatory plane, incentive issues exacerbate discriminatory outcomes for often-marginalized communities. Indeed, standardization bodies have no incentive to involve and focus on minorities and marginal groups, because the 'unanimity' required in voting means unanimity only among those sitting at the table, and there are no accountability mechanisms to turn this around. In this letter, we put forward some ideas on how to devise an institutional framework such that standardization bodies invest in anticipating and preventing harm to people's fundamental rights.
Implications of the Google’s US 8,996,429 B1 Patent in Cloud Robotics-Based Therapeutic Researches
Intended to be informative to both the legal and engineering communities, this chapter raises awareness of the implications of recent patents in the field of human-robot interaction (HRI) studies. Google patented the use of cloud robotics to create robot personality(-ies). The broad claims of the patent could hamper many HRI research projects in the field. One of the research lines potentially frustrated relates to robotic therapies, because personalization of the robot accelerates the process of engagement, which is extremely beneficial for robotic cognitive therapies. This chapter therefore presents the scientific examination, description, and comparison of the Tufts University CEEO project “Data Analysis and Collection through Robotic Companions and LEGO® Engineering with Children on the Autism Spectrum project” and Google's US 8,996,429 B1 Patent on Methods and Systems for Robot Personality Development. Some remarks on the ethical implications of the patent will close the chapter and open the discussion to both communities.
“I’ll take care of you,” said the robot: Reflecting upon the Legal and Ethical Aspects of the Use and Development of Social Robots for Therapy
The insertion of robotic and artificial intelligent (AI) systems in therapeutic settings is accelerating. In this paper, we investigate the legal and ethical challenges of the growing inclusion of social robots in therapy. Typical examples of such systems are Kaspar, Hookie, Pleo, Tito, Robota, Nao, Leka or Keepon. Although recent studies support the adoption of robotic technologies for therapy and education, these technological developments interact socially with children, the elderly, or the disabled, and may raise concerns ranging from physical to cognitive safety, including data protection. Research in other fields also suggests that technology has a profound and alarming impact on us and our human nature. This article brings all these findings into the debate on whether the adoption of therapeutic AI and robot technologies is adequate, not only to raise awareness of the possible impacts of this technology but also to help steer the development and use of AI and robot technologies in therapeutic settings in the appropriate direction. Our contribution seeks to provide a thoughtful analysis of some issues concerning the use and development of social robots in therapy, in the hope that this can inform the policy debate and set the scene for further research.
Horizon 2020 (H2020) 707404
Research in AI has Implications for Society: How do we Respond?
Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, representing and inviting hope and change, but also fear and resistance. Unfortunately, while the pace of technology development and application dramatically accelerates, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics within which society should frame such development are causing much confusion. In this article we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, and societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not be conflated if we are to retain the ability to adequately assess the appropriate course of action in light of AI's implications. We do so because such conflation could lead to uncertain and questionable outcomes, such as politicized science, ethics washing, ethics constrained by corporate or scientific interests, insufficient regulation, and political inactivity due to a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure AI development serves the interests of society.
Healthcare Digitalisation and the Changing Nature of Work and Society
Digital technologies have profound effects on all areas of modern life, including the workplace. Certain forms of digitalisation entail simply exchanging digital files for paper, while more complex instances involve machines performing a wide variety of tasks on behalf of humans. While some are wary of the displacement of humans that occurs when, for example, robots perform tasks previously performed by humans, others argue that robots only perform tasks that should have been carried out by robots in the first place, never by humans. Understanding the impacts of digitalisation in the workplace requires an understanding of the effects of digital technology on the tasks we perform, and these effects are often not foreseeable. In this article, the changing nature of work in the health care sector is used as a case to analyse such change and its implications on three levels: the societal (macro), organisational (meso), and individual (micro) level. Analysing these transformations by using a layered approach is helpful for understanding the actual magnitude of the changes that are occurring and creates the foundation for an informed regulatory and societal response. We argue that, while artificial intelligence, big data, and robotics are revolutionary technologies, most of the changes we see involve technological substitution and not infrastructural change. Even though this undermines the assumption that these new technologies constitute a fourth industrial revolution, their effects on the micro and meso level still require both political awareness and proportional regulatory responses.