44,780 research outputs found

    RELEVANCE OF ETHICAL GUIDELINES FOR ARTIFICIAL INTELLIGENCE – A SURVEY AND EVALUATION

    Ethics for artificial intelligence (AI) is a topic of growing practical relevance. Many people believe that AI could render jobs obsolete in the future; others wonder who is responsible for the actions of the AI systems they encounter. Providing and prioritizing ethical guidelines for AI is therefore an important measure for establishing safeguards and increasing the acceptance of this technology. The aim of this research is to survey ethical guidelines for the handling of AI in the ICT industry and to evaluate their relevance. To this end, an overview of AI ethics is first derived from the literature, with a focus on classical Western ethical theories. From this, a candidate set of important ethical guidelines is developed. Qualitative interviews with experts are then conducted for in-depth feedback and a ranking of these guidelines. In addition, an online survey is performed to weight the ethical guidelines by importance more representatively among a broader audience. Combining both studies, a prioritization matrix is created from the weights given by the experts and the survey participants in order to synthesize their votes. Based on this, a ranked catalogue of ethical guidelines for AI is created, and novel avenues for research on AI ethics are presented.
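    The combination step described in this abstract can be pictured as a simple weighted ranking. The sketch below (Python) is a hypothetical illustration only: the guideline names, the weights, and the equal 50/50 blend of expert and survey votes are invented assumptions, not the paper's actual prioritization matrix.

        # Hypothetical sketch: blending expert and survey weights into a ranked
        # catalogue of guidelines. All names and numbers are illustrative.
        expert_weights = {"transparency": 0.9, "accountability": 0.8, "privacy": 0.7}
        survey_weights = {"transparency": 0.7, "accountability": 0.9, "privacy": 0.6}

        def prioritize(expert, survey, alpha=0.5):
            """Blend the two weight sources; alpha is the share given to experts."""
            combined = {g: alpha * expert[g] + (1 - alpha) * survey[g] for g in expert}
            # Highest combined weight first -> ranked catalogue
            return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

        for rank, (guideline, score) in enumerate(
                prioritize(expert_weights, survey_weights), start=1):
            print(f"{rank}. {guideline}: {score:.2f}")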

    The CARESSES study protocol: testing and evaluating culturally competent socially assistive robots among older adults residing in long term care homes through a controlled experimental trial

    Background: This article describes the design of an intervention study that focuses on whether and to what degree culturally competent social robots can improve health- and well-being-related outcomes among older adults residing in long-term care homes. The trial forms the final stage of the international, multidisciplinary CARESSES project, which aims at designing, developing and evaluating culturally competent robots that can assist older people according to the culture of the individual they are supporting. Previous nursing literature has demonstrated that cultural competence is key to improving health outcomes among patients. Method: This study employed a mixed-method, single-blind, parallel-group controlled before-and-after experimental trial design that took place in England and Japan. It aimed to recruit 45 residents of long-term care homes who were aged ≥65 years, possessed sufficient cognitive and physical health, and self-identified with English, Indian or Japanese culture (n = 15 each). Participants were allocated to either the experimental group, control group 1 or control group 2 (all n = 15). Those allocated to the experimental group or control group 1 received a Pepper robot programmed with the CARESSES culturally competent artificial intelligence (experimental group) or a limited version of this software (control group 1) for 18 h across 2 weeks. Participants in control group 2 did not receive a robot and continued to receive care as usual. Participants could also nominate their informal carer(s) to participate. Quantitative data collection occurred at baseline, after 1 week of use, and after 2 weeks of use, with the latter time point also including qualitative semi-structured interviews that further explored participants' experiences and perceptions. Quantitative outcomes of interest included perceptions of robotic cultural competence, health-related quality of life, loneliness, user satisfaction, attitudes towards robots and caregiver burden. Discussion: This trial adds to the current preliminary and limited pool of evidence regarding the benefits of socially assistive robots for older adults, which to date indicates considerable potential for improving outcomes. It is the first to assess whether and to what extent cultural competence carries importance in generating improvements to well-being.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve, and how we can control them. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). We then look at AI systems as subjects, i.e. when the ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.

    Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation

    Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, from both a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers the previous aspects through different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: what each requirement for trustworthy AI is, why it is needed, and how it can be implemented in practice. In addition, a practical approach to implementing trustworthy AI systems makes it possible to define the responsibility of AI-based systems before the law through a given auditing process. The responsible AI system is therefore the resulting notion we introduce in this work: a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views recently published about the future of AI. Our reflections conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.

    Developing a Reference Model for Artificial Intelligence Management

    The adoption and diffusion of Artificial Intelligence (AI) in organizations are significantly influenced by many sociotechnical factors such as people, processes, and regulations. However, previous research has mostly focused on a reactive description of individual influencing factors and lacks an overarching perspective that enables active management of an organization's various AI capabilities. This paper therefore provides a first overarching perspective by identifying all relevant activities for the management of AI in the literature and grouping them into eight fields of action. These fields of action are then evaluated by practitioners and combined into a cross-industry reference model for AI Management. While this reference model is the first of its kind and already makes a valuable contribution to the emerging field of AI Management in Information Systems research, further insights are expected from the future refinement and application of the model.