2,667 research outputs found

    Ethically Aligned Design: An empirical evaluation of the RESOLVEDD-strategy in Software and Systems development context

    Use of artificial intelligence (AI) in human contexts calls for ethical considerations in the design and development of AI-based systems. However, little knowledge currently exists on how to provide useful, tangible tools that help software developers and designers put ethical considerations into practice. In this paper, we empirically evaluate a method that enables ethically aligned design in a decision-making process. Though this method, titled the RESOLVEDD-strategy, originates from the field of business ethics, it is being applied in other fields as well. We tested the RESOLVEDD-strategy in a multiple case study of five student projects in which the use of ethical tools was given as one of the design requirements. A key finding is that the mere presence of an ethical tool affects ethical consideration, creating a greater sense of responsibility even when use of the tool is not intrinsically motivated.
    Comment: This is the author's version of the work. The copyright holder's version can be found at https://doi.org/10.1109/SEAA.2019.0001
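    As context for readers unfamiliar with the method: the RESOLVEDD name is commonly expanded (following Pfeiffer and Forsberg's formulation; the abstract above does not spell it out) as a fixed sequence of analysis steps. Below is a minimal Python sketch of how a project team might track a decision through those steps; the worksheet class is purely illustrative.

```python
from dataclasses import dataclass, field

# The nine RESOLVEDD steps, per Pfeiffer & Forsberg's commonly cited
# expansion (an assumption here; the abstract does not enumerate them).
RESOLVEDD_STEPS = [
    "Review the facts of the case",
    "Estimate the ethical conflicts involved",
    "Solutions: list the main alternative courses of action",
    "Outcomes: project the consequences of each solution",
    "Likely impacts of each solution on the people involved",
    "Values upheld or violated by each solution",
    "Evaluate each solution against the others",
    "Decide on a course of action",
    "Defend the decision against likely objections",
]

@dataclass
class ResolveddWorksheet:
    """Tracks one design decision through the RESOLVEDD steps."""
    decision: str
    notes: dict = field(default_factory=dict)

    def record(self, step: str, note: str) -> None:
        """Document the analysis performed for one step."""
        if step not in RESOLVEDD_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.notes[step] = note

    def remaining(self) -> list:
        """Steps not yet documented for this decision."""
        return [s for s in RESOLVEDD_STEPS if s not in self.notes]
```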

    Ethical governance is essential to building trust in robotics and artificial intelligence systems

    This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap, linking a number of elements including ethics, standards, regulation, responsible research and innovation, and public engagement, as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and we conclude by proposing five pillars of good ethical governance. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.

    Time for AI (Ethics) maturity model is now

    There appears to be common agreement that ethical concerns are of high importance when it comes to systems equipped with some sort of Artificial Intelligence (AI). Demands for ethical AI are declared from all directions. In response, in recent years, public bodies, governments, and universities have rushed to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies are also publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from the software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides a roadmap for companies to improve their performance in selected viewpoints known as key capabilities. We voice a call to action for the development of a maturity model for AI software. We wish to discuss whether the focus should be on AI ethics or, more broadly, on the quality of an AI system, i.e., a maturity model for the development of AI systems.
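    As a concrete illustration of the maturity model thinking the paper invokes, here is a minimal sketch of how key capabilities and assessed maturity levels could be represented for an AI system. The capability names and the CMM-style level labels are hypothetical placeholders, since the paper calls for building such a model rather than defining one.

```python
from dataclasses import dataclass

# Hypothetical CMM-style maturity levels; the paper argues for a model
# of this kind for AI software but does not define its contents.
LEVELS = {1: "initial", 2: "managed", 3: "defined", 4: "measured", 5: "optimizing"}

@dataclass
class MaturityAssessment:
    # capability name -> assessed level (1-5); capability names are
    # illustrative assumptions, not taken from the paper.
    scores: dict

    def overall_level(self) -> int:
        # CMM-style convention: overall maturity is bounded by the
        # weakest key capability.
        return min(self.scores.values())

assessment = MaturityAssessment(scores={
    "data governance": 3,
    "transparency": 2,
    "accountability": 3,
    "AI ethics": 2,
})
print(LEVELS[assessment.overall_level()])  # -> "managed"
```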

    Reasonable AI and Other Creatures. What Role for AI Standards in Liability Litigation?

    Standards play a vital role in supporting the policies and legislation of the European Union. The regulation of artificial intelligence (AI) is no exception, as made clear by the AI Act proposal. In particular, Articles 40 and 41 defer to harmonised standards and common specifications the concrete definition of safety and trustworthiness requirements, including risk management, data quality, transparency, human oversight, accuracy, robustness, and cybersecurity. Besides, other types of standards and professional norms are also relevant to the governance of AI. These include European non-harmonised standards, international and national standards, professional codes and guidelines, and uncodified best practices. This contribution casts light on the relationship between standards and private law in the context of liability litigation for damage caused by AI systems. Despite the literature's attention to liability for AI, the role of standardisation in this regard has hitherto been largely overlooked. Furthermore, while much research has been undertaken on the regulation of AI, comparatively little has dealt with its standardisation. This paper aims to fill this gap. Building on previous scholarship, the contribution demonstrates that standards and professional norms are substantially normative in spite of their private and voluntary nature: they shape private relationships for both normative and economic reasons. Indeed, these private norms enter the courtroom through explicit or implicit incorporation into contracts, as well as by informing general clauses such as reasonableness and duty of care. They therefore represent the yardstick against which professionals' performance and conduct are evaluated, and a link between standards, safety, and liability can be established. Against this backdrop, the role of AI standards in private law is assessed. To set the scene, the article provides a bird's-eye view of AI standardisation: the European AI standardisation initiative is analysed along with other institutional and non-institutional instruments. Finally, it is argued that AI standards contribute to defining the duty of care expected from developers and professional operators of AI systems, and might therefore represent a valuable instrument for tackling the challenges posed by AI technology to extracontractual and contractual liability.

    Toward an Understanding of Responsible Artificial Intelligence Practices

    Artificial Intelligence (AI) nowadays influences all aspects of human and business activity. Although the potential benefits of AI technologies have been widely discussed in the current literature, there is an urgent need to understand how AI can be designed to operate responsibly and to act in a manner that meets stakeholders' expectations and applicable regulations. We seek to fill this gap by exploring the practices of responsible AI and identifying the potential benefits of implementing them. In this study, 10 responsible AI cases were selected from different industries to better understand the use of responsible AI in practice. Four responsible AI practices are identified: governance, ethically designed solutions, risk control, and training and education. Five strategies are recommended for firms considering the adoption of responsible AI practices.

    Operationalizing Transparency and Explainability in Artificial Intelligence through Standardization

    As artificial intelligence (AI) has developed, it has spread to almost every aspect of our society, from electric toothbrushes and telephone applications to automated transportation and military use. As AI becomes more ubiquitous, its importance and impact on our society grow continuously. In the pursuit of ever more efficient and accurate applications, AI systems have evolved into so-called "black box" models, whose operation and decision-making have become immensely complex and difficult to understand, even for experts. As AI is increasingly applied in more critical and sensitive areas, such as healthcare, for instance in support of diagnoses, the lack of transparency and explainability of these complex models and their decision-making has become a problem. If there is no understandable argumentation backing the results produced by a system, its use in such areas is questionable or even ethically impossible. Furthermore, these AI systems may be misused or behave in very unexpected and potentially harmful ways. Issues related to the governance of AI systems are thus more important than ever before. Standards provide one way to implement AI governance and promote the transparency and explainability of AI systems. This study examines how the role of standardization in promoting AI transparency and explainability is perceived from an organizational perspective and what kinds of AI transparency and explainability needs are identified among different organizational actors. In addition, it seeks to identify possible drivers of, and barriers to, the adoption of AI transparency and explainability standards. The research was carried out by interviewing representatives of 11 different Finnish organizations working in the field of AI, and the interview data were analyzed using the Gioia method. Based on this analysis, five roles for standards were identified in the promotion of explainability and transparency in AI: 1. Facilitator, 2. Validator, 3. Supporter, 4. Business enhancer, and 5. Necessary evil. Furthermore, the identified AI transparency and explainability needs comprise needs for ensuring the general acceptability of AI and risk management needs. The identified drivers for adopting AI transparency and explainability standards comprise the requirements of the operating environment, business-facilitating drivers, and business-improvement drivers, whereas the barriers consist of a lack of resources, a lack of knowledge and know-how, the downsides of standardization, and the incompatibility of standardization and AI. In addition, the results showed that the implementation of possible standards for AI transparency and explainability is driven largely by binding legislation and financial incentives rather than by ethical considerations, and that building trust in AI is seen as the ultimate purpose of transparency and explainability and their standardization. This dissertation provides an empirical basis for future research on the need for AI standardization, standards adoption, and AI transparency and explainability from an organizational perspective.
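    For readers unfamiliar with the Gioia method named above, its output is a hierarchy in which informant-level concepts roll up into themes and aggregate dimensions. A minimal sketch of that structure, populated only with the findings reported in this abstract; the nesting itself is illustrative:

```python
# Gioia-style aggregate dimensions mapped to the themes reported in the
# abstract above. The data reuses the study's stated findings; the
# dictionary layout is an illustrative assumption.
findings = {
    "roles of standards": [
        "facilitator", "validator", "supporter",
        "business enhancer", "necessary evil",
    ],
    "transparency and explainability needs": [
        "general acceptability of AI", "risk management",
    ],
    "adoption drivers": [
        "operating-environment requirements",
        "business-facilitating drivers",
        "business-improvement drivers",
    ],
    "adoption barriers": [
        "lack of resources", "lack of knowledge and know-how",
        "downsides of standardization",
        "incompatibility of standardization and AI",
    ],
}

for dimension, themes in findings.items():
    print(f"{dimension}: {', '.join(themes)}")
```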

    Relevance of Ethical Guidelines for Artificial Intelligence – A Survey and Evaluation

    Ethics for artificial intelligence (AI) is a topic of growing practical relevance. Many people believe that AI could render jobs obsolete in the future; others wonder who is accountable for the actions of the AI systems they encounter. Providing and prioritizing ethical guidelines for AI is therefore an important means of establishing safeguards and increasing the acceptance of this technology. The aim of this research is to survey ethical guidelines for the handling of AI in the ICT industry and to evaluate their relevance. To this end, an overview of AI ethics is first derived from the literature, with a focus on classical Western ethical theories, and a candidate set of important ethical guidelines is developed from it. Qualitative interviews with experts are then conducted for in-depth feedback and a ranking of these guidelines. Furthermore, an online survey is performed in order to weight the ethical guidelines more representatively among a broader audience. Combining both studies, a prioritization matrix is created from the weights of the experts and the survey participants in order to synthesize their votes. On this basis, a ranked catalogue of ethical guidelines for AI is created, and novel avenues for research on AI ethics are presented.
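    The paper's prioritization matrix itself is not reproduced here, but one simple way to synthesize expert and survey weights into a single ranking is a weighted blend of the two scores. A hedged sketch follows, in which the guideline names, the weights, and the 50/50 blend are all hypothetical:

```python
# Hypothetical importance weights on a 0-1 scale; the paper's actual
# guidelines and weighting scheme may differ.
expert_weights = {"transparency": 0.9, "accountability": 0.8, "privacy": 0.7}
survey_weights = {"transparency": 0.7, "accountability": 0.9, "privacy": 0.8}

def combined_score(guideline: str, alpha: float = 0.5) -> float:
    """Blend expert and survey weights; alpha is the experts' share."""
    return alpha * expert_weights[guideline] + (1 - alpha) * survey_weights[guideline]

# Rank guidelines by their blended score, highest first.
ranking = sorted(expert_weights, key=combined_score, reverse=True)
print(ranking)  # -> ['accountability', 'transparency', 'privacy']
```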

    Ethical Control of Unmanned Systems: lifesaving/lethal scenarios for naval operations

    Prepared for: Raytheon Missiles & Defense (RMD) under NCRADA-NPS-19-0227.
    This research in Ethical Control of Unmanned Systems applies precepts of Network Optional Warfare (NOW) to develop a three-step Mission Execution Ontology (MEO) methodology for validating, simulating, and implementing mission orders for unmanned systems. First, mission orders are represented in ontologies that are understandable by humans and readable by machines. Next, the MEO is validated and tested for logical coherence using Semantic Web standards. The validated MEO is then refined for implementation in simulation and visualization, and this process is iterated until the MEO is ready for implementation. The methodology is applied to four Naval scenarios, ordered by the increasing challenges that the operational environment and the adversary impose on the Human-Machine Team; the extent of the challenge to Ethical Control in each scenario is used to refine the MEO for the unmanned system. The research also considers Data-Centric Security and blockchain distributed ledgers as enabling technologies for Ethical Control. Data-Centric Security is a combination of structured messaging, efficient compression, digital signature, and document encryption, applied in the correct order for round-trip messaging. A blockchain distributed ledger has the potential to add further integrity measures for aggregated message sets, confirming receipt, response, and sequencing without undetected message loss. When implemented, these technologies together form the end-to-end data security that ensures mutual trust and command authority in real-world operational environments, despite the potential presence of interfering network conditions, intermittent gaps, or opponent intercept. A coherent Ethical Control approach to command and control of unmanned systems is thus feasible. This research therefore concludes that maintaining human control of unmanned systems over long durations and distances, in denied, degraded, and deceptive environments, is possible through well-defined mission orders and data security technologies. Finally, as the human role remains essential in the Ethical Control of unmanned systems, this research recommends the development of an unmanned system qualification process for Naval operations, as well as additional research prioritized by urgency and impact.
    Approved for public release; distribution is unlimited.
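    The abstract fixes an ordering for Data-Centric Security: structured message, then compression, then signature, then encryption, reversed on receipt. Below is a minimal round-trip sketch of that ordering using the widely available Python cryptography package; it illustrates the sequence only and is not the project's implementation. The message fields and key handling are assumptions.

```python
import json
import zlib

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Keys would be provisioned out of band in a real system.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()
fernet = Fernet(Fernet.generate_key())  # symmetric document encryption

# 1. Structured message (JSON as a stand-in for a mission order).
order = json.dumps({"unit": "uv-1", "task": "survey", "area": "A7"}).encode()

# 2. Efficient compression.
payload = zlib.compress(order)

# 3. Digital signature over the compressed payload (Ed25519, 64 bytes).
signature = signing_key.sign(payload)

# 4. Encryption of signature plus payload for transmission.
envelope = fernet.encrypt(signature + payload)

# Receiving side reverses the order: decrypt, verify, decompress.
plain = fernet.decrypt(envelope)
sig, body = plain[:64], plain[64:]
verify_key.verify(sig, body)  # raises InvalidSignature if tampered with
assert json.loads(zlib.decompress(body))["task"] == "survey"
```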