46 research outputs found

    Ethically Aligned Design: An empirical evaluation of the RESOLVEDD-strategy in Software and Systems development context

    Use of artificial intelligence (AI) in human contexts calls for ethical considerations in the design and development of AI-based systems. However, little knowledge currently exists on how to provide useful, tangible tools that help software developers and designers put ethical considerations into practice. In this paper, we empirically evaluate a method that enables ethically aligned design in a decision-making process. Though this method, titled the RESOLVEDD-strategy, originates from the field of business ethics, it is being applied in other fields as well. We tested the RESOLVEDD-strategy in a multiple case study of five student projects where the use of ethical tools was given as one of the design requirements. A key finding of the study is that the mere presence of an ethical tool affects ethical consideration, creating a greater sense of responsibility even where use of the tool is not intrinsically motivated.
    Comment: This is the author's version of the work. The copyright holder's version can be found at https://doi.org/10.1109/SEAA.2019.0001
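    As a purely illustrative aid, not taken from the paper, the sketch below encodes the RESOLVEDD steps as a walkable checklist. The step wording paraphrases the strategy's acronym (Review, Estimate, Solutions, Outcomes, Likely impacts, Values, Evaluate, Decide, Defend), and every identifier is hypothetical.

        # Illustrative sketch only: the RESOLVEDD steps as a walkable checklist.
        # Step wording paraphrases the acronym; all names are hypothetical.
        from dataclasses import dataclass, field

        RESOLVEDD_STEPS = [
            ("R", "Review the history and background of the case"),
            ("E", "Estimate the central ethical conflict or problem"),
            ("S", "List the main possible Solutions"),
            ("O", "State the probable Outcomes of each solution"),
            ("L", "Describe the Likely impact of each solution on stakeholders"),
            ("V", "Identify the Values upheld or violated by each solution"),
            ("E", "Evaluate each solution against those values"),
            ("D", "Decide on the best-justified solution"),
            ("D", "Defend the decision against anticipated objections"),
        ]

        @dataclass
        class ResolveddWorksheet:
            case: str
            notes: dict = field(default_factory=dict)  # step -> recorded answer

            def record(self, step: str, answer: str) -> None:
                self.notes[step] = answer

            def unfinished(self) -> list:
                return [step for _, step in RESOLVEDD_STEPS if step not in self.notes]

        # Usage: a team records an answer per step and checks what remains.
        ws = ResolveddWorksheet(case="Facial recognition in a mobile app")
        ws.record(RESOLVEDD_STEPS[0][1], "Feature requested late; no prior ethical review.")
        print(ws.unfinished())  # the eight steps still to be worked through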

    ECCOLA -- a Method for Implementing Ethically Aligned AI Systems

    Various recent Artificial Intelligence (AI) system failures, some of which have made global headlines, have highlighted issues in these systems. These failures have resulted in calls for more ethical AI systems that better take into account their effects on various stakeholders. However, implementing AI ethics in practice remains an ongoing challenge. High-level guidelines for doing so exist, devised by governments and private organizations alike, but they lack practicality for developers. To address this issue, we present a method for implementing AI ethics. The method, ECCOLA, has been iteratively developed using a cyclical action design research approach. It aims to make high-level AI ethics principles more practical, so that developers can more easily implement them in practice.
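    The abstract does not spell out ECCOLA's mechanics, so the following is only a hypothetical sketch of the general idea it describes: turning high-level AI ethics principles into concrete, iteration-sized prompts for a development team. All themes, prompts, and names below are invented for illustration.

        # Hypothetical illustration, not ECCOLA's actual materials: mapping
        # high-level principles to concrete questions a team picks per iteration.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class EthicsPrompt:
            theme: str   # high-level principle, e.g. "Transparency"
            prompt: str  # concrete question for the team this iteration

        DECK = [
            EthicsPrompt("Transparency", "Can we explain to a user why the model made a given prediction?"),
            EthicsPrompt("Accountability", "Who is responsible if the system misbehaves in production?"),
            EthicsPrompt("Data", "Do we know the provenance and consent status of our training data?"),
        ]

        def plan_iteration(deck, themes_in_scope):
            """Select the prompts the team commits to discussing this iteration."""
            return [p for p in deck if p.theme in themes_in_scope]

        for p in plan_iteration(DECK, {"Transparency", "Data"}):
            print(f"[{p.theme}] {p.prompt}")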

    How Do AI Ethics Principles Work? From Process to Product Point of View

    Discussing the potential negative impacts of AI systems, and how to address them, has recently been the core idea of AI ethics. Based on this discussion, various principles summarizing and categorizing ethical issues have been proposed. To bring these principles into practice, it has been common to repackage them into guidelines for AI ethics. The impact of these guidelines seems to remain small, however, which is considered a result of a lack of interest in them. To remedy this, other ways of implementing these principles have also been proposed. In this paper, we wish to motivate more discussion on the role of the product in AI ethics. While the lack of adoption of these guidelines and their principles is an issue, we argue that there are also issues with the principles themselves: they overlap and conflict, and commonly include discussion of issues that seem distant from practice. Given the lack of empirical studies in AI ethics, we wish to motivate further empirical studies by highlighting current gaps in the research area.
    © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings http://ceur-ws.org

    Continuous Software Engineering Practices in AI/ML Development Past the Narrow Lens of MLOps: Adoption Challenges

    Background: Continuous software engineering practices are currently considered state of the art in Software Engineering (SE). Recently, this interest in continuous SE has extended to ML system development as well, primarily through MLOps. However, little is known about continuous SE in ML development outside the specific continuous practices present in MLOps. Aim: In this paper, we explored continuous SE in ML development more generally, outside the specific scope of MLOps. We sought to understand what challenges organizations face in adopting all 13 continuous SE practices identified in the existing literature. Method: We conducted a multiple case study of organizations developing ML systems. Data from the cases were collected through thematic interviews. The interview instrument covered different aspects of continuous SE, as well as the use of relevant tools and methods. Results: We interviewed 8 ML experts from different organizations. Based on the data, we identified various challenges associated with the adoption of continuous SE practices in ML development, summarized in 7 key findings. Conclusion: The largest challenges we identified seem to stem from communication issues. ML experts seem to continue to work in silos, detached from both the rest of the project and the customers.
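    As a purely illustrative sketch, not from the study, here is one concrete instance of a continuous SE practice (continuous testing) applied to ML: a pipeline gate that blocks model promotion when quality regresses. The evaluation stub, names, and threshold are all invented.

        # Illustrative CI gate for an ML pipeline: fail the stage when the
        # candidate model underperforms the production baseline.
        import sys

        BASELINE_ACCURACY = 0.90  # accuracy of the last model promoted to production

        def evaluate_candidate_model() -> float:
            # Stand-in for a real evaluation job (e.g., held-out-set accuracy).
            return 0.91

        def main() -> int:
            acc = evaluate_candidate_model()
            print(f"candidate accuracy: {acc:.3f} (baseline {BASELINE_ACCURACY:.3f})")
            if acc < BASELINE_ACCURACY:
                print("quality regression detected: blocking promotion")
                return 1  # non-zero exit fails the CI stage
            print("check passed: candidate may be promoted")
            return 0

        if __name__ == "__main__":
            sys.exit(main())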

    Autonomous Agents in Software Development: A Vision Paper

    Large Language Models (LLMs) and Generative Pre-trained Transformers (GPT) are reshaping the field of Software Engineering (SE). They enable innovative methods for executing many software engineering tasks, including automated code generation, debugging, and maintenance. However, only a limited number of existing works have thoroughly explored the potential of GPT agents in SE. This vision paper inquires into the role of GPT-based agents in SE. Our vision is to leverage the capabilities of multiple GPT agents to contribute to SE tasks and to propose an initial road map for future work. We argue that multiple GPT agents can perform creative and demanding tasks far beyond coding and debugging, such as project planning, requirements engineering, and software design, driven by high-level descriptions given by the human developer. Our initial experimental analysis on simple software (e.g., Snake Game, Tic-Tac-Toe, Notepad) shows that multiple GPT agents can produce high-quality code and document it carefully. We argue that this shows promise for unforeseen efficiency and dramatically reduced lead times. To this end, we intend to expand our efforts to understand how these autonomous capabilities can be scaled further.
    Comment: 5 pages, 1 figure
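    A minimal sketch of the idea, not the paper's actual system: two GPT "agents", a planner and a coder, cooperate on a SE task given only a high-level description. It assumes the OpenAI Python client (openai>=1.0) with OPENAI_API_KEY set in the environment; the model name and prompts are our assumptions.

        # Two-agent sketch: a planner decomposes a high-level description,
        # then a coder implements the resulting plan.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def ask(system_role: str, task: str) -> str:
            resp = client.chat.completions.create(
                model="gpt-4o",  # assumed model; any chat model would do
                messages=[
                    {"role": "system", "content": system_role},
                    {"role": "user", "content": task},
                ],
            )
            return resp.choices[0].message.content

        # Agent 1: the planner turns the description into a work breakdown.
        plan = ask("You are a software project planner. Produce a short task list.",
                   "Build a terminal Tic-Tac-Toe game in Python.")

        # Agent 2: the coder implements the planner's output.
        code = ask("You are a careful Python developer. Write complete, documented code.",
                   f"Implement the following plan:\n{plan}")

        print(code)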