4 research outputs found

    Towards an MLOps Architecture for XAI in Industrial Applications

    Machine learning (ML) has become a popular tool in the industrial sector, as it helps to improve operations, increase efficiency, and reduce costs. However, deploying and managing ML models in production environments can be complex, which is where Machine Learning Operations (MLOps) comes in: MLOps aims to streamline the deployment and management process. One of the remaining MLOps challenges is the need for explanations. Explanations are essential for understanding how ML models reason, which is key to trust and acceptance; better identification of errors and improved model accuracy are only two of the resulting advantages. An often neglected fact is that deployed models are bypassed in practice when accuracy, and especially explainability, do not meet user expectations. We developed a novel MLOps software architecture to address the challenge of integrating explanation and feedback capabilities into the ML development and deployment processes. In the project EXPLAIN, our architecture is implemented in a series of industrial use cases. The proposed MLOps software architecture has several advantages: it provides an efficient way to manage ML models in production environments, and it allows explanations to be integrated into the development and deployment processes.
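The abstract does not specify the architecture's interfaces, but the core idea (pairing every prediction with an explanation and logging user feedback for the MLOps loop) can be illustrated with a minimal sketch. All names below (`ModelService`, `ExplainedPrediction`, the feature-attribution scheme) are hypothetical, not taken from the EXPLAIN project:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedPrediction:
    label: str
    confidence: float
    explanation: dict  # per-feature attributions, summing to 1.0

@dataclass
class ModelService:
    """Toy serving wrapper: every prediction carries an explanation,
    and user feedback is recorded for later retraining."""
    feedback_log: list = field(default_factory=list)

    def predict(self, features: dict) -> ExplainedPrediction:
        # Stand-in for a real model call; the "explanation" is just
        # the normalised magnitude of each input feature.
        total = sum(abs(v) for v in features.values()) or 1.0
        attributions = {k: abs(v) / total for k, v in features.items()}
        score = sum(features.values()) / len(features)
        label = "anomaly" if score > 0.5 else "normal"
        return ExplainedPrediction(label, min(abs(score), 1.0), attributions)

    def record_feedback(self, prediction: ExplainedPrediction, accepted: bool):
        # Feedback on the prediction *and* its explanation is what the
        # paper argues should flow back into the MLOps lifecycle.
        self.feedback_log.append((prediction.label, accepted))

service = ModelService()
pred = service.predict({"temperature": 0.9, "vibration": 0.3})
service.record_feedback(pred, accepted=True)
```

The point of the sketch is the shape of the interface: predictions and explanations travel together, and feedback is persisted where a retraining pipeline can consume it.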

    Embracing AWKWARD! A Hybrid Architecture for Adjustable Socially-Aware Agents

    This dissertation presents AWKWARD: a hybrid architecture for the development of socially aware agents in Multi-Agent Systems (MAS). AWKWARD bridges Artificial Intelligence (AI) methods for their individual and combined strengths: Behaviour Oriented Design (BOD) is used to develop reactive planning agents, the OperA framework is used to model and validate agent behaviour as per social norms, and Reinforcement Learning (RL) is used to optimise plan structures that induce desirable social outcomes. In concert, OperA and BOD help AWKWARD agents achieve real-time adjustment of reactive plans in response to social obligations. As systems scale, however, reactive plans become challenging to optimise by hand. In this work, AWKWARD's extensibility is demonstrated to tackle this problem: the RL module is presented as an additional AI method that enables the automatic restructuring of reactive plans. A sample implementation of AWKWARD is developed in DOTA2, a game where success is heavily dependent on social interactions. The results gathered demonstrate the social outcomes achieved from plan adjustments in real time, in both experiments of norm enforcement using OperA and experiments of norm reinforcement using the extended RL module. Each sub-component, including the reactive planner itself, is a decision-making entity that dictates agent behaviour at various stages of the system's life cycle. The level of decision-making control of each module is adjusted at various stages, making AWKWARD a system with Variable Autonomy (VA). The concept of VA is discussed as one that can aid in maintaining human control over a system. However, as with any VA system, challenges of transparency and transfer of control, amongst others, present themselves. Suggestions for tackling these challenges are presented as next steps for the maturation of AWKWARD as a platform, and of the DOTA2 implementation as a research and educational platform.
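The combination the abstract describes, a BOD-style reactive planner whose plan priorities can be reordered at run time by an external module (norm enforcement or RL), can be sketched in a few lines. The drive names and the `set_priority` hook below are illustrative assumptions, not AWKWARD's actual API:

```python
class Drive:
    """One BOD-style drive: a name, a priority, and a trigger predicate."""
    def __init__(self, name, priority, triggered):
        self.name, self.priority, self.triggered = name, priority, triggered

class ReactivePlanner:
    """Each tick, the highest-priority triggered drive fires. An external
    module (e.g. a norm checker or RL learner) may reorder priorities,
    which is the real-time plan adjustment the architecture relies on."""
    def __init__(self, drives):
        self.drives = drives

    def tick(self, state):
        active = [d for d in self.drives if d.triggered(state)]
        return max(active, key=lambda d: d.priority).name if active else None

    def set_priority(self, name, priority):
        # Hook used by the norm-enforcement / RL modules to restructure plans.
        for d in self.drives:
            if d.name == name:
                d.priority = priority

drives = [
    Drive("farm_gold", 2, lambda s: True),
    Drive("assist_team", 1, lambda s: s.get("ally_in_danger", False)),
]
planner = ReactivePlanner(drives)
choice_before = planner.tick({"ally_in_danger": True})
planner.set_priority("assist_team", 3)  # norm module escalates the social drive
choice_after = planner.tick({"ally_in_danger": True})
```

Before the adjustment the self-interested drive wins; after the norm module raises the social drive's priority, the same world state selects the socially desirable action.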


    Let Me Take Over: Variable Autonomy for Meaningful Human Control

    As Artificial Intelligence (AI) continues to expand its reach, the demand for human control, and for the development of AI systems that adhere to our legal, ethical, and social values, also grows. Many international and national institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making effective control over systems a challenge. "Human oversight" is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement, and that such a misconception may limit the use of automation where it can otherwise provide great benefit across industries. We therefore propose the development of systems with variable autonomy (dynamically adjustable levels of autonomy) as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
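The central mechanism the paper advocates, dynamically adjustable levels of autonomy with control transferred to the human when appropriate, can be illustrated with a minimal sketch. The three levels and the confidence thresholds below are assumptions chosen for the example, not a scheme from the paper:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    MANUAL = 0      # the human performs the task
    SUPERVISED = 1  # the system acts, the human confirms each action
    AUTONOMOUS = 2  # the system acts without confirmation

class VariableAutonomyController:
    """Toy controller: autonomy is raised when the system's self-assessed
    confidence is high and lowered (transferring control to the human)
    when it drops, rather than being fixed at design time."""
    def __init__(self, high=0.8, low=0.4):
        self.high, self.low = high, low
        self.level = AutonomyLevel.SUPERVISED

    def update(self, confidence: float) -> AutonomyLevel:
        if confidence >= self.high:
            self.level = AutonomyLevel.AUTONOMOUS
        elif confidence < self.low:
            self.level = AutonomyLevel.MANUAL  # hand control back to the human
        else:
            self.level = AutonomyLevel.SUPERVISED
        return self.level

ctrl = VariableAutonomyController()
levels = [ctrl.update(c) for c in (0.9, 0.5, 0.2)]
```

The design point is that the level is a run-time variable driven by an observable signal, which is what makes the human's control meaningful rather than merely nominal.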