70 research outputs found

    Inductive Logic Programming for Transparent Alignment with Multiple Moral Values

    Reinforcement learning is a key paradigm for developing intelligent agents that operate in complex environments and interact with humans. However, researchers must be able to explain and interpret the decisions of these systems, especially when ensuring their alignment with societal value systems. This paper marks a first step in an ongoing research direction: it applies an inductive logic programming methodology to explain the policy learned by an RL algorithm in the domain of autonomous driving, thereby increasing the transparency of the agents' ethical behaviour.
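    The core idea of explaining a learned policy with induced rules can be sketched minimally: given a policy's state-action decisions, search for symbolic rules that cover them. All names below (features, actions) are illustrative placeholders, not from the paper.

    ```python
    # Toy policy: state features -> chosen action, as an RL agent might produce.
    policy = {
        ("obstacle_ahead", "lane_free"):    "change_lane",
        ("obstacle_ahead", "lane_blocked"): "brake",
        ("clear_road",     "lane_free"):    "accelerate",
        ("clear_road",     "lane_blocked"): "accelerate",
    }

    def induce_rules(policy):
        """Find single-feature rules 'IF feature THEN action' that are
        consistent with every policy decision containing that feature."""
        rules = []
        features = {f for state in policy for f in state}
        for feat in sorted(features):
            actions = {a for state, a in policy.items() if feat in state}
            if len(actions) == 1:  # the feature alone determines the action
                rules.append((feat, actions.pop()))
        return rules

    print(induce_rules(policy))  # → [('clear_road', 'accelerate')]
    ```

    A full ILP system searches a far richer hypothesis space (first-order clauses with background knowledge), but the transparency payoff is the same: the opaque policy is summarised by human-readable rules.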

    From POMDP executions to policy specifications

    Partially Observable Markov Decision Processes (POMDPs) model systems with uncertain state using probability distributions over states (called beliefs). However, in complex domains, POMDP solvers must explore large belief spaces, which is computationally intractable. One solution is to introduce domain knowledge, in the form of logic specifications, to drive exploration. However, defining effective specifications can be challenging even for domain experts. We propose an approach based on inductive logic programming that learns specifications with confidence levels from observed POMDP executions. We show that the learning approach converges to robust specifications as the number of examples increases.
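    The notion of a specification "with a confidence level" can be illustrated with a minimal sketch: score each candidate rule by the fraction of observed executions on which it holds. The traces and propositions below are made-up examples, not the paper's domain.

    ```python
    # Each execution trace is a sequence of observed propositions.
    traces = [
        ["low_battery", "dock", "charged"],
        ["low_battery", "dock", "charged"],
        ["low_battery", "explore", "failed"],
        ["ok_battery",  "explore", "done"],
    ]

    def confidence(premise, conclusion, traces):
        """Confidence of 'premise -> eventually conclusion', measured over
        the traces in which the premise actually occurs."""
        relevant = [t for t in traces if premise in t]
        if not relevant:
            return 0.0
        holds = [t for t in relevant
                 if conclusion in t[t.index(premise) + 1:]]
        return len(holds) / len(relevant)

    print(confidence("low_battery", "dock", traces))  # 2 of 3 relevant traces
    ```

    As more executions are observed, these empirical confidence scores stabilise, which is the intuition behind the convergence claim above.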

    Inductive learning of surgical task knowledge from intra-operative expert feedback

    Knowledge-based, and particularly logic-based, systems for task planning and execution guarantee the trustworthiness and safety of robotic systems interacting with humans. However, domain knowledge is usually incomplete. This paper proposes a novel framework, based on inductive logic programming, for refining task knowledge from real-time user feedback.
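    One simple way to picture knowledge refinement from expert feedback: when a user flags an executed action as wrong in some context, the corresponding rule gains an exception. The rule base and context features below are hypothetical, chosen only for illustration.

    ```python
    # Incomplete task knowledge: each action rule starts with no exceptions.
    rules = {"retract_tissue": {"exceptions": set()}}

    def allowed(action, context, rules):
        """An action is allowed if it has a rule and the current context
        contains none of the rule's learned exceptions."""
        rule = rules.get(action)
        return rule is not None and context.isdisjoint(rule["exceptions"])

    def refine_from_feedback(action, context, rules):
        """Expert feedback 'this action was wrong in this context' ->
        record the context features as exceptions to the rule."""
        rules[action]["exceptions"] |= context

    ctx = {"bleeding_nearby"}
    refine_from_feedback("retract_tissue", ctx, rules)
    print(allowed("retract_tissue", ctx, rules))              # False
    print(allowed("retract_tissue", {"clear_field"}, rules))  # True
    ```

    An ILP-based framework generalises this idea: instead of recording raw exception sets, it induces first-order clauses that explain the feedback and integrate with the existing logic program.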