
    Improving AI systems' dependability by utilizing historical knowledge

    A Turing Test is a promising way to validate AI systems, for which correctness usually cannot be proved. However, human experts (validators) are often too busy to participate, and their opinions can differ from person to person as well as from one validation session to another. To cope with these problems and increase validation dependability, a Validation Knowledge Base (VKB) for Turing Test-like validation is proposed. The VKB is constructed and maintained across validation sessions. Its primary benefits are (1) decreasing validators' workload, (2) refining the methodology itself, e.g. selecting dependable validators using the VKB, and (3) increasing AI systems' dependability through dependable validation, e.g. support for identifying optimal solutions. Finally, Validation Experts Software Agents (VESA) are introduced to further overcome the limitations of human validators' dependability. Each VESA is a software agent corresponding to a particular human validator, which suggests the ability to systematically "construct" human-like validators by keeping personal validation knowledge per validator. This brings a new dimension to dependable AI systems.
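    The abstract describes the VKB and VESA only at the concept level; the sketch below is one plausible reading, with every class and method name (ValidationKnowledgeBase, VESA, record, consistency) invented for illustration. It records each validator's verdicts across sessions and lets a per-validator agent answer from that accumulated knowledge.

```python
from collections import defaultdict

class ValidationKnowledgeBase:
    """Hypothetical sketch of a VKB: stores each validator's verdicts
    on validation cases across multiple validation sessions."""

    def __init__(self):
        # records[validator][case_id] -> list of (session_id, verdict) pairs
        self.records = defaultdict(lambda: defaultdict(list))

    def record(self, validator, session_id, case_id, verdict):
        self.records[validator][case_id].append((session_id, verdict))

    def consistency(self, validator):
        """Fraction of cases where the validator gave the same verdict in
        every session -- one plausible per-validator dependability score."""
        cases = self.records[validator]
        if not cases:
            return 0.0
        stable = sum(1 for history in cases.values()
                     if len({verdict for _, verdict in history}) == 1)
        return stable / len(cases)

class VESA:
    """A software agent standing in for one human validator: it answers
    from that validator's accumulated knowledge in the VKB."""

    def __init__(self, vkb, validator):
        self.vkb, self.validator = vkb, validator

    def validate(self, case_id):
        history = self.vkb.records[self.validator].get(case_id)
        if not history:
            return None  # no stored knowledge; defer to the human validator
        # majority verdict over past sessions for this case
        verdicts = [verdict for _, verdict in history]
        return max(set(verdicts), key=verdicts.count)
```

    Under this reading, consistency() would support benefit (2), selecting dependable validators, while VESA.validate() addresses benefit (1) by answering routine cases without the human's involvement.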

    Common Representation of Information Flows for Dynamic Coalitions

    We propose a formal foundation for reasoning about access control policies within a Dynamic Coalition, defining an abstraction over existing access control models and providing mechanisms for translating those models into the information-flow domain. The resulting abstract information-flow model, called a Common Representation, can then be used to define a way to control the evolution of Dynamic Coalitions with respect to information flow.
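    The abstract does not give the Common Representation's concrete form; the following is a minimal sketch assuming flows are directed (source, sink) pairs and each member's access control model exposes a translation into such flows. The names (AccessControlModel, to_flows, common_representation, ACLModel) are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import Iterable, Protocol, Set, Tuple

# A flow is a directed (source, sink) pair between coalition entities.
Flow = Tuple[str, str]

class AccessControlModel(Protocol):
    """Abstraction over concrete access control models (ACLs, RBAC, ...):
    each must be translatable into a set of permitted information flows."""
    def to_flows(self) -> Set[Flow]: ...

@dataclass
class ACLModel:
    # read_acl[obj] = set of subjects allowed to read obj
    read_acl: dict

    def to_flows(self) -> Set[Flow]:
        # a read permission induces a flow from object to subject
        return {(obj, subj)
                for obj, subjects in self.read_acl.items()
                for subj in subjects}

def common_representation(models: Iterable[AccessControlModel]) -> Set[Flow]:
    """Union of flows from every member's model: a coalition-wide view
    against which evolution (joins, leaves, policy changes) can be checked."""
    flows: Set[Flow] = set()
    for m in models:
        flows |= m.to_flows()
    return flows
```

    A coalition-evolution check could then, for instance, recompute the flow set after a membership change and reject the change if a forbidden (source, sink) pair appears.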

    Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers

    The success of the human-robot co-worker team in a flexible manufacturing environment, where robots learn from demonstration, relies heavily on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical and human-centric research questions. In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance, and opportunities to integrate safety considerations into algorithms "by design". Finally, from a human-centric perspective, we argue that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. Our aim is to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics and to foster multidisciplinary collaboration to address the research challenges identified.
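    The paper proposes no specific algorithm in this abstract; as one purely illustrative reading of integrating safety "by design", the sketch below wraps a policy learned from demonstration in a runtime safety filter that projects each proposed action into a certified envelope. All class and parameter names are hypothetical.

```python
import numpy as np

class SafetyFilteredPolicy:
    """Hypothetical 'safety by design' wrapper: every action proposed by
    a policy learned from demonstration is projected into a verified safe
    set (here, simple per-joint velocity limits) before execution."""

    def __init__(self, learned_policy, max_joint_velocity):
        self.policy = learned_policy           # e.g. trained from demonstrations
        self.v_max = np.asarray(max_joint_velocity)

    def act(self, state):
        proposed = np.asarray(self.policy(state))
        # clip to the certified envelope; a real robotic co-worker would use
        # a richer monitor (distance to the human, workspace limits, ...)
        return np.clip(proposed, -self.v_max, self.v_max)
```

    The design point such a filter illustrates is that the learned component may remain unverified while the safety argument rests on the simple, analyzable wrapper around it.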