1,323 research outputs found

    Towards Verifiably Ethical Robot Behaviour

    Ensuring that autonomous systems behave ethically is both complex and difficult. However, the idea of having an additional `governor' that assesses the options available to the system and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a `consequence engine' that assesses the likely future outcomes of actions and then applies a Safety/Ethical logic to select among them. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.
    Comment: Presented at the 1st International Workshop on AI and Ethics, Sunday 25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the workshop proceedings published by AAAI.
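    The governor pattern described above can be sketched in a few lines: a consequence engine predicts the outcome of each candidate action, and a safety/ethical ordering prunes the options. This is a minimal illustrative sketch, not the paper's implementation; the action names, the outcome model, and the harm scores are all invented for the example.

    ```python
    # A toy "consequence engine": predict each action's outcome, then apply
    # a safety/ethical ordering. All values here are illustrative assumptions.

    def predict_outcome(action):
        # Hypothetical outcome model mapping an action to (human_harm, robot_harm).
        outcomes = {
            "move_left":  (0, 1),   # robot takes minor damage
            "move_right": (2, 0),   # a human would be harmed
            "stop":       (0, 0),   # no harm to anyone
        }
        return outcomes[action]

    def ethical_choice(actions):
        # Safety/Ethical logic: minimise harm to humans first, then to the robot.
        # Tuple comparison gives exactly this lexicographic ordering.
        return min(actions, key=predict_outcome)

    print(ethical_choice(["move_left", "move_right", "stop"]))  # -> stop
    ```

    Verifying such a governor then amounts to checking that, for every reachable set of options, the action returned by `ethical_choice` is indeed minimal under the ethical ordering.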

    Agents and Robots for Reliable Engineered Autonomy

    This book contains the contributions of the Special Issue entitled "Agents and Robots for Reliable Engineered Autonomy". The Special Issue was based on the successful first edition of the "Workshop on Agents and Robots for Reliable Engineered Autonomy" (AREA 2020), co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020). The aim was to bring together researchers from the autonomous agents, software engineering, and robotics communities, as combining knowledge from these three research areas may lead to innovative approaches that solve complex problems related to the verification and validation of autonomous robotic systems.

    Integrating BDI and Reinforcement Learning: the Case Study of Autonomous Driving

    Recent breakthroughs in machine learning are paving the way to the vision of the Software 2.0 era, which foresees the replacement of traditional software development with such techniques for many applications. In the context of agent-oriented programming, we believe that mixing together cognitive architectures like BDI and learning techniques could trigger new interesting scenarios. In that view, our previous work presents Jason-RL, a framework that integrates BDI agents and Reinforcement Learning (RL) more deeply than previously proposed in the literature. The framework allows the development of BDI agents having both explicitly programmed plans and plans learned by the agent using RL. The two kinds of plans are seamlessly integrated and can be used interchangeably. Here, we take autonomous driving as a case study to verify the advantages of the proposed approach and framework. The BDI agent has hard-coded plans that define high-level directions, while fine-grained navigation is learned by trial and error. This approach – compared to plain RL – is encouraging, as RL struggles with temporally extended planning. We defined and trained an agent able to drive on a track with an intersection, at which it has to choose the correct path to reach the assigned target. A first step towards porting the system to the real world has been taken by building a 1/10-scale racecar prototype that learned how to drive on a simple track.
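    The split between hard-coded high-level plans and learned low-level behaviour can be sketched as follows. This is not Jason-RL itself: the agent class, state names, and the dictionary standing in for a learned policy are all assumptions made for illustration.

    ```python
    # Sketch of a BDI-style agent mixing programmed and learned plans.
    # The dictionary stands in for a policy learned by RL (e.g. a Q-table
    # reduced to its greedy actions); everything here is illustrative.

    class HybridAgent:
        def __init__(self, target):
            self.target = target  # belief: the assigned target direction
            # Hypothetical learned policy: fine-grained state -> low-level action.
            self.learned_policy = {
                "approach_intersection": "slow_down",
                "in_intersection": "steer_" + target,
            }

        def deliberate(self, state):
            # Plan selection: where a learned plan exists for the current
            # state, delegate to it; otherwise fall back to the programmed
            # high-level plan. Both kinds of plan are used uniformly.
            if state in self.learned_policy:
                return self.learned_policy[state]
            return "follow_lane"  # default hard-coded plan

    agent = HybridAgent(target="left")
    print(agent.deliberate("in_intersection"))  # -> steer_left
    ```

    The design point is that the deliberation cycle does not distinguish the two plan sources, which is what lets programmed and learned plans be composed freely.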

    Effects of the Human Presence among Robots in the ARIAC 2023 Industrial Automation Competition

    Acknowledgements: The authors thank all NIST employees and interns involved in running ARIAC 2023 and, most importantly, the teams that took part in the competition. Peer reviewed.

    Self-Regulation of SMR Power Led to an Enhancement of Functional Connectivity of Somatomotor Cortices in Fibromyalgia Patients

    Neuroimaging studies have demonstrated that altered activity in somatosensory and motor cortices plays a key role in pain chronification. Neurofeedback training of the sensorimotor rhythm (SMR) is a tool that allows individuals to self-modulate their brain activity and to produce significant changes over somatomotor brain areas. Several studies have further shown that neurofeedback training may reduce pain and other pain-related symptoms in chronic pain patients. The goal of the present study was to analyze changes in SMR power and in the brain functional connectivity of the somatosensory and motor cortices elicited by a neurofeedback task designed to both synchronize and desynchronize SMR power over motor and somatosensory areas in fibromyalgia patients. Seventeen patients were randomly assigned to SMR training (n = 9) or to a sham protocol (n = 8). All participants were trained over 6 sessions, and the fMRI and EEG power elicited by synchronization and desynchronization trials were analyzed. In the SMR training group, four patients achieved the objective of SMR modulation in more than 70% of the trials from the second training session onwards (good responders), while five patients performed the task at chance level (bad responders). Good responders to the neurofeedback training significantly reduced pain and increased both SMR power modulation and the functional connectivity of motor- and somatosensory-related areas during the last neurofeedback training session, whereas no changes in brain activity or pain were observed in bad responders or in participants in the sham group. In addition, we observed that good responders were characterized by a reduced impact of fibromyalgia and pain symptoms, as well as by increased levels of health-related quality of life during the pre-training sessions. In summary, the present study revealed that neurofeedback training of SMR elicited significant brain changes in somatomotor areas, leading to a significant reduction of pain in fibromyalgia patients. In this sense, our research provides evidence that neurofeedback training is a promising tool for a better understanding of the brain mechanisms involved in pain chronification.

    Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification

    Formal Methods are mathematically based techniques for software design and engineering, which enable the unambiguous description of, and reasoning about, a system's behaviour. Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings. Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: Why use Formal Methods for Autonomous Systems? To answer this question, this position paper describes five recipes for formally verifying aspects of an autonomous system, collected from the literature. The recipes are examples of how Formal Methods can be an effective tool for the development and verification of autonomous systems. During design, they enable unambiguous description of requirements; in development, formal specifications can be verified against requirements; software components may be synthesised from verified specifications; and behaviour can be monitored at runtime and compared to its original specification. Modern Formal Methods often include highly automated tool support, which enables exhaustive checking of a system's state space. This paper argues that Formal Methods are a powerful tool for the repertoire of development techniques for safe autonomous systems, alongside other robust software engineering techniques.
    Comment: Accepted at Journal of Risk and Reliability
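    The last recipe mentioned above, runtime monitoring against a specification, can be illustrated with a small sketch. The safety property, the trace format, and the function below are all invented for the example and are not taken from the paper.

    ```python
    # Minimal runtime-verification sketch: replay a trace of observations and
    # report the first event that violates a safety property. The property
    # ("never exceed the speed limit while a human is nearby") is illustrative.

    def monitor(trace, speed_limit=1.0):
        """Return the index of the first violating event, or None if the
        whole trace satisfies the property."""
        for i, (speed, human_near) in enumerate(trace):
            if human_near and speed > speed_limit:
                return i
        return None

    # Each event is (speed, human_nearby); event 2 violates the property.
    trace = [(0.5, False), (2.0, False), (2.0, True), (0.5, True)]
    print(monitor(trace))  # -> 2
    ```

    In practice the property would be derived from the system's formal specification (e.g. a temporal-logic formula) rather than hand-coded, but the monitor-and-compare loop has this shape.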