
    A Robust Decision-Making Framework Based on Collaborative Agents

    Making decisions under uncertainty is challenging but necessary, as most real-world scenarios are affected by disturbances generated either internally, by the hardware itself, or externally, by the environment. Hence, we propose a general decision-making framework that can be adapted to address highly heterogeneous real-world domains without being significantly affected by undesired disturbances. Our paper presents a multi-agent structure in which agents are capable of individual decision-making but also interact to perform a subsequent, more robust, collaborative decision-making process. The complexity of each software agent can be kept quite low without degrading performance, since an intelligent and robust-to-uncertainty decision-making behaviour arises when the agents' locally produced measures of support are shared and exploited collaboratively. We show that by equipping agents with classic computational intelligence techniques to extract features and generate measures of support, complex hybrid multi-agent software structures capable of handling uncertainty can be easily designed. The resulting multi-agent systems are based on a two-phase decision-making methodology that first runs parallel local decision-making processes and then aggregates the corresponding outputs to improve the accuracy of the system. To highlight the potential of this approach, we provide multiple implementations of the general framework and compare them over four different application scenarios. Results are promising and show that adding a second, collaborative decision-making phase is always beneficial.
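    The two-phase scheme described in this abstract (parallel local decisions, then collaborative aggregation) can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual implementation: each agent is assumed to emit a `(decision, support)` pair, and `collaborative_decision` is a hypothetical helper that realises the second phase as support-weighted voting.

    ```python
    from collections import defaultdict

    def collaborative_decision(agent_outputs):
        """Second-phase aggregation of independent agents' outputs.

        Phase 1 (outside this function): each agent makes a local
        decision and attaches a confidence-like measure of support.
        Phase 2 (here): sum the support per candidate decision and
        return the decision with the highest collective support.
        """
        totals = defaultdict(float)
        for decision, support in agent_outputs:
            totals[decision] += support
        return max(totals, key=totals.get)

    # Two agents weakly favour "A" (0.4 + 0.5 = 0.9); one strongly
    # favours "B" (0.7). Collective support selects "A".
    print(collaborative_decision([("A", 0.4), ("A", 0.5), ("B", 0.7)]))  # A
    ```

    The aggregation rule is deliberately simple; any fusion operator over the shared measures of support could be substituted in its place.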

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Iteratively Learn Diverse Strategies with State Distance Information

    In complex reinforcement learning (RL) problems, policies with similar rewards may exhibit substantially different behaviors. It remains a fundamental challenge to optimize rewards while also discovering as many diverse strategies as possible, which can be crucial in many practical applications. Our study examines two design choices for tackling this challenge, i.e., the diversity measure and the computation framework. First, we find that with existing diversity measures, visually indistinguishable policies can still yield high diversity scores. To accurately capture the behavioral difference, we propose to incorporate state-space distance information into the diversity measure. In addition, we examine two common computation frameworks for this problem, i.e., population-based training (PBT) and iterative learning (ITR). We show that although PBT is the precise problem formulation, ITR can achieve comparable diversity scores with higher computation efficiency, leading to improved solution quality in practice. Based on our analysis, we further combine ITR with two tractable realizations of the state-distance-based diversity measures and develop a novel diversity-driven RL algorithm, State-based Intrinsic-reward Policy Optimization (SIPO), with provable convergence properties. We empirically examine SIPO across three domains, from robot locomotion to multi-agent games. In all of our testing environments, SIPO consistently produces strategically diverse and human-interpretable policies that cannot be discovered by existing baselines.
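    The idea of scoring diversity in state space rather than by reward or action output can be sketched with a toy measure. This is a hedged illustration under assumed conventions, not SIPO's actual diversity measure: `state_distance_diversity` is a hypothetical helper that compares the states visited by two policies via average nearest-neighbor Euclidean distance, so that policies covering different regions of the state space score higher even if their rewards are similar.

    ```python
    import math

    def state_distance_diversity(states_a, states_b):
        """Toy state-space diversity score between two policies.

        For each state visited by policy A, find the Euclidean distance
        to the nearest state visited by policy B, then average. Two
        policies that traverse disjoint regions of the state space get
        a high score; visually indistinguishable trajectories get a
        score near zero, which action-distribution measures may miss.
        """
        total = 0.0
        for sa in states_a:
            total += min(math.dist(sa, sb) for sb in states_b)
        return total / len(states_a)

    # Trajectories far apart in state space score high...
    print(state_distance_diversity([(0.0, 0.0)], [(3.0, 4.0)]))  # 5.0
    # ...while overlapping trajectories score zero.
    print(state_distance_diversity([(1.0, 1.0)], [(1.0, 1.0)]))  # 0.0
    ```

    In an intrinsic-reward setting, a score like this could be added as a bonus to the task reward when training each new policy against the previously discovered ones.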