    Selectively Decentralized Q-Learning

    In this paper, we explore the capability of the selectively decentralized Q-learning approach in learning how to optimally stabilize control systems, compared to the centralized approach. We focus on problems in which the systems are completely unknown, except for possible domain knowledge that allows us to decentralize them into subsystems. In selective decentralization, we explore all of the possible communication policies among subsystems and use the cumulative gained Q-value as the metric to decide which decentralization scheme should be used for control. The results show that the selectively decentralized approach not only stabilizes the system faster but also converges faster in gained Q-value across systems with different interconnection strengths. In addition, the convergence time of the selectively decentralized approach does not appear to grow exponentially with the system dimensionality. Practically, this implies that selectively decentralized Q-learning could serve as an alternative approach for large-scale unknown control systems, where the closed-form solution of the Hamilton-Jacobi-Bellman equation is difficult to derive.
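
    As a concrete illustration of the selection rule above, here is a minimal sketch in Python: train a tabular Q-learner under each candidate communication scheme and keep the scheme with the largest cumulative gained Q-value. The two-subsystem toy plant, its reward, and the restriction to just a fully centralized and a fully decentralized scheme are assumptions made for brevity, not the paper's benchmark systems or its full search over communication policies.

        # A minimal sketch of selective decentralization for tabular Q-learning.
        # The toy plant, reward, and candidate schemes are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        N_STATES, N_ACTIONS, EPISODES, STEPS = 4, 2, 200, 20
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

        def step(state, action):
            """Toy coupled dynamics: each subsystem is weakly driven by the other."""
            s0, s1 = state
            a0, a1 = action
            ns0 = (s0 + a0 + (s1 > 2)) % N_STATES
            ns1 = (s1 + a1 + (s0 > 2)) % N_STATES
            return (ns0, ns1), -(ns0 + ns1)        # goal: drive both states to 0

        def run_scheme(joint):
            """Train one joint Q-table (centralized) or one table per subsystem
            (decentralized); return the cumulative gained Q-value as the metric."""
            if joint:
                q = np.zeros((N_STATES, N_STATES, N_ACTIONS, N_ACTIONS))
            else:
                q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(2)]
            for _ in range(EPISODES):
                state = (int(rng.integers(N_STATES)), int(rng.integers(N_STATES)))
                for _ in range(STEPS):
                    if joint:
                        if rng.random() < EPS:
                            action = (int(rng.integers(N_ACTIONS)), int(rng.integers(N_ACTIONS)))
                        else:
                            a0, a1 = np.unravel_index(q[state].argmax(), (N_ACTIONS, N_ACTIONS))
                            action = (int(a0), int(a1))
                    else:
                        action = tuple(int(rng.integers(N_ACTIONS)) if rng.random() < EPS
                                       else int(q[i][state[i]].argmax()) for i in range(2))
                    nxt, r = step(state, action)
                    if joint:
                        td = r + GAMMA * q[nxt].max() - q[state + action]
                        q[state + action] += ALPHA * td
                    else:
                        for i in range(2):
                            td = r + GAMMA * q[i][nxt[i]].max() - q[i][state[i], action[i]]
                            q[i][state[i], action[i]] += ALPHA * td
                    state = nxt
            return q.max(axis=(2, 3)).sum() if joint else sum(t.max(axis=1).sum() for t in q)

        # Evaluate the candidate schemes and keep the one with the larger metric.
        scores = {"centralized": run_scheme(True), "decentralized": run_scheme(False)}
        print("selected scheme:", max(scores, key=scores.get), scores)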

    Selectively decentralized reinforcement learning

    Indiana University-Purdue University Indianapolis (IUPUI)
    The main contributions of this thesis are the selectively decentralized method for solving multi-agent reinforcement learning problems and the discretized Markov-decision-process (MDP) algorithm for computing a sub-optimal learning policy in completely unknown learning and control problems. These contributions tackle several challenges in multi-agent reinforcement learning: the unknown and dynamic nature of the learning environment, the difficulty of computing the closed-form solution of the learning problem, slow learning performance in large-scale systems, and the questions of how, when, and with whom the learning agents should communicate. The selectively decentralized method, which evaluates all of the possible communication strategies, not only increases learning speed and achieves better learning goals but also learns the communication policy for each learning agent. Compared with other state-of-the-art approaches, the contributions of this thesis offer two advantages. First, the selectively decentralized method can incorporate a wide range of well-known single-agent reinforcement learning algorithms, including the discretized MDP, whereas state-of-the-art approaches usually apply to only one class of algorithms. Second, the discretized MDP algorithm can compute a sub-optimal learning policy when the environment is described in a general nonlinear form, whereas other state-of-the-art approaches often assume the environment is in a restricted form, particularly feedback-linearization form. This thesis also discusses several alternative approaches for multi-agent learning, including Multidisciplinary Optimization. In addition, it shows how the selectively decentralized method successfully solves several real-world problems, particularly in mechanical and biological systems.
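
    The discretized-MDP idea can be sketched briefly: lay a grid over the continuous state space, apply each admissible control for one step of the (approximated) dynamics, and record the nearest grid point and the incurred cost as the MDP transition and reward. The scalar plant, grid resolution, and quadratic cost below are illustrative assumptions, not the systems studied in the thesis.

        # A minimal sketch of discretizing a continuous nonlinear system into a finite MDP.
        import numpy as np

        DT = 0.1
        states = np.linspace(-2.0, 2.0, 41)     # uniform grid over the state space
        actions = np.array([-2.0, 0.0, 2.0])    # finite control set

        def dynamics(x, u):
            """One Euler step of a hypothetical nonlinear plant x' = x + dt * (sin(x) + u)."""
            return x + DT * (np.sin(x) + u)

        # transition[s, a] = index of the grid point nearest to the successor state
        transition = np.zeros((len(states), len(actions)), dtype=int)
        reward = np.zeros((len(states), len(actions)))
        for si, x in enumerate(states):
            for ai, u in enumerate(actions):
                x_next = np.clip(dynamics(x, u), states[0], states[-1])
                transition[si, ai] = int(np.abs(states - x_next).argmin())
                reward[si, ai] = -(x_next ** 2 + 0.1 * u ** 2)   # negative quadratic cost

        print(transition.shape, reward.shape)   # two (41, 3) tables define the finite MDP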

    Identification and Optimal Control of Large-Scale Systems Using Selective Decentralization

    In this paper, we explore the capability of selective decentralization to improve the control performance for unknown large-scale systems using model-based approaches. In selective decentralization, we explore all of the possible communication policies among subsystems and show that, with appropriate switching among the resulting multiple identification models (with their corresponding communication policies), selective decentralization significantly outperforms a centralized identification model when the system is weakly interconnected, and performs at least as well as the centralized model when the system is strongly interconnected. To derive the sub-optimal control, our control design includes two phases. First, we apply system identification to train an approximation model for the unknown system. Second, we find the sub-optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation to derive the sub-optimal control. In linear systems, the HJB equation reduces to the well-studied Riccati equation, which has a closed-form solution. In nonlinear systems, we discretize the approximation model and derive the control by applying dynamic programming methods to the resulting Markov Decision Process (MDP). We compare the performance of selective decentralization, complete decentralization, and centralization within our two-phase control design. Our results show that selective decentralization outperforms the complete decentralization and centralization approaches when the systems are completely decoupled or strongly interconnected.
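
    The two-phase design for a linear subsystem can be sketched as follows: phase one fits a state-space model from input/state data by least squares, and phase two solves the algebraic Riccati equation on the fitted model to obtain the control gain. The discrete-time formulation, the toy system matrices, and the SciPy routine below are assumptions made for a compact example rather than the paper's exact derivation.

        # A minimal sketch of identification (phase 1) plus Riccati-based control (phase 2).
        import numpy as np
        from scipy.linalg import solve_discrete_are

        rng = np.random.default_rng(1)
        A_true = np.array([[0.9, 0.2], [0.0, 0.8]])   # hypothetical "unknown" plant
        B_true = np.array([[0.0], [0.5]])

        # Phase 1: excite the unknown system and fit x_{k+1} = A x_k + B u_k by least squares.
        X, U, Xn = [], [], []
        x = np.zeros(2)
        for _ in range(200):
            u = rng.normal(size=1)
            x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
            X.append(x); U.append(u); Xn.append(x_next)
            x = x_next
        Z = np.hstack([np.array(X), np.array(U)])     # regressors [x_k, u_k]
        theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
        A_hat, B_hat = theta[:2].T, theta[2:].T

        # Phase 2: LQR gain from the discrete algebraic Riccati equation on the fitted model.
        Q, R = np.eye(2), np.eye(1)
        P = solve_discrete_are(A_hat, B_hat, Q, R)
        K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
        print("estimated feedback gain K (control law u = -K x):", K)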

    Two-phase Selective Decentralization to Improve Reinforcement Learning Systems with MDP

    In this paper, we explore the capability of selective decentralization to improve reinforcement learning performance for unknown systems using model-based approaches. In selective decentralization, we automatically select the best communication policies among agents. Our learning design, which is built on control system principles, includes two phases. First, we apply system identification to train an approximation model for the unknown systems. Second, we find the sub-optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation to derive the sub-optimal control. For linear systems, the HJB equation transforms into the well-known Riccati equation, which has a closed-form solution. For nonlinear systems, we discretize the approximation model as a Markov Decision Process (MDP) and determine the control using dynamic programming algorithms. Since the theoretical foundation for using an MDP to control a nonlinear system has not been thoroughly developed, we prove that, under several sufficient conditions, the control law learned by the discrete-MDP approach is guaranteed to stabilize the system, which is the learning goal. These learning and control techniques can be applied in a centralized, completely decentralized, or selectively decentralized manner. Our results show that selective decentralization outperforms the complete decentralization and centralization approaches when the systems are completely decoupled or strongly interconnected.
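
    For the nonlinear case, the dynamic-programming step can be sketched with value iteration on the discretized MDP, followed by a crude rollout that checks whether the learned policy drives the state toward the origin (the stabilization goal). The toy plant, grid, discount factor, and nearest-neighbor discretization below are illustrative assumptions, not the systems or the sufficient conditions analyzed in the paper.

        # A minimal sketch: value iteration on a discretized MDP, then a stabilization rollout.
        import numpy as np

        DT, GAMMA = 0.1, 0.95
        states = np.linspace(-2.0, 2.0, 41)
        actions = np.array([-2.0, 0.0, 2.0])

        # Build the finite MDP for the same toy plant x' = x + dt * (sin(x) + u) as above.
        T = np.zeros((len(states), len(actions)), dtype=int)
        R = np.zeros((len(states), len(actions)))
        for si, x in enumerate(states):
            for ai, u in enumerate(actions):
                xn = np.clip(x + DT * (np.sin(x) + u), states[0], states[-1])
                T[si, ai] = int(np.abs(states - xn).argmin())
                R[si, ai] = -(xn ** 2 + 0.1 * u ** 2)

        # Value iteration: V <- max_a [ R(s, a) + gamma * V(T(s, a)) ].
        V = np.zeros(len(states))
        for _ in range(500):
            V_new = (R + GAMMA * V[T]).max(axis=1)
            if np.abs(V_new - V).max() < 1e-8:
                break
            V = V_new
        policy = (R + GAMMA * V[T]).argmax(axis=1)

        # Crude stabilization check: roll out the greedy policy and watch |x| shrink.
        x = 1.5
        for _ in range(50):
            si = int(np.abs(states - x).argmin())
            x = x + DT * (np.sin(x) + actions[policy[si]])
        print("final |x| after rollout:", abs(x))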

    TRIDEnT: Building Decentralized Incentives for Collaborative Security

    Sophisticated mass attacks, especially those exploiting zero-day vulnerabilities, have the potential to cause destructive damage to organizations and critical infrastructure. To detect and contain such attacks in time, collaboration among the defenders is critical. By correlating real-time detection information (alerts) from multiple sources (collaborative intrusion detection), defenders can detect attacks and take the appropriate defensive measures in time. However, although the technical tools to facilitate collaboration exist, real-world adoption of such collaborative security mechanisms is still underwhelming. This is largely due to a lack of trust and participation incentives for companies and organizations. This paper proposes TRIDEnT, a novel collaborative platform that aims to enable and incentivize parties to exchange network alert data, thus increasing their overall detection capabilities. TRIDEnT allows parties that may be in a competitive relationship to selectively advertise, sell, and acquire security alerts in the form of (near) real-time peer-to-peer streams. To validate the basic principles behind TRIDEnT, we present an intuitive game-theoretic model of alert sharing, which is of independent interest, and show that collaboration is bound to take place infinitely often. Furthermore, to demonstrate the feasibility of our approach, we instantiate our design in a decentralized manner using Ethereum smart contracts and provide a fully functional prototype.
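
    The game-theoretic intuition here (patient defenders keep collaborating) can be illustrated with a tiny repeated-game simulation. The payoff matrix, strategies, and discount factor below are invented for illustration only and are not TRIDEnT's actual model, incentive scheme, or smart-contract interface.

        # A minimal sketch of a repeated alert-sharing game between two defenders.
        # Payoffs are prisoner's-dilemma-like: sharing alerts is costly, but mutual
        # sharing beats mutual withholding; all numbers are illustrative assumptions.
        PAYOFF = {("share", "share"): 3, ("share", "withhold"): -1,
                  ("withhold", "share"): 4, ("withhold", "withhold"): 0}

        def discounted_value(strategy_a, strategy_b, delta=0.9, rounds=200):
            """Total discounted payoff to player A when both follow fixed strategies."""
            history_a, history_b, total = [], [], 0.0
            for t in range(rounds):
                a = strategy_a(history_b)      # each defender reacts to the peer's past actions
                b = strategy_b(history_a)
                total += (delta ** t) * PAYOFF[(a, b)]
                history_a.append(a)
                history_b.append(b)
            return total

        def tit_for_tat(peer_history):
            return "share" if (not peer_history or peer_history[-1] == "share") else "withhold"

        def always_withhold(peer_history):
            return "withhold"

        # With a high enough discount factor, reciprocal sharing outperforms free-riding,
        # which is the intuition behind collaboration taking place infinitely often.
        print("both tit-for-tat:", discounted_value(tit_for_tat, tit_for_tat))
        print("free-rider vs tit-for-tat:", discounted_value(always_withhold, tit_for_tat))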