
    Generation of Policy-Level Explanations for Reinforcement Learning

    Though reinforcement learning has greatly benefited from the incorporation of neural networks, the inability to verify the correctness of such systems limits their use. Current work in explainable deep learning focuses on explaining only a single decision in terms of input features, making it unsuitable for explaining a sequence of decisions. To address this need, we introduce Abstracted Policy Graphs, which are Markov chains of abstract states. This representation concisely summarizes a policy so that individual decisions can be explained in the context of expected future transitions. Additionally, we propose a method to generate these Abstracted Policy Graphs for deterministic policies, given a learned value function and a set of observed transitions (potentially off-policy transitions used during training). Since no restrictions are placed on how the value function is generated, our method is compatible with many existing reinforcement learning methods. We prove that the worst-case time complexity of our method is quadratic in the number of features and linear in the number of provided transitions, $O(|F|^2 \, |tr\_samples|)$. By applying our method to a family of domains, we show that our method scales well in practice and produces Abstracted Policy Graphs which reliably capture relationships within these domains.
    Comment: Accepted to Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (2019).
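    The construction can be illustrated compactly. Below is a minimal, hypothetical Python sketch of the general idea rather than the paper's algorithm: it abstracts states by binning their learned values (the paper instead splits on feature importance), then estimates Markov-chain transition probabilities between abstract states from the observed transitions. The names (`abstracted_policy_graph`, `value_fn`) and the value-binning abstraction are our assumptions.

```python
import numpy as np
from collections import defaultdict

def abstracted_policy_graph(states, next_states, value_fn, n_bins=10):
    """Toy Abstracted Policy Graph: a Markov chain over abstract
    states, estimated from observed (s, s') policy transitions.

    states, next_states: arrays of shape (n_transitions, n_features)
    value_fn: the learned value function, mapping a state to V(s)
    """
    # Abstraction step (simplified): bin states by learned value.
    # The paper splits on feature importance instead; this stand-in
    # keeps the sketch short.
    values = np.array([value_fn(s) for s in states])
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))

    def abstract(s):
        b = np.searchsorted(edges, value_fn(s), side="right") - 1
        return int(np.clip(b, 0, n_bins - 1))

    # Count transitions between abstract states, then normalize each
    # row to obtain the Markov chain's transition probabilities.
    counts = defaultdict(lambda: defaultdict(int))
    for s, s2 in zip(states, next_states):
        counts[abstract(s)][abstract(s2)] += 1
    graph = {}
    for a, row in counts.items():
        total = sum(row.values())
        graph[a] = {b: c / total for b, c in row.items()}
    return graph

# Example with synthetic transitions and V(s) = first feature.
rng = np.random.default_rng(0)
S = rng.normal(size=(500, 3))
S2 = S + 0.1 * rng.normal(size=S.shape)
apg = abstracted_policy_graph(S, S2, lambda s: s[0], n_bins=4)
```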

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Our system, Decima, uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves the average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to a 2x improvement during periods of high cluster load.
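    To make these ingredients concrete, here is a toy Python sketch of the general pattern: embed the stages of a job DAG with a few rounds of message passing, then score stages with a softmax policy head and sample one to launch. This is not Decima's actual architecture; the DAG, the feature sizes, and the names `message_pass` and `schedule_step` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_pass(features, adj, W, rounds=2):
    """Toy graph embedding: each node repeatedly aggregates its
    neighbors' embeddings (one linear layer + ReLU per round)."""
    h = features
    for _ in range(rounds):
        h = np.maximum(adj @ h @ W + features, 0.0)
    return h

def schedule_step(features, adj, W, w_score):
    """Score every stage and sample one to schedule next."""
    h = message_pass(features, adj, W)
    logits = h @ w_score
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Hypothetical 4-stage job DAG: adj[i, j] = 1 if stage i depends on j.
adj = np.array([[0, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [0, 1, 1, 0]], dtype=float)
features = rng.normal(size=(4, 8))   # per-stage features (e.g. work left)
W = 0.1 * rng.normal(size=(8, 8))    # message-passing weights
w_score = 0.1 * rng.normal(size=8)   # scoring head
stage, probs = schedule_step(features, adj, W, w_score)
print("chosen stage:", stage, "probs:", np.round(probs, 3))
```

    In a REINFORCE-style training loop, the sampled action's log-probability would be weighted by the observed reward (e.g. negative job completion time) to update W and w_score.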

    The Algorithmic Complexity of Bondage and Reinforcement Problems in Bipartite Graphs

    Let $G=(V,E)$ be a graph. A subset $D \subseteq V$ is a dominating set if every vertex not in $D$ is adjacent to a vertex in $D$. The domination number of $G$, denoted by $\gamma(G)$, is the smallest cardinality of a dominating set of $G$. The bondage number of a nonempty graph $G$ is the smallest number of edges whose removal from $G$ results in a graph with domination number larger than $\gamma(G)$. The reinforcement number of $G$ is the smallest number of edges whose addition to $G$ results in a graph with domination number smaller than $\gamma(G)$. In 2012, Hu and Xu proved that the decision problems for the bondage, total bondage, reinforcement, and total reinforcement numbers are all NP-hard in general graphs. In this paper, we strengthen these results by showing that they remain NP-hard even when restricted to bipartite graphs.
    Comment: 13 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:1109.1657 and text overlap with arXiv:1204.4010 by other authors.
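    As a concrete illustration of the three parameters, the brute-force Python sketch below computes the domination, bondage, and reinforcement numbers of a small graph. It is exponential-time and only meant for toy inputs; the functions and the $C_4$ example are our additions, not from the paper.

```python
from itertools import combinations

def domination_number(n, edges):
    """Smallest |D| such that every vertex is in D or adjacent to D
    (brute force over vertex subsets; fine for tiny graphs)."""
    nbrs = {v: {v} for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    for k in range(1, n + 1):
        for D in combinations(range(n), k):
            if set().union(*(nbrs[v] for v in D)) == set(range(n)):
                return k

def bondage_number(n, edges):
    """Fewest edge removals that raise the domination number."""
    g = domination_number(n, edges)
    for k in range(1, len(edges) + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            if domination_number(n, kept) > g:
                return k

def reinforcement_number(n, edges):
    """Fewest edge additions that lower the domination number."""
    g = domination_number(n, edges)
    non_edges = [(u, v) for u in range(n) for v in range(u + 1, n)
                 if (u, v) not in edges and (v, u) not in edges]
    for k in range(1, len(non_edges) + 1):
        for added in combinations(non_edges, k):
            if domination_number(n, edges + list(added)) < g:
                return k

# C4, a bipartite 4-cycle: gamma = 2, bondage = 3, reinforcement = 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(domination_number(4, edges))     # 2
print(bondage_number(4, edges))        # 3
print(reinforcement_number(4, edges))  # 1
```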