
    Time Complexity of Decentralized Fixed-Mode Verification

    Given an interconnected system, this note is concerned with the time complexity of verifying whether an unrepeated mode of the system is a decentralized fixed mode (DFM). It is shown that checking the decentralized fixedness of any distinct mode is tantamount to testing the strong connectivity of a digraph constructed from the system. It is subsequently proved that the time complexity of this decision problem, using the proposed approach, is the same as the complexity of matrix multiplication. This work concludes that the identification of distinct DFMs (by means of a deterministic algorithm, rather than a randomized one) is computationally very easy, although the existing algorithms for solving this problem would wrongly imply that it is cumbersome. This note provides not only a complexity analysis, but also an efficient algorithm for tackling the underlying problem.
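
    The abstract reduces the DFM test for a distinct mode to a strong-connectivity check on a digraph built from the system; how that digraph is constructed from the system data is not given here, so the sketch below simply takes the digraph as input and performs the connectivity test with two breadth-first searches (forward, and on the reversed digraph). This is only the final decision step; the matrix-multiplication complexity claim in the note concerns the overall verification procedure.

        from collections import deque

        def _reachable(adj, source):
            """Set of vertices reachable from `source` by breadth-first search."""
            seen = {source}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            return seen

        def is_strongly_connected(adj):
            """True iff the digraph (dict: vertex -> list of out-neighbours,
            with every vertex present as a key) is strongly connected."""
            vertices = set(adj)
            if not vertices:
                return True
            start = next(iter(vertices))
            # Every vertex must be reachable from `start` ...
            if _reachable(adj, start) != vertices:
                return False
            # ... and `start` must be reachable from every vertex
            # (equivalently, every vertex is reachable in the reversed digraph).
            radj = {u: [] for u in vertices}
            for u in adj:
                for v in adj[u]:
                    radj[v].append(u)
            return _reachable(radj, start) == vertices

        # A directed 3-cycle is strongly connected; dropping one arc breaks it.
        print(is_strongly_connected({1: [2], 2: [3], 3: [1]}))  # True
        print(is_strongly_connected({1: [2], 2: [3], 3: []}))   # False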

    Correlation Decay in Random Decision Networks

    We consider a decision network on an undirected graph in which each node corresponds to a decision variable, and each node and edge of the graph is associated with a reward function whose value depends only on the variables of the corresponding nodes. The goal is to construct a decision vector which maximizes the total reward. This decision problem encompasses a variety of models, including maximum-likelihood inference in graphical models (Markov Random Fields), combinatorial optimization on graphs, economic team theory and statistical physics. The network is endowed with a probabilistic structure in which the rewards are sampled from a distribution. Our aim is to identify sufficient conditions that guarantee average-case polynomiality of the underlying optimization problem. We construct a new decentralized algorithm called Cavity Expansion and establish its theoretical performance for a variety of models. Specifically, for certain classes of models we prove that our algorithm is able to find near-optimal solutions with high probability in a decentralized way. The success of the algorithm rests on the network exhibiting a correlation decay (long-range independence) property. Our results have the following surprising implication for the average-case complexity of algorithms. Finding the largest independent (stable) set of a graph is a well-known NP-hard optimization problem for which no polynomial-time approximation scheme is possible even for graphs with maximum degree three, unless P=NP. We show that the closely related maximum weighted independent set problem for the same class of graphs admits a PTAS when the weights are i.i.d. with the exponential distribution. In other words, randomization of the reward function turns an NP-hard problem into a tractable one.
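
    The Cavity Expansion algorithm is only described at a high level in the abstract, so the following is a hedged sketch of a cavity-style local computation for maximum weighted independent set: each vertex runs a depth-truncated "bonus" recursion over its neighbourhood and joins the set when its own weight beats the total bonus of its neighbours. The particular recursion, the base case at the truncation depth, and the decision rule are illustrative assumptions rather than the paper's exact procedure; correlation decay is what would justify truncating at a small depth.

        import random

        def bonus(graph, weights, v, parent, depth):
            """Truncated cavity 'bonus' of vertex v when `parent` is removed:
            max(0, w(v) - sum of bonuses of v's remaining neighbours)."""
            if depth == 0:
                return max(0.0, weights[v])   # crude base case at the truncation depth
            s = sum(bonus(graph, weights, u, v, depth - 1)
                    for u in graph[v] if u != parent)
            return max(0.0, weights[v] - s)

        def cavity_mwis(graph, weights, depth=4):
            """Local decision rule: include v whenever its weight exceeds the total
            bonus of its neighbours, each computed with v removed."""
            return {v for v in graph
                    if weights[v] > sum(bonus(graph, weights, u, v, depth)
                                        for u in graph[v])}

        # Toy usage on a path 0-1-2-3 with i.i.d. exponential weights, the randomisation
        # that, per the abstract, makes MWIS tractable on bounded-degree graphs.
        random.seed(0)
        graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        weights = {v: random.expovariate(1.0) for v in graph}
        print(sorted(cavity_mwis(graph, weights)))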

    Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding

    Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning? In addition to the challenge of achieving near-optimal performance on large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications. In this paper, we propose a method that achieves the first success in both challenges for robot/machine scheduling problems. Our method comprises three components. First, we show that a robot scheduling problem can be expressed as a random probabilistic graphical model (PGM); we develop a mean-field inference method for random PGMs and use it for Q-function inference. Second, we show that transferability can be achieved by carefully designing a two-step sequential encoding of the problem state. Third, we resolve the computational scalability issue of fitted Q-iteration with a heuristic auction-based Q-iteration fitting method, enabled by the transferability achieved above. We apply our method to discrete-time, discrete-space problems (Multi-Robot Reward Collection, MRRC) and scalably achieve 97% optimality with transferability; this optimality is maintained under stochastic contexts. By extending our method to a continuous-time, continuous-space formulation, we claim the first learning-based method with scalable performance for multi-machine scheduling problems; it achieves performance comparable to popular metaheuristics on identical parallel machine scheduling (IPMS) problems.
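
    The auction-based fitting step is only named in the abstract, not specified, so here is a hedged sketch of the generic idea: treat estimated Q-values as a value table and run a Bertsekas-style auction in which robots bid for tasks until the assignment is conflict-free. The `q` table, the bidding increment `eps`, and the robot/task names are illustrative assumptions; the paper's actual Q-function comes from a graph-embedding network and its fitting procedure is more involved.

        def auction_assignment(q, eps=0.01):
            """q: dict robot -> dict task -> estimated value; returns dict robot -> task.
            Assumes at least as many tasks as robots and that every robot values every task."""
            tasks = {t for values in q.values() for t in values}
            prices = {t: 0.0 for t in tasks}
            owner = {}                    # task -> robot currently holding it
            unassigned = list(q)          # robots still bidding
            while unassigned:
                r = unassigned.pop()
                # Net value of each task for this robot at the current prices.
                net = sorted(((q[r][t] - prices[t], t) for t in q[r]), reverse=True)
                best_val, best_task = net[0]
                second_val = net[1][0] if len(net) > 1 else best_val
                # Bertsekas-style bid: raise the price by the bidding margin plus eps.
                prices[best_task] += best_val - second_val + eps
                if best_task in owner:
                    unassigned.append(owner[best_task])   # previous owner must bid again
                owner[best_task] = r
            return {r: t for t, r in owner.items()}

        # Toy usage: three robots, three tasks, hypothetical Q-value estimates.
        q = {"r1": {"t1": 5.0, "t2": 1.0, "t3": 0.5},
             "r2": {"t1": 4.0, "t2": 3.0, "t3": 1.0},
             "r3": {"t1": 2.0, "t2": 2.5, "t3": 2.0}}
        print(auction_assignment(q))   # {'r1': 't1', 'r2': 't2', 'r3': 't3'}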

    Navigability is a Robust Property

    The Small World phenomenon has inspired researchers across a number of fields. A breakthrough in its understanding was made by Kleinberg, who introduced Rank Based Augmentation (RBA): add to each vertex, independently, an arc to a random destination selected from a carefully crafted probability distribution. Kleinberg proved that RBA makes many networks navigable, i.e., it allows greedy routing to successfully deliver messages between any two vertices in a polylogarithmic number of steps. We prove that navigability is an inherent property of many random networks, arising without coordination or even independence assumptions.
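
    A hedged sketch, on a ring rather than Kleinberg's lattice, of Rank Based Augmentation followed by greedy routing: every vertex gets one random long-range arc chosen with probability inversely proportional to its rank (on a ring, the rank of v as seen from u is roughly twice the ring distance), and the router always hops to whichever neighbour is closest to the target. The ring topology, the 1/(2d) weights, and the single extra arc per vertex are simplifying assumptions made only to keep the illustration short and runnable.

        import random

        def ring_distance(u, v, n):
            d = abs(u - v)
            return min(d, n - d)

        def rank_augment(n, rng):
            """Give every vertex of an n-cycle one extra long-range arc, chosen with
            probability proportional to 1/rank (~ 1 / (2 * ring distance))."""
            arcs = {}
            for u in range(n):
                others = [v for v in range(n) if v != u]
                weights = [1.0 / (2 * ring_distance(u, v, n)) for v in others]
                arcs[u] = rng.choices(others, weights=weights)[0]
            return arcs

        def greedy_route(source, target, arcs, n):
            """Greedy routing: repeatedly move to the neighbour (ring or long-range)
            closest to the target; returns the number of steps taken."""
            u, steps = source, 0
            while u != target:
                candidates = [(u - 1) % n, (u + 1) % n, arcs[u]]
                u = min(candidates, key=lambda v: ring_distance(v, target, n))
                steps += 1
            return steps

        rng = random.Random(0)
        n = 1000
        arcs = rank_augment(n, rng)
        # Without augmentation greedy needs n/2 = 500 ring steps here; with the
        # rank-based arcs the route is typically far shorter.
        print(greedy_route(0, n // 2, arcs, n))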