
    Self-Stabilizing Token Distribution with Constant-Space for Trees

    Self-stabilizing and silent distributed algorithms for token distribution in rooted tree networks are given. Initially, each process of the tree network holds at most l tokens. Our goal is to distribute the tokens over the whole network so that every process holds exactly k tokens. In the initial configuration, the total number of tokens in the network may not equal nk, where n is the number of processes in the network. The root process is therefore given the ability to create a new token or remove a token from the network. We aim to minimize the convergence time, the number of token moves, and the space complexity. A self-stabilizing token distribution algorithm is given that converges within O(nl) asynchronous rounds and needs Θ(nhε) redundant (or unnecessary) token moves, where ε = min(k, l−k) and h is the height of the tree network. Two novel ideas to reduce the number of redundant token moves are presented. One reduces the number of redundant token moves to O(nh) without any additional cost, while the other reduces it to O(n) but increases the convergence time to O(nhl). All the algorithms given use constant space at each process and at each link register.
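    To make the problem statement concrete, here is a minimal sequential sketch (an illustration under assumed representations, not the paper's self-stabilizing algorithm): a post-order pass pushes each subtree's surplus or deficit along tree edges, and the root creates or removes tokens to absorb the global imbalance. The function name and the dict-based tree encoding are hypothetical.

```python
# Sequential illustration of the token-distribution goal on a rooted tree.
# NOT the paper's self-stabilizing algorithm: no faults, no asynchrony,
# and token counts may go transiently negative in this abstraction.

def distribute(tree, tokens, root, k):
    """tree: dict node -> list of children; tokens: dict node -> count."""
    moves = 0

    def settle(v):
        nonlocal moves
        for c in tree.get(v, []):
            surplus = settle(c)       # surplus (>0) or deficit (<0) of c's subtree
            tokens[c] -= surplus      # |surplus| tokens move along edge (c, v)
            tokens[v] += surplus
            moves += abs(surplus)
        return tokens[v] - k          # what this subtree passes to its parent

    imbalance = settle(root)
    tokens[root] -= imbalance         # root removes (>0) or creates (<0) tokens
    return moves

# Example: a path 0 - 1 - 2 rooted at 0, target k = 2 tokens per process.
tree = {0: [1], 1: [2]}
tokens = {0: 5, 1: 0, 2: 1}
print(distribute(tree, tokens, 0, 2))   # 4 token moves
print(tokens)                           # {0: 2, 1: 2, 2: 2}
```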

    Disjoint Dominating Sets with a Perfect Matching

    In this paper, we consider dominating sets D and D' such that D and D' are disjoint and there exists a perfect matching between them. Let DD_m(G) denote the cardinality of the smallest such sets D, D' in G (provided they exist, otherwise DD_m(G) = ∞). This concept was introduced in [Klostermeyer et al., Theory and Applications of Graphs, 2017] in the context of studying a certain graph protection problem. We characterize the trees T for which DD_m(T) equals a certain graph protection parameter and for which DD_m(T) = α(T), where α(G) is the independence number of G. We also further study this parameter in graph products, e.g., by giving bounds for grid graphs, and in graphs of small independence number.
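    For small graphs, DD_m(G) can be checked by brute force. The sketch below (a hedged illustration; exponential, toy instances only) searches over disjoint pairs of equal-size dominating sets and tests for a perfect matching between them by trying all bijections. Graphs are assumed to be encoded as dicts from vertex to neighbour set.

```python
from itertools import combinations, permutations

def dominates(G, D):
    """Every vertex is in D or has a neighbour in D."""
    return all(v in D or any(u in D for u in G[v]) for v in G)

def perfectly_matched(G, D, Dp):
    """Is there a bijection D -> Dp whose pairs are all edges of G?"""
    D = list(D)
    return any(all(b in G[a] for a, b in zip(D, perm))
               for perm in permutations(Dp))

def dd_m(G):
    """Smallest |D| = |D'| over disjoint dominating sets admitting a
    perfect matching between them, or infinity if no such pair exists."""
    for size in range(1, len(G) // 2 + 1):
        for D in combinations(G, size):
            for Dp in combinations(set(G) - set(D), size):
                if (dominates(G, D) and dominates(G, Dp)
                        and perfectly_matched(G, set(D), set(Dp))):
                    return size
    return float("inf")

# Example: on the 4-cycle, D = {0, 1} and D' = {3, 2} both dominate and
# are matched by the edges 0-3 and 1-2, so DD_m(C4) = 2.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(dd_m(C4))   # 2
```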

    Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog

    A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded, human-interpretable language in the protocols developed by the agents, all learned without any human supervision! In this paper, using a Task and Talk reference game between two agents as a testbed, we present a sequence of 'negative' results culminating in a 'positive' one -- showing that while most agent-invented languages are effective (i.e., achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge 'naturally', despite the semblance of ease of natural-language emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more human-like and compositional by increasing restrictions on how two agents may communicate.
    Comment: 9 pages, 7 figures, 2 tables; accepted at EMNLP 2017 as a short paper

    How do we remember the past in randomised strategies?

    Graph games of infinite length are a natural model for open reactive processes: one player represents the controller, trying to ensure a given specification, and the other represents a hostile environment. The evolution of the system depends on the decisions of both players, supplemented by chance. In this work, we focus on the notion of randomised strategy. More specifically, we show that three natural definitions may lead to very different results: in the most general cases, an almost-surely winning situation may become almost-surely losing if the player is only allowed to use a weaker notion of strategy. In more reasonable settings, translations exist, but they require infinite memory, even in simple cases. Finally, some traditional problems become undecidable for the strongest type of strategies.
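    For context, two classical formalisations of randomised strategies are recalled below. This is a hedged aside using standard game-theoretic definitions; the paper's three notions may be formalised differently, and its third, most general notion is not reproduced here.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $A$ be the action set, $H$ the set of finite histories, and
$\mathcal{D}(X)$ the probability distributions over $X$.
\begin{align*}
  \text{behavioural strategy: } & \sigma \colon H \to \mathcal{D}(A)
    && \text{(randomise afresh at every step)}\\
  \text{mixed strategy: }       & \sigma \in \mathcal{D}(H \to A)
    && \text{(randomise once over pure strategies)}
\end{align*}
% Under perfect recall these are equivalent (Kuhn's theorem); the
% abstract above shows that the choice of notion can matter in graph
% games, turning almost-sure winning into almost-sure losing.
\end{document}
```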

    Robust Subgraph Generation Improves Abstract Meaning Representation Parsing

    The Abstract Meaning Representation (AMR) is a representation for open-domain rich semantics, with potential use in fields like event extraction and machine translation. Node generation, typically done using a simple dictionary lookup, is currently an important limiting factor in AMR parsing. We propose a small set of actions that derive AMR subgraphs by transformations on spans of text, which allows for more robust learning of this stage. Our set of construction actions generalizes better than the previous approach and can be learned with a simple classifier. We improve on the previous state-of-the-art result for AMR parsing, boosting end-to-end performance by 3 F1 on both the LDC2013E117 and LDC2014T12 datasets.
    Comment: To appear in ACL 2015
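    As a hedged illustration of the idea (the action inventory and derivation rules below are hypothetical stand-ins, not the paper's exact set), action-based node generation replaces a single dictionary lookup with a per-span choice among a few subgraph-deriving transformations that a simple classifier can learn to select:

```python
# Hypothetical span-to-subgraph actions for AMR node generation.
# Action names and rules are illustrative assumptions.

def apply_action(action, span, dictionary):
    if action == "DICT":        # fall back to the lookup table
        return dictionary.get(span)
    if action == "IDENTITY":    # span maps to a concept of the same name
        return span.lower()
    if action == "LEMMA":       # crude lemmatiser as a stand-in
        return span.lower().rstrip("s")
    if action == "NAME":        # wrap the span in a name subgraph
        return '(name :op1 "{}")'.format(span)
    if action == "VALUE":       # numeric span becomes its literal value
        return span
    raise ValueError("unknown action: " + action)

dictionary = {"sailed": "sail-01"}   # tiny stand-in lookup table
for action, span in [("DICT", "sailed"), ("NAME", "Boston"), ("VALUE", "3")]:
    print(span, "->", apply_action(action, span, dictionary))
```

    A classifier over span features then only has to pick one of a handful of actions per span, which can generalize to words never seen in the dictionary.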

    SourcererCC: Scaling Code Clone Detection to Big Code

    Despite a decade of active research, there is a marked lack of clone detectors that scale to very large repositories of source code, in particular for detecting near-miss clones where significant editing activities may take place in the cloned code. We present SourcererCC, a token-based clone detector that targets three clone types and exploits an index to achieve scalability to large inter-project repositories using a standard workstation. SourcererCC uses an optimized inverted index to quickly query the potential clones of a given code block. Filtering heuristics based on token ordering are used to significantly reduce the size of the index, the number of code-block comparisons needed to detect the clones, and the number of token comparisons needed to judge a potential clone. We evaluate the scalability, execution time, recall, and precision of SourcererCC, and compare it to four publicly available, state-of-the-art tools. To measure recall, we use two recent benchmarks: (1) a large benchmark of real clones, BigCloneBench, and (2) a Mutation/Injection-based framework of thousands of fine-grained artificial clones. We find that SourcererCC has both high recall and precision, and is able to scale to a large inter-project repository (250 MLOC) using a standard workstation.
    Comment: Accepted for publication at ICSE'16 (preprint, unrevised)
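    The index-plus-filter design can be sketched compactly (a hedged illustration in the spirit of the paper, not SourcererCC's implementation): code blocks are bags of tokens, two blocks are clones when their token overlap reaches theta times the larger block's size, and a prefix filter over a global token ordering bounds what must be indexed, since two blocks whose sorted prefixes of length |B| - ceil(theta * |B|) + 1 share no token cannot reach the threshold.

```python
import math
from collections import Counter, defaultdict

def overlap(a, b):
    """Multiset overlap between two token bags."""
    return sum((Counter(a) & Counter(b)).values())

def detect_clones(blocks, theta=0.8):
    order = {t: i for i, t in enumerate(sorted({t for b in blocks for t in b}))}
    index = defaultdict(set)                 # token -> ids of blocks indexed on it
    clones = []
    for i, b in enumerate(blocks):
        toks = sorted(b, key=order.get)      # real tools order rarest-first
        prefix = len(toks) - math.ceil(theta * len(toks)) + 1
        candidates = set()
        for t in toks[:prefix]:              # query only the prefix tokens
            candidates |= index[t]
        for j in candidates:                 # verify each candidate exactly
            if overlap(b, blocks[j]) >= theta * max(len(b), len(blocks[j])):
                clones.append((j, i))
        for t in toks[:prefix]:              # index this block's prefix
            index[t].add(i)
    return clones

# Two near-miss clones differing only in a renamed loop variable.
b0 = ["for", "i", "in", "range", "n", "sum", "+=", "a", "[", "i", "]"]
b1 = ["for", "j", "in", "range", "n", "sum", "+=", "a", "[", "j", "]"]
print(detect_clones([b0, b1], theta=0.7))    # [(0, 1)]
```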