    Algorithms For Extracting Timeliness Graphs

    We consider asynchronous message-passing systems in which some links are timely and processes may crash. Each run defines a timeliness graph among correct processes: (p, q) is an edge of the timeliness graph if the link from p to q is timely (that is, there is a bound on communication delays from p to q). The main goal of this paper is to approximate this timeliness graph by graphs having some properties (such as being trees, rings, ...). Given a family S of graphs, for runs in which the timeliness graph contains at least one graph in S, an extraction algorithm requires every correct process to converge to the same graph in S, which is, in a precise sense, an approximation of the timeliness graph of the run. For example, if the timeliness graph contains a ring, then using an extraction algorithm all correct processes eventually converge to the same ring; in this ring all nodes are correct processes and all links are timely. We first present a general extraction algorithm and then a more specific extraction algorithm that is communication efficient (i.e., eventually all messages of the extraction algorithm use only links of the extracted graph).
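    As a rough illustration of the convergence target (not the authors' extraction algorithm itself), the sketch below checks whether a candidate graph from the family S is admissible for a given run, i.e. whether all of its nodes are correct processes and all of its edges are timely links. The graph representation and helper names are assumptions made for this example.

    ```python
    # Hypothetical illustration: a graph is a pair (nodes, edges) with edges as
    # directed pairs (p, q). "correct" and "timely" describe the run being analysed.

    def is_admissible(candidate, correct_processes, timely_edges):
        """Return True if every node of `candidate` is a correct process and
        every edge of `candidate` is a timely link of the run."""
        nodes, edges = candidate
        return (nodes <= correct_processes) and (edges <= timely_edges)

    # Example run: processes {1, 2, 3}, all correct, whose timeliness graph
    # contains the directed ring 1 -> 2 -> 3 -> 1.
    correct = {1, 2, 3}
    timely = {(1, 2), (2, 3), (3, 1), (1, 3)}

    ring = ({1, 2, 3}, {(1, 2), (2, 3), (3, 1)})
    print(is_admissible(ring, correct, timely))  # True: this ring can be extracted
    ```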

    Graphical method for analyzing digital computer efficiency

    The analysis method uses a graph-theoretic approach to evaluate computation cost and makes a logical distinction between the linear graph of a computation and the linear graph of a program. It applies equally well to other processes that depend on quantitative edge nomenclature and precedence relationships between edges.
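    The abstract does not spell out the evaluation procedure, so the following is only a hedged sketch of one way such a cost evaluation could look: a computation modelled as a directed acyclic graph whose edges carry quantitative costs, evaluated along precedence order (here, the critical-path cost). All names and numbers are illustrative assumptions, not the paper's method.

    ```python
    # Hypothetical sketch: a computation as a DAG whose edges carry costs.
    from functools import lru_cache

    edges = {                      # edge costs, assumed for the example
        ("start", "a"): 3,
        ("start", "b"): 2,
        ("a", "end"): 4,
        ("b", "end"): 6,
    }

    successors = {}
    for (u, v) in edges:
        successors.setdefault(u, []).append(v)

    @lru_cache(maxsize=None)
    def critical_cost(node):
        """Largest total edge cost from `node` to any sink, honouring precedence."""
        if node not in successors:
            return 0
        return max(edges[(node, v)] + critical_cost(v) for v in successors[node])

    print(critical_cost("start"))  # 8, via start -> b -> end
    ```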

    Bayesian Semi-supervised Learning with Graph Gaussian Processes

    We propose a data-efficient Gaussian process-based Bayesian approach to the semi-supervised learning problem on graphs. The proposed model shows extremely competitive performance when compared to state-of-the-art graph neural networks on semi-supervised learning benchmark experiments, and outperforms the neural networks in active learning experiments where labels are scarce. Furthermore, the model does not require a validation data set for early stopping to control over-fitting. Our model can be viewed as an instance of empirical distribution regression weighted locally by network connectivity. We further motivate the intuitive construction of the model with a Bayesian linear model interpretation in which the node features are filtered by an operator related to the graph Laplacian. The method can be easily implemented by adapting off-the-shelf scalable variational inference algorithms for Gaussian processes. Comment: to appear in NIPS 2018; fixes an error in Figure 2 of the previous arXiv version, which contained two identical sub-figures.
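    The abstract says the node features are filtered by an operator related to the graph Laplacian before entering the model; the sketch below shows one common filter of that kind (a symmetric normalized-adjacency smoothing step) purely as an assumed illustration, not the paper's exact operator.

    ```python
    import numpy as np

    # Hypothetical illustration: smooth node features with a filter built from the
    # normalized graph Laplacian (one step of symmetric-normalized averaging).
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)   # adjacency of a 3-node path graph (assumed)
    X = np.array([[1.0], [0.0], [2.0]])      # one feature per node (assumed)

    A_hat = A + np.eye(A.shape[0])                         # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt                    # S = I - L_sym of the self-loop graph

    X_filtered = S @ X                                     # features after one filtering step
    print(X_filtered)
    ```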