
    Single failure resiliency in greedy routing

    Using greedy routing, network nodes forward packets toward neighbors that are closer to the destination. This approach makes greedy routers significantly more memory-efficient than traditional IP routers using longest-prefix matching. Greedy embeddings map network nodes to coordinates such that greedy routing always reaches the destination. Prior work showed that, using a spanning tree of the network topology, greedy embeddings can be found in different metric spaces for any graph. However, a single link or node failure can invalidate the greedy embedding and cause packets to reach a dead end. To cope with network failures, existing greedy methods require substantial resources and significantly degrade routing quality (stretch loss). We propose efficient recovery techniques that require very limited resources and have only a minor effect on stretch. Since the proposed techniques are protection schemes, switch-over takes place very quickly. Their low overhead, simplicity, and scalability make the methods suitable for large-scale networks. The proposed schemes are validated on large topologies with properties similar to the Internet's, and their performance is compared with an existing alternative referred to as gravity-pressure routing.
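    To make the forwarding rule concrete, the sketch below implements plain greedy routing over a coordinate embedding. The topology, the 2-D coordinates, and the Euclidean metric are all hypothetical (real greedy embeddings often use tree-based or hyperbolic coordinates); the point is only to show where a dead end arises once a failure perturbs the embedding.

```python
import math

def dist(a, b):
    # Euclidean distance between 2-D coordinates (the metric is
    # illustrative; actual embeddings may use other metric spaces).
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(src, dst, adj, coords):
    # Forward hop by hop to the neighbor closest to the destination.
    # Returns the path on success, or None at a dead end (no neighbor
    # strictly closer to dst) -- the failure mode the recovery
    # techniques in the abstract are designed to handle.
    path, cur = [src], src
    while cur != dst:
        best = min(adj[cur], key=lambda n: dist(coords[n], coords[dst]))
        if dist(coords[best], coords[dst]) >= dist(coords[cur], coords[dst]):
            return None  # dead end: no greedy progress possible
        cur = best
        path.append(cur)
    return path

# Tiny hypothetical topology.
coords = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (1, 1)}
adj = {"a": ["b", "d"], "b": ["a", "c", "d"], "c": ["b"], "d": ["a", "b"]}
print(greedy_route("a", "c", adj, coords))  # ['a', 'b', 'c']
```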

    High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods

    In this paper we consider the task of estimating the non-zero pattern of the sparse inverse covariance matrix of a zero-mean Gaussian random vector from a set of i.i.d. samples. Note that this is equivalent to recovering the underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We present two novel greedy approaches to solving this problem. The first estimates the non-zero covariates of the overall inverse covariance matrix using a series of global forward and backward greedy steps. The second estimates the neighborhood of each node in the graph separately, again using greedy forward and backward steps, and combines the intermediate neighborhoods to form an overall estimate. The principal contribution of this paper is a rigorous analysis of sparsistency, i.e., consistency in recovering the sparsity pattern of the inverse covariance matrix. Surprisingly, we show that both the local and global greedy methods learn the full structure of the model with high probability given just $O(d \log p)$ samples, a significant improvement over the state-of-the-art $\ell_1$-regularized Gaussian MLE (Graphical Lasso), which requires $O(d^2 \log p)$ samples. Moreover, the restricted eigenvalue and smoothness conditions imposed by our greedy methods are much weaker than the strong irrepresentable conditions required by the $\ell_1$-regularization-based methods. We corroborate our results with extensive simulations and examples, comparing our local and global greedy methods to the $\ell_1$-regularized Gaussian MLE, as well as the Neighborhood Greedy method to nodewise $\ell_1$-regularized linear regression (Neighborhood Lasso). Comment: Accepted to AISTATS 2012 for Oral Presentation.
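    As a rough illustration of the neighborhood-based variant, here is a minimal sketch of forward greedy neighborhood selection for a single node, using ordinary least squares as the fitting step. The function name and the stopping rule are invented for this sketch, and the paper's backward (pruning) steps are omitted, so this is not the authors' exact algorithm.

```python
import numpy as np

def forward_greedy_neighborhood(X, node, max_size, tol=1e-6):
    # Greedily estimate the neighborhood of `node` in a sparse GMRF:
    # regress column `node` of the n-by-p sample matrix X on other
    # columns, adding whichever variable most reduces the residual
    # sum of squares, until the relative improvement drops below tol.
    n, p = X.shape
    y = X[:, node]          # samples are assumed zero-mean
    support = []
    rss = float(y @ y)
    for _ in range(max_size):
        best_j, best_rss = None, rss
        for j in range(p):
            if j == node or j in support:
                continue
            cols = X[:, support + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = y - cols @ coef
            r2 = float(r @ r)
            if r2 < best_rss:
                best_j, best_rss = j, r2
        if best_j is None or rss - best_rss < tol * rss:
            break  # no variable gives enough improvement
        support.append(best_j)
        rss = best_rss
    return support  # estimated neighbors of `node`
```

    Combining the per-node supports (for instance by taking the union over symmetric pairs) would yield an overall graph estimate, mirroring the local method described above.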

    Lazier Than Lazy Greedy

    Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice? In this paper, we develop the first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can achieve a $(1-1/e-\varepsilon)$ approximation guarantee, in expectation, to the optimum solution in time linear in the size of the data and independent of the cardinality constraint. We empirically demonstrate the effectiveness of our algorithm on submodular functions arising in data summarization, including training large-scale kernel methods, exemplar-based clustering, and sensor placement. We observe that STOCHASTIC-GREEDY practically achieves the same utility value as lazy greedy but runs much faster. More surprisingly, we observe that in many practical scenarios STOCHASTIC-GREEDY does not even evaluate all the data points once and still achieves results indistinguishable from lazy greedy. Comment: In Proc. Conference on Artificial Intelligence (AAAI), 2015.
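    Since the abstract fully specifies the sampling idea, a minimal sketch of STOCHASTIC-GREEDY is easy to give: at each of the k steps, sample roughly (n/k)·log(1/ε) of the remaining elements and add the one with the largest marginal gain. The coverage function used as the value oracle below is a toy stand-in, and all names are illustrative.

```python
import math
import random

def stochastic_greedy(f, ground_set, k, eps=0.1):
    # Monotone submodular maximization under a cardinality constraint k.
    # Each step samples ceil((n/k) * ln(1/eps)) remaining elements and
    # adds the one with the largest marginal gain f(S + e) - f(S),
    # giving a (1 - 1/e - eps) guarantee in expectation with a number
    # of f-evaluations linear in n and independent of k.
    n = len(ground_set)
    sample_size = math.ceil((n / k) * math.log(1 / eps))
    selected = set()
    for _ in range(k):
        candidates = [e for e in ground_set if e not in selected]
        if not candidates:
            break
        sample = random.sample(candidates, min(sample_size, len(candidates)))
        base = f(selected)
        selected.add(max(sample, key=lambda e: f(selected | {e}) - base))
    return selected

# Toy usage: f(S) = number of items covered by the union of hypothetical sets.
cover = {i: {i, (i + 1) % 50, (i + 7) % 50} for i in range(50)}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(sorted(stochastic_greedy(f, range(50), k=5)))
```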