
    What Makes a Good Plan? An Efficient Planning Approach to Control Diffusion Processes in Networks

    In this paper, we analyze the quality of a large class of simple dynamic resource allocation (DRA) strategies, which we name priority planning. Their aim is to control an undesired diffusion process by distributing resources to the contagious nodes of the network according to a predefined priority order. In our analysis, we reduce the DRA problem to a linear arrangement of the nodes of the network. Under this perspective, we shed light on the role of a fundamental characteristic of this arrangement, the maximum cutwidth, in assessing the quality of any priority planning strategy. Our theoretical analysis validates the role of the maximum cutwidth by deriving bounds on the extinction time of the diffusion process. Finally, using the results of our analysis, we propose a novel and efficient DRA strategy, called Maximum Cutwidth Minimization, that outperforms competing strategies in our simulations.
    Comment: 18 pages, 3 figures
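    To make the linear-arrangement view concrete, here is a minimal Python sketch (our own illustration, not the paper's code) that computes the maximum cutwidth of a given node ordering: the cut at position i counts the edges crossing from the first i nodes to the rest, and a strategy like the one above would seek orderings that keep this maximum small.

```python
def max_cutwidth(adj, order):
    """Maximum cutwidth of a linear arrangement.

    adj: {node: set(neighbors)} undirected graph; order: list of all nodes.
    """
    position = {v: i for i, v in enumerate(order)}
    width = 0
    cut = 0
    for i, v in enumerate(order):
        # Edges from v to nodes placed later enter the running cut;
        # edges back to nodes already placed leave it.
        for u in adj[v]:
            if position[u] > i:
                cut += 1
            else:
                cut -= 1
        width = max(width, cut)
    return width

# Example: the path 0-1-2-3 arranged in natural order has cutwidth 1.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(max_cutwidth(adj, [0, 1, 2, 3]))  # -> 1
```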

    Breaking the Log Barrier: a Novel Universal Restart Strategy for Faster Las Vegas Algorithms

    Let $\mathcal{A}$ be a Las Vegas algorithm, i.e. an algorithm whose running time $T$ is a random variable drawn according to a certain probability distribution $p$. In 1993, Luby, Sinclair and Zuckerman [LSZ93] proved that a simple universal restart strategy can, for any probability distribution $p$, provide an algorithm executing $\mathcal{A}$ whose expected running time is $O(\ell^\star_p \log \ell^\star_p)$, where $\ell^\star_p = \Theta\left(\inf_{q\in(0,1]} Q_p(q)/q\right)$ is the minimum expected running time achievable with full prior knowledge of the distribution $p$, and $Q_p(q)$ is the $q$-quantile of $p$. Moreover, the authors showed that the logarithmic term could not be removed for universal restart strategies and was, in a certain sense, optimal. In this work, we show that, quite surprisingly, the logarithmic term can be replaced by a smaller quantity, thus reducing the expected running time in practical settings of interest. More precisely, we propose a novel restart strategy that executes $\mathcal{A}$ with expected running time $O\big(\inf_{q\in(0,1]} \frac{Q_p(q)}{q}\, \psi\big(\log Q_p(q),\, \log(1/q)\big)\big)$, where $\psi(a,b) = 1 + \min\left\{a+b,\, a\log^2 a,\, b\log^2 b\right\}$. This quantity is, up to a multiplicative factor, better than: 1) the universal restart strategy of [LSZ93], 2) any $q$-quantile of $p$ for $q\in(0,1]$, 3) the original algorithm, and 4) any quantity of the form $\phi^{-1}(\mathbb{E}[\phi(T)])$ for a large class of concave functions $\phi$. The latter extends the recent restart strategy of [Zam22], which achieves $O\left(e^{\mathbb{E}[\ln T]}\right)$, and can be thought of as an algorithmic reverse Jensen's inequality. Finally, we show that the behavior of $\frac{t\phi''(t)}{\phi'(t)}$ at infinity controls the existence of reverse Jensen's inequalities, by providing a necessary and a sufficient condition for these inequalities to hold.
    Comment: 13 pages, 0 figures
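    For context, the [LSZ93] universal strategy referenced above can be implemented with the Luby sequence of run budgets 1, 1, 2, 1, 1, 2, 4, ... The Python sketch below illustrates that baseline (not the new strategy proposed in the paper); `las_vegas_step` is a hypothetical interface that runs $\mathcal{A}$ for a given step budget and returns None on timeout.

```python
def luby(i):
    """i-th term (1-indexed) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, 1, ..."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)              # i = 2^k - 1: emit 2^(k-1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the prefix

def run_with_restarts(las_vegas_step, max_runs=10_000):
    """Universal restart baseline of [LSZ93]: the i-th run gets budget luby(i)."""
    for i in range(1, max_runs + 1):
        result = las_vegas_step(budget=luby(i))  # None signals a timeout
        if result is not None:
            return result
    return None
```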

    Multivariate Hawkes Processes for Large-scale Inference

    In this paper, we present a framework for fitting multivariate Hawkes processes to large-scale problems, both in the number of events $n$ in the observed history and in the number of event types $d$ (i.e. dimensions). The proposed Low-Rank Hawkes Process (LRHP) framework introduces a low-rank approximation of the kernel matrix that allows the nonparametric learning of the $d^2$ triggering kernels to be performed in at most $O(ndr^2)$ operations, where $r$ is the rank of the approximation ($r \ll d, n$). This is a major improvement over the existing state-of-the-art inference algorithms, which require $O(nd^2)$ operations. Furthermore, the low-rank approximation allows LRHP to learn representative patterns of interaction between event types, which may be valuable for the analysis of such complex processes in real-world datasets. The efficiency and scalability of our approach are illustrated with numerical experiments on simulated as well as real datasets.
    Comment: 16 pages, 5 figures
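    As a rough illustration of the low-rank idea (our own sketch under simplifying assumptions, not LRHP's nonparametric kernels), suppose the $d \times d$ kernel matrix factors through rank-$r$ matrices $U, V$ with a shared exponential decay. The factorization then stores $O(dr)$ parameters instead of $d^2$ kernels, and intensity evaluation never forms the full matrix:

```python
import numpy as np

def intensity(t, mu, U, V, beta, events):
    """Intensity vector of a d-dimensional Hawkes process at time t.

    mu: (d,) baseline rates; U, V: (d, r) low-rank factors; events: list of
    (time, type) pairs. Assumed kernel: phi_{ij}(s) = (U V^T)_{ij} e^{-beta s}.
    """
    lam = mu.copy()
    for t_m, j in events:
        if t_m < t:
            # U @ V[j] recovers column j of U V^T in O(d r) operations,
            # without ever materializing the d x d kernel matrix.
            lam += (U @ V[j]) * np.exp(-beta * (t - t_m))
    return lam

rng = np.random.default_rng(0)
d, r = 5, 2
mu = np.full(d, 0.1)
U, V = 0.1 * rng.random((d, r)), 0.1 * rng.random((d, r))
print(intensity(2.0, mu, U, V, beta=1.0, events=[(0.5, 1), (1.2, 3)]))
```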

    Optimal Algorithms for Non-Smooth Distributed Optimization in Networks

    In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate, even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
    Comment: 17 pages
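    To illustrate the smoothing step behind DRS (a single-machine sketch of the general technique; the distributed protocol and the paper's exact estimator are not reproduced here), the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ of a non-smooth convex $f$ is differentiable, and its gradient can be estimated from function values alone:

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, n_samples=1000, rng=None):
    """Monte Carlo estimate of grad f_gamma(x), f_gamma(x) = E[f(x + gamma Z)].

    Uses the standard two-point Gaussian estimator
    E[(f(x + gamma z) - f(x)) z] / gamma, which needs no subgradients of f.
    """
    rng = rng or np.random.default_rng()
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)
        g += (f(x + gamma * z) - fx) * z / gamma
    return g / n_samples

# Example on the non-smooth f(x) = ||x||_1; away from the kinks the
# smoothed gradient approaches sign(x).
x = np.array([1.0, -2.0, 0.5])
print(smoothed_grad(lambda v: np.abs(v).sum(), x, rng=np.random.default_rng(0)))
```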