
    Dynamic Approximate All-Pairs Shortest Paths: Breaking the O(mn) Barrier and Derandomization

    We study dynamic $(1+\epsilon)$-approximation algorithms for the all-pairs shortest paths problem in unweighted undirected $n$-node $m$-edge graphs under edge deletions. The fastest algorithm for this problem is a randomized algorithm with a total update time of $\tilde O(mn/\epsilon)$ and constant query time by Roditty and Zwick [FOCS 2004]. The fastest deterministic algorithm is from a 1981 paper by Even and Shiloach [JACM 1981]; it has a total update time of $O(mn^2)$ and constant query time. We improve these results as follows: (1) We present an algorithm with a total update time of $\tilde O(n^{5/2}/\epsilon)$ and constant query time that has an additive error of $2$ in addition to the $1+\epsilon$ multiplicative error. This beats the previous $\tilde O(mn/\epsilon)$ time when $m=\Omega(n^{3/2})$. Note that the additive error is unavoidable since, even in the static case, an $O(n^{3-\delta})$-time (a so-called truly subcubic) combinatorial algorithm with $1+\epsilon$ multiplicative error cannot have an additive error less than $2-\epsilon$, unless we make a major breakthrough for Boolean matrix multiplication [Dor et al. FOCS 1996] and many other long-standing problems [Vassilevska Williams and Williams FOCS 2010]. The algorithm can also be turned into a $(2+\epsilon)$-approximation algorithm (without an additive error) with the same time guarantees, improving the recent $(3+\epsilon)$-approximation algorithm with $\tilde O(n^{5/2+O(\sqrt{\log(1/\epsilon)/\log n})})$ running time of Bernstein and Roditty [SODA 2011] in terms of both approximation and time guarantees. (2) We present a deterministic algorithm with a total update time of $\tilde O(mn/\epsilon)$ and a query time of $O(\log\log n)$. The algorithm has a multiplicative error of $1+\epsilon$ and gives the first improved deterministic algorithm since 1981. It also answers an open question raised by Bernstein [STOC 2013]. Comment: A preliminary version was presented at the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS 2013).
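    The deterministic baseline cited above is the Even-Shiloach structure [JACM 1981], which maintains a single-source BFS tree of an unweighted undirected graph under edge deletions; running one copy per source gives the $O(mn^2)$ all-pairs bound quoted in the abstract. Below is a minimal Python sketch of that decremental BFS idea, simplified and with my own naming; it omits the per-edge bookkeeping that yields the $O(mn)$ total update time per source.

```python
from collections import deque


class EvenShiloachTree:
    """Sketch of the Even-Shiloach decremental BFS tree (JACM 1981):
    exact single-source distances in an unweighted undirected graph under
    edge deletions.  Simplified; the real structure adds per-edge
    bookkeeping to achieve O(mn) total update time over all deletions."""

    def __init__(self, n, edges, source):
        self.n = n
        self.adj = [set() for _ in range(n)]
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)
        # Initial BFS from the source establishes the levels (= distances).
        self.level = [float("inf")] * n
        self.level[source] = 0
        q = deque([source])
        while q:
            u = q.popleft()
            for v in self.adj[u]:
                if self.level[v] == float("inf"):
                    self.level[v] = self.level[u] + 1
                    q.append(v)

    def distance(self, v):
        """Constant-time query: current distance from the source to v."""
        return self.level[v]

    def delete_edge(self, u, v):
        """Remove edge (u, v) and restore the invariant that every vertex
        sits one level below some neighbour (levels only ever increase)."""
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        q = deque([u, v])
        while q:
            w = q.popleft()
            if self.level[w] == 0:
                continue  # the source never moves
            best = min((self.level[x] for x in self.adj[w]),
                       default=float("inf"))
            new_level = best + 1
            if new_level > self.level[w]:
                # Level rises; a finite distance never exceeds n-1, so
                # anything beyond that means w is cut off from the source.
                self.level[w] = new_level if new_level < self.n else float("inf")
                # Neighbours may have lost their support in turn.
                for x in self.adj[w]:
                    q.append(x)


# Example with made-up data: a 5-cycle, then cut one of the two paths.
t = EvenShiloachTree(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], source=0)
t.delete_edge(0, 1)
print([t.distance(v) for v in range(5)])  # [0, 4, 3, 2, 1]
```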

    Constraint-Based Heuristic On-line Test Generation from Non-deterministic I/O EFSMs

    We are investigating on-line model-based test generation from non-deterministic output-observable Input/Output Extended Finite State Machine (I/O EFSM) models of Systems Under Test (SUTs). We propose a novel constraint-based heuristic approach, the Heuristic Reactive Planning Tester (xRPT), for on-line conformance testing of non-deterministic SUTs. A distinctive feature of xRPT is its ability to make well-founded decisions towards the test goals during on-line testing by using the results of off-line bounded static reachability analysis based on the SUT model and the test goal specification. We present xRPT in detail and compare its performance with other existing search strategies and approaches on examples of varying complexity. Comment: In Proceedings MBT 2012, arXiv:1202.582
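    The core idea described above is to guide on-line input selection with the results of an off-line bounded reachability analysis. The Python sketch below illustrates that idea on a toy non-deterministic I/O FSM (not an EFSM, and not the actual xRPT algorithm): an off-line pass computes, for each state, a bound on the number of steps needed to force the goal, and the on-line loop picks the input whose worst-case successor is closest to the goal. The machine, the goal, and the SUT stub are all hypothetical.

```python
import random

# Nondeterministic output-observable I/O FSM (hypothetical example):
# transitions[(state, input)] = {(output, next_state), ...}
transitions = {
    ("s0", "a"): {("x", "s1"), ("y", "s0")},
    ("s0", "b"): {("x", "s2")},
    ("s1", "a"): {("x", "s2")},
    ("s1", "b"): {("y", "s0")},
    ("s2", "a"): {("x", "s2")},
}
inputs = {"a", "b"}
goal = "s2"  # test goal: reach state s2


def distances_to_goal(bound=10):
    """Off-line bounded analysis: for each state, an upper bound on the
    steps needed to force the goal against any nondeterministic choice."""
    dist = {goal: 0}
    for _ in range(bound):
        changed = False
        for (s, i), succs in transitions.items():
            worst = max((dist.get(t, bound + 1) for _, t in succs),
                        default=bound + 1)
            if worst + 1 < dist.get(s, bound + 1):
                dist[s] = worst + 1
                changed = True
        if not changed:
            break
    return dist


def choose_input(state, dist):
    """On-line decision: pick the input whose worst-case successor is
    closest to the goal according to the off-line analysis."""
    best_i, best_d = None, None
    for i in inputs:
        succs = transitions.get((state, i))
        if not succs:
            continue
        d = max(dist.get(t, float("inf")) for _, t in succs)
        if best_d is None or d < best_d:
            best_i, best_d = i, d
    return best_i


def sut_step(state, inp):
    """Stub standing in for the real SUT: resolves nondeterminism randomly.
    Output-observability means the observed output identifies the state."""
    out, nxt = random.choice(sorted(transitions[(state, inp)]))
    return out, nxt


def run_online_test(start="s0", max_steps=20):
    dist = distances_to_goal()
    state = start
    for _ in range(max_steps):
        if state == goal:
            return True
        inp = choose_input(state, dist)
        if inp is None:
            return False
        _, state = sut_step(state, inp)  # observe output, infer next state
    return state == goal


print(run_online_test())
```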

    New Bounds for Randomized List Update in the Paid Exchange Model

    We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again the lower bounds hold for large d.
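    To make the cost model concrete, here is a small Python sketch of the accounting in P^d: an access to the item at position i costs i (or i-1 in the partial cost model), and every rearrangement is a paid exchange of two adjacent items costing d. The algorithm served here is plain move-to-front, used only as an illustration; it is not the paper's new algorithm, and the list, request sequence and value of d in the example are made up.

```python
def serve_mtf_paid(items, requests, d, partial=False):
    """Serve `requests` on list `items` with move-to-front, where every
    rearrangement is a paid exchange of adjacent items costing d each
    (the P^d model); free exchanges are not allowed.

    Access to the item at position i (1-based) costs i in the full cost
    model and i-1 in the partial cost model."""
    lst = list(items)
    total = 0
    for r in requests:
        i = lst.index(r) + 1                  # 1-based position of the item
        total += (i - 1) if partial else i    # access cost
        total += d * (i - 1)                  # i-1 paid adjacent swaps to the front
        lst.insert(0, lst.pop(i - 1))
    return total


# Example with made-up data: three items, a short request sequence, d = 3.
print(serve_mtf_paid(["a", "b", "c"], ["c", "c", "b", "a"], d=3))
```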

    Optimal Lower Bounds for Projective List Update Algorithms

    The list update problem is a classical online problem, with an optimal competitive ratio that is still open, known to be somewhere between 1.5 and 1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and almost all other list update algorithms, like MTF, are projective in the sense that they can be defined by looking only at any pair of list items at a time. Projectivity (also known as "list factoring") simplifies both the description of the algorithm and its analysis, and so far seems to be the only way to define a good online algorithm for lists of arbitrary length. In this paper we characterize all projective list update algorithms and show that their competitive ratio is never smaller than 1.6 in the partial cost model. Therefore, COMB is a best possible projective algorithm in this model. Comment: Version 3 same as version 2, but date in LaTeX \today macro replaced by March 8, 201
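    For concreteness, here is a minimal Python sketch of BIT, one of COMB's two components; COMB itself flips a single biased coin at the start (the published mix runs BIT with probability 4/5 and TS with probability 1/5) and then runs the chosen algorithm, so only BIT is shown and TS is omitted for brevity. The cost returned follows the partial cost model analysed in the abstract.

```python
import random


class BIT:
    """The BIT list update algorithm: each item carries a random bit; on a
    request the bit is complemented, and the item is moved to the front when
    the bit is then set.  With the uniformly random initialization this moves
    an item on every other request to it, with a random phase, and its
    behaviour on any two items depends only on the requests to those two
    items -- which is what makes BIT projective."""

    def __init__(self, items, rng=random):
        self.lst = list(items)
        self.bit = {x: rng.randrange(2) for x in self.lst}

    def access(self, x):
        """Serve a request to x; return the access cost in the partial cost
        model (position minus one)."""
        i = self.lst.index(x)
        self.bit[x] ^= 1
        if self.bit[x] == 1:
            self.lst.insert(0, self.lst.pop(i))  # move to front
        return i


# Example with made-up data.
alg = BIT(["a", "b", "c", "d"])
print(sum(alg.access(x) for x in ["d", "d", "b", "d", "a"]))
```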

    On the List Update Problem with Advice

    We study the online list update problem under the advice model of computation. Under this model, an online algorithm receives partial information about the unknown parts of the input in the form of some bits of advice generated by a benevolent offline oracle. We show that advice of linear size is required and sufficient for a deterministic algorithm to achieve an optimal solution or even a competitive ratio better than $15/14$. On the other hand, we show that, surprisingly, two bits of advice are sufficient to break the lower bound of $2$ on the competitive ratio of deterministic online algorithms and achieve a deterministic algorithm with a competitive ratio of $5/3$. In this upper-bound argument, the bits of advice determine the algorithm with smaller cost among three classical online algorithms: TIMESTAMP and two members of the MTF2 family of algorithms. We also show that MTF2 algorithms are $2.5$-competitive.
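    As a sketch of how two advice bits could drive the choice, the Python below implements a plausible reading of the MTF2 family (move the requested item to the front on every second request to it, the two members differing in the phase) together with a dispatcher that maps the advice to one of the three candidate algorithms. The MTF2 reading, the dispatcher and the bit encoding are my assumptions, and TIMESTAMP is left as a parameter rather than implemented.

```python
class MTF2:
    """One member of the MTF2 family, under my reading of it: the requested
    item is moved to the front on every second request to it; `phase` selects
    whether the move happens on the odd- or even-numbered requests to that
    item (giving the two members mentioned in the abstract)."""

    def __init__(self, items, phase):
        self.lst = list(items)
        self.count = {x: 0 for x in self.lst}   # requests seen per item
        self.phase = phase                      # 1: move on odd requests, 0: on even

    def access(self, x):
        i = self.lst.index(x)
        self.count[x] += 1
        if self.count[x] % 2 == self.phase:
            self.lst.insert(0, self.lst.pop(i))  # move to front
        return i + 1                             # access cost, full cost model


def serve_with_advice(items, requests, advice, timestamp_cls):
    """Hypothetical dispatcher: two advice bits, written by the offline
    oracle, name whichever of the three candidates is cheapest on the
    (unknown) request sequence; the online algorithm simply runs it."""
    if advice == 0b00:
        alg = timestamp_cls(items)        # TIMESTAMP, not implemented here
    elif advice == 0b01:
        alg = MTF2(items, phase=1)
    else:
        alg = MTF2(items, phase=0)
    return sum(alg.access(r) for r in requests)


# Example with made-up data; the advice selects the odd-phase MTF2 member.
print(serve_with_advice(["a", "b", "c"], ["c", "b", "c", "c"], 0b01, None))
```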

    Dynamic Algorithms for the Massively Parallel Computation Model

    The Massively Parallel Computation (MPC) model gained popularity during the last decade and is now seen as the standard model for processing large-scale data. One significant shortcoming of the model is that it assumes the input dataset to be static while, in practice, real-world datasets evolve continuously. To overcome this issue, in this paper we initiate the study of dynamic algorithms in the MPC model. We first discuss the main requirements for a dynamic parallel model and show how to adapt the classic MPC model to capture them. Then we analyze the connection between classic dynamic algorithms and dynamic algorithms in the MPC model. Finally, we provide new efficient dynamic MPC algorithms for a variety of fundamental graph problems, including connectivity, minimum spanning tree and matching. Comment: Accepted to the 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2019).