
    Efficient motion planning for problems lacking optimal substructure

    We consider the motion-planning problem of computing a collision-free path for a robot in the presence of risk zones. The robot is allowed to travel in these zones but is penalized in a super-linear fashion for the consecutive, accumulated time spent there. We suggest a natural cost function that balances path length and risk-exposure time. Specifically, we consider the discrete setting where we are given a graph, or a roadmap, and we wish to compute the minimal-cost path under this cost function. Interestingly, paths defined using our cost function do not have an optimal substructure: subpaths of an optimal path are not necessarily optimal. Thus, the Bellman condition is not satisfied and standard graph-search algorithms such as Dijkstra's cannot be used. We present a path-finding algorithm that can be seen as a natural generalization of Dijkstra's algorithm. Our algorithm runs in $O\left((n_B \cdot n) \log(n_B \cdot n) + n_B \cdot m\right)$ time, where $n$ and $m$ are the number of vertices and edges of the graph, respectively, and $n_B$ is the number of intersections between edges and the boundary of the risk zone. We present simulations on robotic platforms demonstrating both the natural paths produced by our cost function and the computational efficiency of our algorithm.
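
    Why Dijkstra fails here is worth spelling out: the cost of extending a path depends on how long that path has already spent inside the risk zone, so two paths reaching the same vertex are not interchangeable. A standard workaround, sketched below, is a Dijkstra-style search over augmented (vertex, consecutive-exposure) states. This is only an illustrative brute-force sketch with made-up names (`graph`, `in_risk`, `penalty`), not the paper's algorithm, and its augmented state space can grow large.

```python
import heapq

def risky_shortest_path(graph, in_risk, source, target,
                        penalty=lambda t: t * t):
    """Dijkstra-style search over augmented (vertex, exposure) states.

    graph:   dict v -> iterable of (u, length) edges, lengths >= 0
             (node ids assumed hashable and comparable)
    in_risk: dict v -> True if v lies inside the risk zone
    penalty: non-decreasing super-linear penalty on consecutive exposure

    Plain Dijkstra over vertices fails because subpaths of optimal paths
    need not be optimal; searching over (vertex, exposure) labels restores
    the Bellman condition at the price of a larger state space.
    """
    pq = [(0.0, source, 0.0)]    # (cost, vertex, consecutive exposure time)
    best = {}                    # (vertex, exposure) -> best cost seen
    while pq:
        cost, v, rt = heapq.heappop(pq)
        if v == target:
            return cost          # first pop of the target is optimal
        if best.get((v, rt), float("inf")) < cost:
            continue             # stale label
        for u, length in graph.get(v, ()):
            if in_risk.get(u, False):
                new_rt = rt + length
                # pay the marginal super-linear penalty for the extra exposure
                new_cost = cost + length + penalty(new_rt) - penalty(rt)
            else:
                new_rt = 0.0     # leaving the zone resets the exposure clock
                new_cost = cost + length
            key = (u, new_rt)
            if new_cost < best.get(key, float("inf")):
                best[key] = new_cost
                heapq.heappush(pq, (new_cost, u, new_rt))
    return float("inf")
```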

    On Deletion in Delaunay Triangulation

    This paper presents how the space of spheres and shelling may be used to delete a point from a $d$-dimensional triangulation efficiently. In dimension two, if $k$ is the degree of the deleted vertex, the complexity is $O(k \log k)$, but we note that this bound applies only to low-cost operations, while time-consuming computations are done only a linear number of times. This algorithm may be viewed as a variation of Heller's algorithm, which is popular in the geographic information system community. Unfortunately, Heller's algorithm is incorrect, as explained in this paper. Comment: 15 pages, 5 figures. In Proc. 15th Annu. ACM Sympos. Comput. Geom., 181--188, 1999.
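
    For intuition, deleting a vertex leaves a star-shaped hole bounded by its link, which must be retriangulated so the Delaunay property is restored. The sketch below is a naive $O(k^2)$ ear-clipping version of that step using the standard in-circle predicate; it is not the paper's space-of-spheres/shelling algorithm, and all names (`ring` as a CCW list of point tuples) are illustrative.

```python
def in_circle(a, b, c, d):
    """> 0 iff d lies inside the circumcircle of the CCW triangle (a, b, c);
    the standard lifted 3x3 determinant, expanded by hand."""
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    return ((adx * adx + ady * ady) * (bdx * cdy - cdx * bdy)
            - (bdx * bdx + bdy * bdy) * (adx * cdy - cdx * ady)
            + (cdx * cdx + cdy * cdy) * (adx * bdy - bdx * ady))

def orient(a, b, c):
    """> 0 iff a, b, c make a left turn (counter-clockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def retriangulate_hole(ring):
    """Naive O(k^2) retriangulation of the star-shaped hole left by deleting
    a degree-k vertex: repeatedly clip a convex 'ear' whose circumcircle
    contains no other ring vertex. ring: CCW list of (x, y) tuples."""
    ring = list(ring)
    triangles = []
    while len(ring) > 3:
        for i in range(len(ring)):
            a, b, c = ring[i - 1], ring[i], ring[(i + 1) % len(ring)]
            others = [p for p in ring if p not in (a, b, c)]
            if orient(a, b, c) > 0 and all(in_circle(a, b, c, p) <= 0
                                           for p in others):
                triangles.append((a, b, c))
                del ring[i]   # clip the ear and continue on the smaller ring
                break
        else:
            raise ValueError("no valid ear found; degenerate input ring")
    triangles.append(tuple(ring))
    return triangles
```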

    Two Compact Incremental Prime Sieves

    A prime sieve is an algorithm that finds the primes up to a bound $n$. We say that a prime sieve is incremental if it can quickly determine whether $n+1$ is prime after having found all primes up to $n$. We say a sieve is compact if it uses roughly $\sqrt{n}$ space or less. In this paper we present two new results: (1) we describe the rolling sieve, a practical, incremental prime sieve that takes $O(n \log\log n)$ time and $O(\sqrt{n} \log n)$ bits of space, and (2) we show how to modify the sieve of Atkin and Bernstein (2004) to obtain a sieve that is simultaneously sublinear, compact, and incremental. The second result solves an open problem posed by Paul Pritchard in 1994.
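
    To make "incremental" concrete, here is a minimal dictionary-based incremental sieve of Eratosthenes: it decides each integer as it arrives, and enrolls a prime $p$ in the composite table only once $p^2$ is reached, so the table holds roughly the primes up to $\sqrt{n}$. This is a well-known compact-ish cousin of the rolling sieve, not the paper's array-of-stacks structure; the function name is ours.

```python
def incremental_primes():
    """Yield primes one at a time, deciding each new integer as it arrives.

    A prime p is inserted into the composite table only when n reaches p*p
    (the 'postponement' trick), so the table holds ~pi(sqrt(n)) entries.
    A recursive sub-sieve supplies the primes up to sqrt(n)."""
    yield 2
    yield 3                        # hardcoded so the recursion bottoms out
    composites = {}                # upcoming odd composite -> witnessing primes
    base = incremental_primes()    # sub-sieve for primes up to sqrt(n)
    next(base)                     # skip 2; we only step through odd numbers
    p = next(base)                 # == 3
    p_square = p * p               # == 9
    n = 3
    while True:
        n += 2
        if n in composites:
            # n is composite: roll each witnessing prime to its next odd multiple
            for q in composites.pop(n):
                composites.setdefault(n + 2 * q, []).append(q)
        elif n == p_square:
            # n = p*p is the first composite witnessed by p; enroll p now
            composites[n + 2 * p] = [p]
            p = next(base)
            p_square = p * p
        else:
            yield n                # no witness and below p*p: n is prime
```

    Pulling values with next() yields 2, 3, 5, 7, 11, ... indefinitely; the postponement trick is what keeps the table near $\sqrt{n}$ entries, though this sketch makes no claim on the paper's precise time and bit-space bounds.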

    Incremental and Decremental Maintenance of Planar Width

    We present an algorithm for maintaining the width of a planar point set dynamically, as points are inserted or deleted. Our algorithm takes time $O(k n^\epsilon)$ per update, where $k$ is the amount of change the update causes in the convex hull, $n$ is the number of points in the set, and $\epsilon$ is an arbitrarily small constant. For incremental or decremental update sequences, the amortized time per update is $O(n^\epsilon)$. Comment: 7 pages; 2 figures. A preliminary version of this paper was presented at the 10th ACM/SIAM Symp. Discrete Algorithms (SODA '99); this is the journal version, and will appear in J. Algorithms.
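
    The static primitive behind this problem: the width of a point set equals the width of its convex hull, and the minimum-width enclosing slab is flush with some hull edge. The sketch below recomputes the width from scratch per call; the paper's contribution is precisely avoiding such recomputation as points come and go. All function names are ours.

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

def width(points):
    """Width of the point set: minimum over hull edges of the distance from
    the edge's supporting line to the farthest hull vertex. O(h^2) here;
    rotating calipers would bring the scan down to O(h)."""
    h = convex_hull(points)
    if len(h) < 3:
        return 0.0
    best = float("inf")
    for i in range(len(h)):
        a, b = h[i], h[(i + 1) % len(h)]
        edge_len = ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
        farthest = max(abs(cross(a, b, p)) for p in h) / edge_len
        best = min(best, farthest)
    return best
```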

    Fully-dynamic Approximation of Betweenness Centrality

    Betweenness is a well-known centrality measure that ranks the nodes of a network according to their participation in shortest paths. Since an exact computation is prohibitive in large networks, several approximation algorithms have been proposed. Besides that, recent years have seen the publication of dynamic algorithms for efficient recomputation of betweenness in evolving networks. In previous work we proposed the first semi-dynamic algorithms that recompute an approximation of betweenness in connected graphs after batches of edge insertions. In this paper we propose the first fully-dynamic approximation algorithms (for weighted and unweighted undirected graphs that need not be connected) with a provable guarantee on the maximum approximation error. The transfer to fully-dynamic and disconnected graphs raises additional algorithmic problems that could be of independent interest. In particular, we propose a new upper bound on the vertex diameter for weighted undirected graphs. For both weighted and unweighted graphs, we also propose the first fully-dynamic algorithms that keep track of such an upper bound. In addition, we extend our former algorithm for semi-dynamic BFS to batches of both edge insertions and deletions. Using approximation, our algorithms are the first to make in-memory computation of betweenness in fully-dynamic networks with millions of edges feasible. Our experiments show that they achieve substantial speedups compared to recomputation, up to several orders of magnitude.
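
    As background on the sampling approach, here is a hedged sketch of a static pair-sampling estimator in the spirit of what such approximation algorithms build on: sample a node pair, pick a uniformly random shortest path between them via BFS path counts, and credit its interior vertices. This is only the static building block (with illustrative names, unweighted graphs only), not the authors' fully-dynamic algorithm, which updates such estimates incrementally.

```python
import random
from collections import deque, defaultdict

def approx_betweenness(adj, n_samples=1000, seed=0):
    """Sampling-based betweenness approximation for an unweighted graph.

    adj: dict node -> iterable of neighbours (undirected).
    Each sample contributes 1/n_samples to every interior vertex of one
    uniformly random shortest path between a random node pair."""
    rng = random.Random(seed)
    nodes = list(adj)
    score = defaultdict(float)
    for _ in range(n_samples):
        s, t = rng.sample(nodes, 2)
        # BFS from s, counting shortest paths (sigma) and recording predecessors
        dist, sigma, pred = {s: 0}, {s: 1}, defaultdict(list)
        q = deque([s])
        while q:
            v = q.popleft()
            if v == t:
                break              # sigma[t] and pred[t] are complete here
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
                if dist[u] == dist[v] + 1:
                    sigma[u] = sigma.get(u, 0) + sigma[v]
                    pred[u].append(v)
        if t not in dist:
            continue               # s and t disconnected: sample contributes nothing
        # walk back from t, choosing each predecessor with probability
        # proportional to its path count, so the whole path is uniform
        v = t
        while v != s:
            v = rng.choices(pred[v], weights=[sigma[p] for p in pred[v]])[0]
            if v != s:
                score[v] += 1.0 / n_samples
    return dict(score)
```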

    Best-Choice Edge Grafting for Efficient Structure Learning of Markov Random Fields

    Incremental methods for structure learning of pairwise Markov random fields (MRFs), such as grafting, improve scalability by avoiding inference over the entire feature space in each optimization step. Instead, inference is performed over an incrementally grown active set of features. In this paper, we address key computational bottlenecks from which current incremental techniques still suffer by introducing best-choice edge grafting, an incremental, structured method that activates edges as groups of features in a streaming setting. The method uses a reservoir of edges that satisfy an activation condition, approximating the search for the optimal edge to activate. It also reorganizes the search space using search-history and structure heuristics. Experiments show a significant speedup for structure learning and a controllable trade-off between the speed and the quality of learning.
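
    To illustrate the reservoir idea in isolation, the sketch below streams inactive candidate edges, keeps a bounded reservoir of those passing an activation test on a gradient norm, and activates the top-scoring ones without rescanning the full search space. It is a simplification, not the paper's procedure: `grad_norm` is an assumed callback standing in for the MRF-specific gradient computation, and the heuristics on search history and structure are omitted.

```python
import heapq

def select_edges_to_activate(candidate_edges, grad_norm, threshold,
                             k=5, reservoir_size=50):
    """Reservoir-based best-choice activation (illustrative sketch).

    candidate_edges: iterable of inactive edges, consumed as a stream.
    grad_norm(edge): assumed callback returning the norm of the objective
        gradient w.r.t. the edge's (currently inactive) feature weights,
        i.e. how strongly the edge violates the optimality condition.
    Returns the k best edges among those kept in the reservoir."""
    reservoir = []  # min-heap of (score, tie_breaker, edge)
    for i, edge in enumerate(candidate_edges):
        score = grad_norm(edge)
        if score < threshold:
            continue                             # fails the activation condition
        entry = (score, i, edge)                 # i breaks ties, never compares edges
        if len(reservoir) < reservoir_size:
            heapq.heappush(reservoir, entry)
        elif entry > reservoir[0]:
            heapq.heapreplace(reservoir, entry)  # evict the current worst
    # activate the best k edges found, without a second pass over the space
    return [edge for _, _, edge in heapq.nlargest(k, reservoir)]
```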

    Generating Representative ISP Topologies From First-Principles

    Understanding and modeling the factors that underlie the growth and evolution of network topologies are basic questions that impact capacity planning, forecasting, and protocol research. Early topology generation work focused on generating network-wide connectivity maps, either at the AS-level or the router-level, typically with an eye towards reproducing abstract properties of observed topologies. More recently, however, advocates of an alternative "first-principles" approach have questioned the feasibility of realizing representative topologies with simple generative models that do not explicitly incorporate real-world constraints, such as the relative costs of router configurations, into the model. Our work synthesizes these two lines of research by designing a topology generation mechanism that incorporates first-principles constraints. Our goal is more modest than constructing an Internet-wide topology: we aim to generate representative topologies for single ISPs. However, our methods also go well beyond previous work, as we annotate these topologies with representative capacity and latency information. Taking only the demand for network services over a given region as input, we propose a natural cost model for building and interconnecting PoPs and formulate the resulting optimization problem faced by an ISP. We devise hill-climbing heuristics for this problem and demonstrate that the solutions we obtain are quantitatively similar to those in measured router-level ISP topologies, with respect to both topological properties and fault-tolerance.
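
    To make the hill-climbing formulation concrete, here is a generic sketch for a facility-location-style stand-in objective: place PoPs to minimise traffic-weighted access distance plus a crude interconnection cost, accepting random local moves only when they improve the objective. The paper's actual cost model (router configurations, capacities, latencies) is richer; everything below, including the chain backbone and all names, is an illustrative assumption.

```python
import random

def hill_climb_pops(demand, n_pops, link_cost=1.0, seed=0, iters=2000):
    """Greedy hill climbing for PoP placement (illustrative sketch).

    demand: list of (x, y, traffic) tuples in the unit square.
    Objective: traffic-weighted distance of each demand point to its
    nearest PoP, plus a crude cost for interconnecting the PoPs."""
    rng = random.Random(seed)

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def cost(pops):
        # access cost: each demand point homes to its nearest PoP
        access = sum(w * min(dist((x, y), p) for p in pops)
                     for x, y, w in demand)
        # backbone cost: connect PoPs in a chain (a stand-in for a real
        # interconnection design)
        backbone = link_cost * sum(dist(pops[i], pops[i + 1])
                                   for i in range(len(pops) - 1))
        return access + backbone

    pops = [(rng.random(), rng.random()) for _ in range(n_pops)]
    best = cost(pops)
    for _ in range(iters):
        i = rng.randrange(n_pops)
        # propose a small random move of one PoP
        candidate = list(pops)
        candidate[i] = (pops[i][0] + rng.gauss(0, 0.05),
                        pops[i][1] + rng.gauss(0, 0.05))
        c = cost(candidate)
        if c < best:              # greedy: accept only improving moves
            pops, best = candidate, c
    return pops, best
```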