    Smoothed Analysis on Connected Graphs

    The main paradigm of smoothed analysis on graphs suggests that for any large graph G in a certain class of graphs, slightly perturbing the edges of G at random (usually by adding a few random edges to G) typically results in a graph with much "nicer" properties. In this work we study smoothed analysis on trees or, equivalently, on connected graphs. Given an n-vertex connected graph G, form a random supergraph G* of G by turning every pair of vertices of G into an edge with probability epsilon/n, where epsilon is a small positive constant. This perturbation model has been studied previously in several contexts, including smoothed analysis, small-world networks, and combinatorics. Connected graphs can be bad expanders, can have very large diameter, and may contain no long paths. In contrast, we show that if G is an n-vertex connected graph then typically G* has edge expansion Omega(1/log n), diameter O(log n), vertex expansion Omega(1/log n), and contains a path of length Omega(n), where for the last two properties we additionally assume that G has bounded maximum degree. Moreover, we show that if G has bounded degeneracy, then typically the mixing time of the lazy random walk on G* is O(log^2 n). All these results are asymptotically tight.
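    The perturbation model is concrete enough to sketch directly. Below is a minimal Python illustration (using networkx; the helper name perturb and the choice of test graph and epsilon are illustrative, not from the paper): every vertex pair is independently turned into an edge with probability epsilon/n.

        # Minimal sketch of the perturbation model G -> G* described above.
        # Assumptions: the function name `perturb`, epsilon = 0.1, and the
        # path-graph example are illustrative, not taken from the paper.
        import random
        import networkx as nx

        def perturb(G: nx.Graph, epsilon: float = 0.1) -> nx.Graph:
            """Return G*: a copy of G in which every vertex pair is
            additionally turned into an edge with probability epsilon/n."""
            n = G.number_of_nodes()
            Gstar = G.copy()
            p = epsilon / n
            nodes = list(G.nodes)
            for i in range(n):
                for j in range(i + 1, n):
                    if random.random() < p:
                        Gstar.add_edge(nodes[i], nodes[j])
            return Gstar

        # Example: a path graph is connected but has diameter n - 1;
        # after perturbation the diameter is typically O(log n).
        G = nx.path_graph(1000)
        Gstar = perturb(G, epsilon=0.1)
        print(nx.diameter(Gstar))

    Since G* is a supergraph of a connected graph, it stays connected, so the diameter computation in the example is well defined.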

    Smoothed Analysis of Dynamic Networks

    We generalize the technique of smoothed analysis to distributed algorithms in dynamic network models. Whereas standard smoothed analysis studies the impact of small random perturbations of input values on algorithm performance metrics, dynamic graph smoothed analysis studies the impact of random perturbations of the underlying changing network graph topologies. As in the original application of smoothed analysis, our goal is to study whether known strong lower bounds in dynamic network models are robust or fragile: do they withstand small (random) perturbations, or do such deviations push the graphs far enough from a precise pathological instance to enable much better performance? Fragile lower bounds are likely not relevant for real-world deployment, while robust lower bounds represent a true difficulty caused by dynamic behavior. We apply this technique to three standard dynamic network problems with known strong worst-case lower bounds: random walks, flooding, and aggregation. We prove that these bounds span a spectrum of robustness when subjected to smoothing: some are extremely fragile (random walks), some are moderately fragile/robust (flooding), and some are extremely robust (aggregation). Comment: 20 pages.
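    To make the idea tangible, here is a simplified Python sketch of smoothing a dynamic graph sequence: at each round, toggle up to k uniformly random vertex pairs, rejecting a removal if it would disconnect the graph. This approximates, but is not necessarily identical to, the paper's exact smoothing definition; the function names and parameters are illustrative.

        # Simplified sketch: per-round k-smoothing of a dynamic graph
        # sequence, keeping each graph inside the connected family.
        # Assumption: this toggle-and-reject scheme is an approximation
        # of smoothing, not the paper's precise sampling procedure.
        import random
        import networkx as nx

        def smooth(G: nx.Graph, k: int) -> nx.Graph:
            H = G.copy()
            nodes = list(H.nodes)
            for _ in range(k):
                u, v = random.sample(nodes, 2)
                if H.has_edge(u, v):
                    H.remove_edge(u, v)
                    if not nx.is_connected(H):  # stay connected
                        H.add_edge(u, v)
                else:
                    H.add_edge(u, v)
            return H

        # Smooth each round's topology independently.
        dynamic = [nx.cycle_graph(50) for _ in range(10)]
        smoothed = [smooth(G, k=4) for G in dynamic]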

    Theoretical Foundations of Autoregressive Models for Time Series on Acyclic Directed Graphs

    Three classes of models for time series on acyclic directed graphs are considered. First, a review of tree-structured models constructed from a nested partitioning of the observation interval is given. This nested partitioning leads to several resolution scales. The concept of mass balance, which allows the average over an interval to be interpreted as the sum of the averages over its sub-intervals, implies linear restrictions in the tree-structured model. Under a white noise assumption for transition and observation noise there is a change-of-resolution Kalman filter for linear least squares prediction of interval averages (Chou, 1991). This class of models is generalized by modeling transition noise on the same scale in linear state space form. The third class deals with models on a more general class of directed acyclic graphs in which nodes are allowed to have two parents. We show that these models have a linear state space representation with white system noise and coloured observation noise.
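    The mass-balance constraint is easy to illustrate numerically: for equal-length sub-intervals, the parent average is the equally weighted sum, i.e. the mean, of the child averages. The following minimal Python sketch builds the dyadic multiresolution pyramid and checks the constraint (names such as averages are illustrative, not from the paper).

        # Minimal illustration of mass balance on a dyadic tree: each
        # coarse-scale average equals the mean of its two child averages.
        # Assumption: equal-length sub-intervals at every scale.
        import numpy as np

        x = np.random.randn(8)  # finest-scale observations

        def averages(values: np.ndarray) -> list:
            """Interval averages from the finest scale up to the root."""
            scales = [values]
            while len(scales[-1]) > 1:
                v = scales[-1]
                scales.append((v[0::2] + v[1::2]) / 2.0)
            return scales

        scales = averages(x)
        # Mass balance: every coarse average is recoverable from its children.
        for parent, child in zip(scales[1:], scales[:-1]):
            assert np.allclose(parent, (child[0::2] + child[1::2]) / 2.0)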

    Arcfinder: An algorithm for the automatic detection of gravitational arcs

    We present an efficient algorithm designed for and capable of detecting elongated, thin features such as lines and curves in astronomical images, and its application to the automatic detection of gravitational arcs. The algorithm is sufficiently robust to detect such features even if their surface brightness is near the pixel noise in the image, yet the number of spurious detections remains low. The algorithm subdivides the image into a grid of overlapping cells, which are iteratively shifted towards a local centre of brightness in their immediate neighbourhood. It then computes the ellipticity of each cell and combines cells with correlated ellipticities into objects. In a next step these objects are combined into graphs, which are then further processed to determine the properties of the detected objects. We demonstrate the operation and the efficiency of the algorithm by applying it to HST images of galaxy clusters known to contain gravitational arcs. The algorithm completes the analysis of an image with 3000x3000 pixels in about 4 seconds on an ordinary desktop PC. We discuss further applications, the method's remaining problems, and possible approaches to their solution. Comment: 12 pages, 12 figures.
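    The per-cell measurements the abstract describes can be sketched with brightness-weighted moments: the first moments give the local centre of brightness a cell is shifted towards, and the second moments give an ellipticity estimate. The Python sketch below illustrates this under those assumptions; the function names are illustrative and not taken from the Arcfinder code.

        # Sketch of per-cell measurements: centre of brightness (first
        # moments) and ellipticity 1 - b/a from the second-moment tensor.
        # Assumptions: nonnegative pixel values; names are illustrative.
        import numpy as np

        def cell_moments(img: np.ndarray):
            """Brightness-weighted centre and second moments of a cell."""
            ys, xs = np.indices(img.shape)
            w = img.sum()
            cy = (ys * img).sum() / w
            cx = (xs * img).sum() / w
            qxx = ((xs - cx) ** 2 * img).sum() / w
            qyy = ((ys - cy) ** 2 * img).sum() / w
            qxy = ((xs - cx) * (ys - cy) * img).sum() / w
            return (cy, cx), np.array([[qxx, qxy], [qxy, qyy]])

        def ellipticity(Q: np.ndarray) -> float:
            """Ellipticity 1 - b/a from the eigenvalues of Q."""
            b2, a2 = np.linalg.eigvalsh(Q)  # ascending: b2 <= a2
            return 1.0 - np.sqrt(max(b2, 0.0) / a2)

        # Toy cell with a bright diagonal streak: high ellipticity expected.
        cell = np.random.rand(16, 16) + 3.0 * np.eye(16)
        centre, Q = cell_moments(cell)
        print(centre, ellipticity(Q))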