
    Finding an Approximate Maximum


    Communication complexity of approximate maximum matching in the message-passing model

    We consider the communication complexity of finding an approximate maximum matching in a graph in a multi-party message-passing communication model. The maximum matching problem is one of the most fundamental graph combinatorial problems, with a variety of applications. The input to the problem is a graph $G$ with $n$ vertices whose edge set is partitioned over $k$ sites, together with an approximation ratio parameter $\alpha$. The output is required to be a matching in $G$, reported by one of the sites, whose size is at least a factor $\alpha$ of the size of a maximum matching in $G$. We show that the communication complexity of this problem is $\Omega(\alpha^2 k n)$ bits of information. This bound is tight up to a $\log n$ factor: we construct an algorithm, establish its correctness, and prove a matching upper bound on its communication cost. The lower bound also applies to other graph combinatorial problems in the message-passing communication model, including max-flow and graph sparsification.
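    For intuition about the model, here is a minimal, hypothetical Python sketch of a coordinator-based greedy protocol. It is not the paper's algorithm, and all names in it are invented for illustration; it merely produces a maximal matching, a $1/2$-approximation, at a communication cost of roughly $O(kn \log n)$ bits, consistent with the regime of the lower bound for constant $\alpha$.

```python
# Hypothetical sketch of a greedy protocol in the message-passing model
# (not the paper's algorithm). The coordinator visits the sites in turn and
# broadcasts the set of already-matched vertices; each site greedily matches
# its own edges against that set and replies with only its local matching
# (at most n/2 edges, each costing O(log n) bits to transmit).

def greedy_matching_protocol(site_edges):
    """site_edges: a list of k edge lists, one per site (a partition of E)."""
    matched = set()            # vertices the coordinator has matched so far
    matching = []
    for edges in site_edges:   # one communication round per site
        # the site filters and matches locally against the broadcast set
        local, seen = [], set(matched)
        for u, v in edges:
            if u not in seen and v not in seen:
                local.append((u, v))
                seen.update((u, v))
        for u, v in local:     # the coordinator extends the global matching
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
    return matching

# Example: 3 sites holding a partition of the edges of a 6-vertex graph.
sites = [[(0, 1), (2, 3)], [(1, 2), (4, 5)], [(3, 4)]]
print(greedy_matching_protocol(sites))   # [(0, 1), (2, 3), (4, 5)]
```

    Every edge of $G$ is examined while both its endpoints' statuses are current, so the final matching is maximal in $G$, which is what gives the $1/2$-approximation guarantee.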

    Distributed Maximum Matching in Bounded Degree Graphs

    We present deterministic distributed algorithms for computing approximate maximum cardinality matchings and approximate maximum weight matchings. Our algorithm for the unweighted case computes a matching whose size is at least $(1-\epsilon)$ times the optimal in $\Delta^{O(1/\epsilon)} + O(1/\epsilon^2) \cdot \log^*(n)$ rounds, where $n$ is the number of vertices in the graph and $\Delta$ is the maximum degree. Our algorithm for the edge-weighted case computes a matching whose weight is at least $(1-\epsilon)$ times the optimal in $\log(\min\{1/w_{\min}, n/\epsilon\})^{O(1/\epsilon)} \cdot (\Delta^{O(1/\epsilon)} + \log^*(n))$ rounds for edge weights in $[w_{\min}, 1]$. The best previous algorithms for both the unweighted and the weighted case are by Lotker, Patt-Shamir, and Pettie (SPAA 2008). For the unweighted case they give a randomized $(1-\epsilon)$-approximation algorithm that runs in $O(\log(n)/\epsilon^3)$ rounds. For the weighted case they give a randomized $(1/2-\epsilon)$-approximation algorithm that runs in $O(\log(\epsilon^{-1}) \cdot \log(n))$ rounds. Hence, our results improve on the previous ones when the parameters $\Delta$, $\epsilon$ and $w_{\min}$ are constants (where we reduce the number of rounds from $O(\log(n))$ to $O(\log^*(n))$), and more generally when $\Delta$, $1/\epsilon$ and $1/w_{\min}$ are sufficiently slowly increasing functions of $n$. Moreover, our algorithms are deterministic rather than randomized.
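    The $(1-\epsilon)$ guarantee in results like this typically rests on the classical fact that a matching with no short augmenting paths is already near-optimal. The following minimal, centralized Python sketch (emphatically not the paper's distributed algorithm; all names are ours) illustrates that principle by repeatedly finding and flipping augmenting paths of bounded length:

```python
import math

# Centralized sketch of the short-augmenting-path principle (hypothetical,
# not the distributed algorithm): a matching with no augmenting path of
# <= 2k-1 edges has size >= k/(k+1) of the maximum, so bounding the search
# length by roughly 2/eps edges yields a (1-eps)-approximation.

def find_augmenting_path(adj, mate, start, max_len):
    """DFS for an alternating path from the free vertex `start` to another
    free vertex, using at most max_len edges; returns its vertices or None."""
    def dfs(u, path, need_matched):
        if len(path) - 1 >= max_len:          # no room for another edge
            return None
        for v in adj[u]:
            if v in path or (mate.get(u) == v) != need_matched:
                continue
            if not need_matched and mate.get(v) is None:
                return path + [v]             # reached a free vertex
            result = dfs(v, path + [v], not need_matched)
            if result:
                return result
        return None
    return dfs(start, [start], False)

def approx_max_matching(adj, eps):
    """adj: dict mapping each vertex to its set of neighbors."""
    mate = {}
    max_len = 2 * math.ceil(1 / eps) - 1      # augmenting-path length bound
    improved = True
    while improved:
        improved = False
        for s in adj:
            if mate.get(s) is None:
                p = find_augmenting_path(adj, mate, s, max_len)
                if p:                         # flip the path: its unmatched
                    for i in range(0, len(p) - 1, 2):   # edges become matched
                        mate[p[i]], mate[p[i + 1]] = p[i + 1], p[i]
                    improved = True
    return {(u, v) for u, v in mate.items() if u < v}

# Example: the path graph 0-1-2-3; eps = 1/2 already finds both edges.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(approx_max_matching(adj, 0.5))          # {(0, 1), (2, 3)}
```

    The distributed difficulty the paper addresses is finding such short augmenting paths with only local communication; the sketch above only shows why bounding the path length suffices for the approximation ratio.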

    Transforming Comparison Model Lower Bounds to the PRAM

    This note provides general transformations of lower bounds in Valiant's parallel comparison decision tree model to lower bounds in the priority concurrent-read concurrent-write parallel random access machine model. The proofs rely on standard Ramsey-theoretic arguments that simplify the structure of the computation by restricting the input domain. The transformation of comparison model lower bounds, which are usually easier to obtain, to the parallel random access machine unifies some known lower bounds and gives new lower bounds for several problems.

    MEMS 411: The Jolley Trolley

    Since the advent of architecture, cranes have been an essential tool for moving and placing heavy loads, and cranes of all sorts remain in use today. This project focuses on the design, fabrication, and testing of a small-scale model of an overhead gantry crane to be used by our customer, Dr. Jackson Potter, as a classroom demonstration of the engineering techniques used to control a real crane.

    From Random Search to Bandit Learning in Metric Measure Spaces

    Random Search is one of the most widely used methods for Hyperparameter Optimization, and is critical to the success of deep learning models. Despite its astonishing performance, little non-heuristic theory has been developed to describe its underlying working mechanism. This paper gives a theoretical account of Random Search. We introduce the concept of the \emph{scattering dimension}, which describes the landscape of the underlying function and quantifies the performance of random search. We show that, when the environment is noise-free, the output of random search converges to the optimal value in probability at rate $\widetilde{\mathcal{O}}\left(\left(\frac{1}{T}\right)^{\frac{1}{d_s}}\right)$, where $d_s \ge 0$ is the scattering dimension of the underlying function. When the observed function values are corrupted by bounded i.i.d. noise, the output of random search converges to the optimal value in probability at rate $\widetilde{\mathcal{O}}\left(\left(\frac{1}{T}\right)^{\frac{1}{d_s+1}}\right)$. In addition, based on the principles of random search, we introduce an algorithm, called BLiN-MOS, for Lipschitz bandits in doubling metric spaces that are also endowed with a Borel measure, and show that BLiN-MOS achieves a regret rate of order $\widetilde{\mathcal{O}}\left(T^{\frac{d_z}{d_z+1}}\right)$, where $d_z$ is the zooming dimension of the problem instance. Our results show that, under certain conditions, the known information-theoretic lower bound for Lipschitz bandits, $\Omega\left(T^{\frac{d_z+1}{d_z+2}}\right)$, can be improved.
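    For intuition, here is a minimal Python sketch of the pure Random Search procedure the abstract analyzes, with an invented 1-D objective standing in for a function on a metric measure space; the printed optimality gap shrinks as the budget $T$ grows, in line with the $(1/T)^{1/d_s}$ rate in the noise-free case:

```python
import random

# Minimal sketch of pure Random Search (the procedure the abstract analyzes,
# on a hypothetical objective of our own choosing): draw T points uniformly
# from the domain, evaluate f at each, and report the best value seen.

def random_search(f, T, lo=0.0, hi=1.0, seed=0):
    rng = random.Random(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(T):
        x = rng.uniform(lo, hi)          # uniform sampling over the domain
        val = f(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical noise-free objective with maximum value 1 at x = 0.3.
f = lambda x: 1.0 - abs(x - 0.3)
for T in (10, 100, 1000, 10000):
    _, v = random_search(f, T)
    print(T, 1.0 - v)                    # the optimality gap shrinks with T
```

    In the noisy setting the abstract describes, each evaluation would additionally be corrupted by bounded i.i.d. noise, which is what slows the rate from $(1/T)^{1/d_s}$ to $(1/T)^{1/(d_s+1)}$.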

    Distributed Load Balancing: A New Framework and Improved Guarantees
