156,401 research outputs found

    Lower bounds for weak sense of direction

    Abstract: A graph with n vertices and maximum degree Δ cannot be given weak sense of direction using less than Δ colours. It is known that n colours are always sufficient, but it has been conjectured that just Δ+1 are really needed. On the contrary, we show that for sufficiently large n there are graphs requiring Δ + Ω((n log log n)/log n) colours. Moreover, we prove that, in terms of the maximum degree, Ω(Δ log log Δ) colours are necessary.
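
    Restated as displayed bounds (the symbol χ_wsd(G), the minimum number of colours giving G weak sense of direction, is our shorthand and not the abstract's notation):

        \chi_{\mathrm{wsd}}(G) \;\ge\; \Delta + \Omega\!\left(\frac{n \log\log n}{\log n}\right)
        \quad\text{for some graphs } G \text{ on sufficiently many vertices,}
        \qquad
        \chi_{\mathrm{wsd}}(G) \;=\; \Omega(\Delta \log\log \Delta)
        \quad\text{in terms of the maximum degree alone.}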

    Deterministic Symmetry Breaking in Ring Networks

    We study a distributed coordination mechanism for uniform agents located on a circle. The agents perform their actions in synchronised rounds. At the beginning of each round an agent chooses the direction of its movement (clockwise, anticlockwise, or idle) and moves at unit speed during this round. Agents are not allowed to pass one another; when an agent collides with another, it instantly starts moving at the same speed in the opposite direction (without exchanging any information with the other agent). However, at the end of each round each agent has access to limited information regarding its trajectory of movement during this round. We assume that n mobile agents are initially located on a circle of unit circumference at arbitrary but distinct positions unknown to other agents. The agents are equipped with unique identifiers from a fixed range. The location discovery task to be performed by each agent is to determine the initial position of every other agent. Our main result states that, if the only available information about movement in a round is the distance between the initial and the final position, then there is a superlinear lower bound on the time needed to solve the location discovery problem. Interestingly, this result corresponds to a combinatorial symmetry-breaking problem, which might be of independent interest. If, on the other hand, an agent has access to the distance to its first collision with another agent in a round, we design an asymptotically efficient and close-to-optimal solution for the location discovery problem. Comment: Conference version accepted to ICDCS 201
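
    The two feedback models in this abstract can be made concrete with a small event-driven simulation of one round. The sketch below is not taken from the paper: it assumes every agent moves (no idle option), a round of length duration on a circle of circumference 1, and that a collision simply reverses both colliding agents' directions; the function and parameter names are invented for illustration.

        def simulate_round(positions, directions, speed=1.0, duration=0.25):
            """One synchronous round for agents on a circle of circumference 1.

            positions  -- distinct points in [0, 1)
            directions -- +1 (anticlockwise) or -1 (clockwise) for each agent
            Returns (final_positions, first_collision_times); an entry of the
            second list is None if that agent never collides in the round.
            """
            n = len(positions)
            pos = list(positions)
            vel = [d * speed for d in directions]
            first_hit = [None] * n
            t = 0.0
            while t < duration:
                # Agents never pass one another, so only neighbours can collide.
                order = sorted(range(n), key=lambda i: pos[i])
                best_dt, colliding = None, []
                for k in range(n):
                    i, j = order[k], order[(k + 1) % n]   # j is i's anticlockwise neighbour
                    gap = (pos[j] - pos[i]) % 1.0
                    closing = vel[i] - vel[j]             # rate at which the gap shrinks
                    if gap > 1e-12 and closing > 1e-12:
                        dt = gap / closing
                        if best_dt is None or dt < best_dt - 1e-12:
                            best_dt, colliding = dt, [(i, j)]
                        elif abs(dt - best_dt) <= 1e-12:
                            colliding.append((i, j))
                if best_dt is None or t + best_dt >= duration:
                    remaining = duration - t              # no more collisions this round
                    pos = [(p + v * remaining) % 1.0 for p, v in zip(pos, vel)]
                    break
                pos = [(p + v * best_dt) % 1.0 for p, v in zip(pos, vel)]
                t += best_dt
                for i, j in colliding:
                    pos[j] = pos[i]                       # snap to a common collision point
                    vel[i], vel[j] = -vel[i], -vel[j]     # both bounce back
                    for a in (i, j):
                        if first_hit[a] is None:
                            first_hit[a] = t
            return pos, first_hit

        # Example: agents 0 and 1 move towards each other, agent 2 moves away.
        final_pos, hits = simulate_round([0.1, 0.3, 0.7], [+1, -1, -1])
        print(final_pos, hits)

    In this toy model, the weaker feedback is the circular distance between an agent's initial and final position, while the stronger feedback is the distance to its first collision, i.e. speed times the corresponding entry of first_hit.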

    Identification with Imperfect Instruments

    Dealing with endogenous regressors is a central challenge of applied research. The standard solution is to use instrumental variables that are assumed to be uncorrelated with unobservables. We instead assume (i) that the correlation between the instrument and the error term has the same sign as the correlation between the endogenous regressor and the error term, and (ii) that the instrument is less correlated with the error term than is the endogenous regressor. Using these assumptions, we derive analytic bounds for the parameters. We demonstrate the method in two applications.
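
    A tiny simulation (not from the paper; the data-generating process and all numbers are invented) shows the kind of setting assumptions (i) and (ii) describe: the instrument z is correlated with the error in the same direction as the endogenous regressor x, but more weakly, so both OLS and IV with z are inconsistent, and the paper's contribution is to turn these sign and relative-magnitude restrictions into analytic bounds on the parameter rather than a point estimate.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        beta = 2.0

        u = rng.normal(size=n)                        # unobserved error
        e = rng.normal(size=n)
        x = e + 0.5 * u                               # endogenous regressor: corr(x, u) > 0
        z = 0.6 * e + 0.1 * u + rng.normal(size=n)    # imperfect instrument: corr(z, u) > 0 but smaller
        y = beta * x + u

        # Assumptions (i) and (ii) hold in this DGP: same sign, smaller magnitude.
        corr = lambda a, b: np.corrcoef(a, b)[0, 1]
        print(f"corr(x,u) = {corr(x, u):.2f}   corr(z,u) = {corr(z, u):.2f}")

        beta_ols = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]   # biased upward, since corr(x, u) > 0
        beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]    # also inconsistent: z is not a valid instrument

        print(f"true beta: {beta:.3f}   OLS: {beta_ols:.3f}   IV with z: {beta_iv:.3f}")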

    Compactness results for immersions of prescribed Gaussian curvature I - analytic aspects

    We extend recent results of Guan and Spruck, proving existence results for constant Gaussian curvature hypersurfaces in Hadamard manifolds. Comment: New title. Previously, "Constant Gaussian Curvature Hypersurfaces in Hadamard Manifolds". Mistakes corrected. Introduction revised to reflect recent developments in the field.

    Optimal Transport and Ricci Curvature: Wasserstein Space Over the Interval

    In this essay, we discuss the notion of optimal transport on geodesic measure spaces and the associated (2-)Wasserstein distance. We then examine displacement convexity of the entropy functional on the space of probability measures. In particular, we give a detailed proof that the Lott-Villani-Sturm notion of generalized Ricci bounds agrees with the classical notion on smooth manifolds. We also give the proof that generalized Ricci bounds are preserved under Gromov-Hausdorff convergence. In particular, we examine in detail the space of probability measures over the interval, P(X), equipped with the Wasserstein metric d^W. We show that this metric space is isometric to a totally convex subset of a Hilbert space, L^2[0,1], which allows for concrete calculations, contrary to the usual state of affairs in the theory of optimal transport. We prove explicitly that (P(X), d^W) has vanishing Alexandrov curvature, and give an easy-to-work-with expression for the entropy functional on this space. In addition, we examine finite-dimensional Gromov-Hausdorff approximations to this space, and use these to construct a measure on the limit space, the entropic measure first considered by von Renesse and Sturm. We examine properties of the measure, in particular explaining why one would expect it to have generalized Ricci lower bounds. We then show that this is in fact not true. We also discuss the possibility and consequences of finding a different measure which does admit generalized Ricci lower bounds. Comment: 47 pages, 9 figures.
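
    The isometry mentioned in the abstract comes from representing a measure on the interval by its quantile function (inverse CDF), which embeds (P(X), d^W) into L^2[0,1]. For two empirical measures with the same number of equally weighted atoms, the optimal coupling on the line is monotone, so the distance reduces to matching sorted atoms. A minimal sketch (names invented, not code from the essay):

        import numpy as np

        def w2_interval(x_atoms, y_atoms):
            """2-Wasserstein distance between two empirical measures on an interval,
            each given as an array of equally weighted atoms of the same length.
            On the line the optimal coupling is monotone, so W_2 is the L^2 distance
            between the (discretised) quantile functions, i.e. between sorted atoms."""
            x = np.sort(np.asarray(x_atoms, dtype=float))
            y = np.sort(np.asarray(y_atoms, dtype=float))
            return np.sqrt(np.mean((x - y) ** 2))

        # Uniform measure on [0, 1] versus the law of U^2 (mass pushed towards 0),
        # both discretised by 1000 equally weighted atoms placed at quantile levels.
        q = (np.arange(1000) + 0.5) / 1000
        print(w2_interval(q, q ** 2))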

    A Primal-Dual Convergence Analysis of Boosting

    Boosting combines weak learners into a predictor with low empirical risk. Its dual constructs a high-entropy distribution over which weak learners and training labels are uncorrelated. This manuscript studies this primal-dual relationship under a broad family of losses, including the exponential loss of AdaBoost and the logistic loss, revealing:
    - Weak learnability aids the whole loss family: for any ε > 0, O(ln(1/ε)) iterations suffice to produce a predictor with empirical risk ε-close to the infimum;
    - The circumstances granting the existence of an empirical risk minimizer may be characterized in terms of the primal and dual problems, yielding a new proof of the known rate O(ln(1/ε));
    - Arbitrary instances may be decomposed into the above two, granting rate O(1/ε), with a matching lower bound provided for the logistic loss.
    Comment: 40 pages, 8 figures; the NIPS 2011 submission "The Fast Convergence of Boosting" is a brief presentation of the primary results; compared with the JMLR version, this arXiv version has hyperref and some formatting tweaks.
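
    As a concrete point of reference for the primal objective studied here, the sketch below runs plain AdaBoost (greedy coordinate descent on the exponential loss over axis-aligned decision stumps) on invented toy data. It illustrates the algorithm whose convergence is being analysed, not the paper's analysis; all names, constants, and the dataset are ours.

        import numpy as np

        def stump(X, feature, threshold, polarity):
            """Decision stump: +1 on one side of the threshold, -1 on the other."""
            return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

        def adaboost(X, y, rounds=30):
            """Greedy coordinate descent on the exponential loss over all stumps."""
            n = len(y)
            w = np.full(n, 1.0 / n)                      # dual-style distribution over examples
            ensemble = []
            for _ in range(rounds):
                best = None
                for f in range(X.shape[1]):
                    for thr in np.unique(X[:, f]):
                        for pol in (1.0, -1.0):
                            err = np.sum(w * (stump(X, f, thr, pol) != y))
                            if best is None or err < best[0]:
                                best = (err, f, thr, pol)
                err, f, thr, pol = best
                err = min(max(err, 1e-12), 1.0 - 1e-12)
                alpha = 0.5 * np.log((1.0 - err) / err)  # closed-form line search
                w *= np.exp(-alpha * y * stump(X, f, thr, pol))
                w /= w.sum()                             # renormalise the distribution
                ensemble.append((alpha, f, thr, pol))
            return ensemble

        def predict(ensemble, X):
            return np.sign(sum(a * stump(X, f, t, p) for a, f, t, p in ensemble))

        # Toy data: labels given by the sign of x0 + x1, a diagonal boundary that no
        # single axis-aligned stump matches but a weighted vote of stumps approximates.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(300, 2))
        y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
        model = adaboost(X, y)
        print("training accuracy:", np.mean(predict(model, X) == y))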