
    Delays and the Capacity of Continuous-time Channels

    Any physical channel of communication offers two potential reasons why its capacity (the number of bits it can transmit in a unit of time) might be unbounded: (1) infinitely many choices of signal strength at any given instant of time, and (2) infinitely many instances of time at which signals may be sent. However, channel noise cancels out the potential unboundedness of the first aspect, leaving typical channels with only a finite capacity per instant of time. The latter source of infinity seems less studied. A potential source of unreliability that might restrict the capacity from the second aspect as well is delay: signals transmitted by the sender at a given point of time may reach the receiver only after an unpredictable delay. Here we examine this source of uncertainty by considering a simple discrete model of delay errors. In our model the communicating parties get to subdivide time as microscopically finely as they wish, but still have to cope with communication delays that are macroscopic and variable. The continuous process becomes the limit of our process as the time subdivision becomes infinitesimal. We taxonomize this class of communication channels based on whether the delays and noise are stochastic or adversarial, and on how much information each aspect has about the other when introducing its errors. We analyze the limits of such channels and reach somewhat surprising conclusions: the capacity of a physical channel is finitely bounded only if at least one of the two sources of error (signal noise or delay noise) is adversarial. In particular, the capacity is finitely bounded only if the delay is adversarial, or the noise is adversarial and acts with knowledge of the stochastic delay. If both error sources are stochastic, or if the noise is adversarial and independent of the stochastic delay, then the capacity of the associated physical channel is infinite.
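    A minimal Python sketch of the discrete delay model described in the abstract, with stochastic delay and stochastic noise. The function name and all parameters (micro-slot count, delay bound, flip probability) are illustrative assumptions, not taken from the paper:

```python
import random

def transmit(bits, micro_slots_per_unit=100, max_delay_units=2, flip_prob=0.01, seed=0):
    """Toy simulation of the discrete delay-channel model (illustrative only).

    The sender places one bit per micro-slot; each bit is delivered after a
    random macroscopic delay (a whole number of time units), and the channel
    may independently flip it. Both error sources are stochastic here.
    """
    rng = random.Random(seed)
    received = {}  # arrival micro-slot -> list of bits landing there
    for slot, bit in enumerate(bits):
        delay = rng.randint(0, max_delay_units) * micro_slots_per_unit  # macroscopic, variable delay
        noisy_bit = bit ^ (rng.random() < flip_prob)                    # stochastic signal noise
        received.setdefault(slot + delay, []).append(noisy_bit)
    return received

# Example: 10 bits sent in consecutive micro-slots; arrival times need not preserve the send order.
print(transmit([1, 0, 1, 1, 0, 0, 1, 0, 1, 1]))
```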

    Streaming Lower Bounds for Approximating MAX-CUT

    We consider the problem of estimating the value of max cut in a graph in the streaming model of computation. At one extreme, there is a trivial $2$-approximation for this problem that uses only $O(\log n)$ space, namely, count the number of edges and output half of this value as the estimate for the max cut value. At the other extreme, if one allows $\tilde{O}(n)$ space, then a near-optimal solution to the max cut value can be obtained by storing an $\tilde{O}(n)$-size sparsifier that essentially preserves the max cut. An intriguing question is whether poly-logarithmic space suffices to obtain a non-trivial approximation to the max-cut value (that is, beating the factor $2$). It was recently shown that the problem of estimating the size of a maximum matching in a graph admits a non-trivial approximation in poly-logarithmic space. Our main result is that any streaming algorithm that breaks the $2$-approximation barrier requires $\tilde{\Omega}(\sqrt{n})$ space, even if the edges of the input graph are presented in random order. Our result is obtained by exhibiting a distribution over graphs which are either bipartite or $\frac{1}{2}$-far from being bipartite, and establishing that $\tilde{\Omega}(\sqrt{n})$ space is necessary to differentiate between these two cases. Thus, as a direct corollary, we obtain that $\tilde{\Omega}(\sqrt{n})$ space is also necessary to test whether a graph is bipartite or $\frac{1}{2}$-far from being bipartite. We also show that for any $\epsilon > 0$, any streaming algorithm that obtains a $(1 + \epsilon)$-approximation to the max cut value when edges arrive in adversarial order requires $n^{1 - O(\epsilon)}$ space, implying that $\Omega(n)$ space is necessary to obtain an arbitrarily good approximation to the max cut value.
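    The trivial $2$-approximation mentioned at the start of the abstract is simple enough to sketch. The Python function below is an illustrative rendering (the name and interface are assumptions, not from the paper):

```python
def maxcut_2_approx(edge_stream):
    """Trivial streaming 2-approximation: count edges, output half.

    Every graph has a cut containing at least half of its edges (a uniformly
    random bipartition cuts each edge with probability 1/2), and no cut can
    exceed the total edge count m, so m/2 is within a factor 2 of the optimum.
    The only state kept is the counter, i.e. O(log n) bits.
    """
    m = 0
    for _ in edge_stream:  # single pass; edges may arrive in any order
        m += 1
    return m / 2

# Example: a 4-cycle has max cut value 4; the estimator returns 2.
print(maxcut_2_approx([(1, 2), (2, 3), (3, 4), (4, 1)]))
```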