
    List decoding group homomorphisms between supersolvable groups

    We show that the set of homomorphisms between two supersolvable groups can be locally list decoded up to the minimum distance of the code, extending the results of Dinur et al., who studied the case where the groups are abelian. Moreover, when specialized to the abelian case, our proof is more streamlined and gives a better constant in the exponent of the list size. The constant is improved from about 3.5 million to 105.
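    To make the coding-theoretic setting concrete, here is a toy sketch of the simplest abelian special case (an illustration of the problem statement only, not the paper's local algorithm): homomorphisms from Z_2^k to Z_2 are exactly the linear maps, their truth tables form the Hadamard code of relative minimum distance 1/2, and list decoding means returning every homomorphism within a given radius of a received word.

        from itertools import product

        def hadamard_list_decode(received, k, radius_frac):
            """received: dict mapping each x in {0,1}^k to a bit.
            Returns all a in {0,1}^k whose linear map x -> <a,x> mod 2
            disagrees with `received` on at most a radius_frac fraction
            of all inputs."""
            points = list(product([0, 1], repeat=k))
            budget = radius_frac * len(points)
            decoded = []
            for a in points:  # a encodes the homomorphism x -> sum_i a_i x_i mod 2
                dist = sum(1 for x in points
                           if sum(ai * xi for ai, xi in zip(a, x)) % 2 != received[x])
                if dist <= budget:
                    decoded.append(a)
            return decoded

        # Corrupt the homomorphism x -> x_0 + x_2 (mod 2) on 3 of its 8 inputs,
        # then list decode at a radius just below the minimum distance 1/2.
        k = 3
        word = {x: (x[0] + x[2]) % 2 for x in product([0, 1], repeat=k)}
        for x in list(word)[:3]:
            word[x] ^= 1
        print(hadamard_list_decode(word, k, 0.45))  # a short list containing (1, 0, 1)

    The paper's algorithms achieve this locally, querying the received word in only a few positions, and for the much broader class of supersolvable groups.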

    Delays and the Capacity of Continuous-time Channels

    Any physical channel of communication offers two potential reasons why its capacity (the number of bits it can transmit in a unit of time) might be unbounded: (1) infinitely many choices of signal strength at any given instant of time, and (2) infinitely many instants of time at which signals may be sent. However, channel noise cancels out the potential unboundedness of the first aspect, leaving typical channels with only a finite capacity per instant of time. The latter source of infinity seems less studied. A potential source of unreliability that might restrict the capacity from the second aspect as well is delay: signals transmitted by the sender at a given point of time may arrive at the receiving end with an unpredictable delay. Here we examine this source of uncertainty by considering a simple discrete model of delay errors. In our model the communicating parties get to subdivide time as microscopically finely as they wish, but still have to cope with communication delays that are macroscopic and variable. The continuous process becomes the limit of our process as the time subdivision becomes infinitesimal. We taxonomize this class of communication channels based on whether the delays and the noise are stochastic or adversarial, and on how much information each aspect has about the other when introducing its errors. We analyze the limits of such channels and reach somewhat surprising conclusions: the capacity of a physical channel is finitely bounded only if at least one of the two sources of error (signal noise or delay noise) is adversarial. In particular, the capacity is finitely bounded only if the delay is adversarial, or the noise is adversarial and acts with knowledge of the stochastic delay. If both error sources are stochastic, or if the noise is adversarial and independent of the stochastic delay, then the capacity of the associated physical channel is infinite.
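    As a minimal illustration of the discrete model (the geometric delay distribution below is our own assumption; the paper treats both stochastic and adversarial delays), this sketch subdivides each unit of time into R micro-slots, lets the sender place pulses in chosen micro-slots, and delays every pulse by a random whole number of macroscopic time units, so pulses may arrive reordered or merged.

        import random

        def delay_channel(sent_slots, R, p=0.5):
            """sent_slots: micro-slot indices holding a pulse (slot width 1/R).
            Each pulse is independently delayed by G macroscopic time units,
            G ~ Geometric(p) - 1, i.e. by G*R micro-slots (our assumed
            noise-free, stochastic-delay instance of the model).
            Returns the pulse count observed in each micro-slot."""
            horizon = max(sent_slots) + 10 * R + 1
            received = [0] * horizon
            for s in sent_slots:
                g = 0
                while random.random() > p:  # sample G ~ Geometric(p) - 1
                    g += 1
                arrival = s + g * R         # delays are macroscopic: multiples of R slots
                if arrival < horizon:
                    received[arrival] += 1  # simultaneous arrivals merge
            return received

        # R = 4 micro-slots per time unit; pulses sent in micro-slots 0, 1 and 6.
        random.seed(1)
        print(delay_channel([0, 1, 6], R=4))

    In this stochastic-delay, noise-free regime the paper's result says the capacity is in fact unbounded as the subdivision R grows; a finite capacity requires an adversarial error source.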

    Patterns hidden from simple algorithms

    Is the number 9021960864034418159813 random? Educated opinions might vary from “No! No single string can be random,” to the more contemptuous “Come on! Those are just the 714th to 733rd digits of π.” Yet, to my limited mind, the string did appear random. Is there a way to use some formal mathematics to justify my naïveté? The modern theory of pseudorandomness indeed manages to explain such phenomena, where strings appear random to simple minds. The key, this theory argues, is that randomness is really in the “eyes of the beholder,” or rather in the computing power of the tester of randomness. More things appear random to simpler, resource-limited algorithms than to complex, powerful ones.
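    For a concrete, if toy, instance of a "simple mind", the sketch below runs a digit-frequency (chi-squared) test on the first 1000 decimal digits of π and on 1000 uniformly random digits; this weak tester typically cannot tell the two apart, while a tester powerful enough to compute π distinguishes them immediately. The use of the mpmath library is an assumption of this sketch.

        import random
        from mpmath import mp

        def chi2_digit_stat(digits):
            """Chi-squared statistic of the 10 digit frequencies vs. uniform."""
            expected = len(digits) / 10
            return sum((digits.count(str(d)) - expected) ** 2 / expected
                       for d in range(10))

        mp.dps = 1010                      # enough precision for 1000 digits
        pi_digits = str(mp.pi)[2:1002]     # first 1000 digits after "3."
        rng_digits = ''.join(random.choice('0123456789') for _ in range(1000))

        # Both statistics are typically small (well below the ~16.9 threshold
        # at the 5% level for 9 degrees of freedom): this frequency tester
        # sees no pattern in either string.
        print('pi    :', round(chi2_digit_stat(pi_digits), 2))
        print('random:', round(chi2_digit_stat(rng_digits), 2))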

    Communication Complexity of Permutation-Invariant Functions

    Motivated by the quest for a broader understanding of communication complexity of simple functions, we introduce the class of "permutation-invariant" functions. A partial function $f:\{0,1\}^n \times \{0,1\}^n \to \{0,1,?\}$ is permutation-invariant if for every bijection $\pi:\{1,\ldots,n\} \to \{1,\ldots,n\}$ and every $\mathbf{x}, \mathbf{y} \in \{0,1\}^n$, it is the case that $f(\mathbf{x}, \mathbf{y}) = f(\mathbf{x}^{\pi}, \mathbf{y}^{\pi})$. Most of the commonly studied functions in communication complexity are permutation-invariant. For such functions, we present a simple complexity measure (computable in time polynomial in $n$ given an implicit description of $f$) that describes their communication complexity up to polynomial factors and up to an additive error that is logarithmic in the input size. This gives a coarse taxonomy of the communication complexity of simple functions. Our work highlights the role of the well-known lower bounds for functions such as 'Set-Disjointness' and 'Indexing', while complementing them with the relatively lesser-known upper bounds for 'Gap-Inner-Product' (from the sketching literature) and 'Sparse-Gap-Inner-Product' (from the recent work of Canonne et al. [ITCS 2015]). We also present consequences for the study of communication complexity with imperfectly shared randomness, where we show that for total permutation-invariant functions, imperfectly shared randomness results in only a polynomial blow-up in communication complexity after an additive $O(\log \log n)$ overhead.
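    The definition can be checked exhaustively for small $n$. A minimal sketch (our own illustration): Set-Disjointness passes, since renaming coordinates by any bijection π does not affect whether x and y share a 1-coordinate, while the hypothetical coordinate-dependent function below fails.

        from itertools import permutations, product

        def is_permutation_invariant(f, n):
            """True iff f(x, y) == f(x^pi, y^pi) for every bijection pi
            of {0,...,n-1} and all x, y in {0,1}^n."""
            for x, y in product(product([0, 1], repeat=n), repeat=2):
                fxy = f(x, y)
                for pi in permutations(range(n)):
                    xp = tuple(x[i] for i in pi)  # x with coordinates permuted
                    yp = tuple(y[i] for i in pi)  # same pi applied to y
                    if f(xp, yp) != fxy:
                        return False
            return True

        def set_disjointness(x, y):
            return 0 if any(a & b for a, b in zip(x, y)) else 1

        def indexing_like(x, y):  # depends on coordinate identity
            return x[0] & y[1]

        print(is_permutation_invariant(set_disjointness, 3))  # True
        print(is_permutation_invariant(indexing_like, 3))     # False

    Set-Disjointness is invariant because it depends only on the number of coordinates where both inputs are 1; indeed, every total permutation-invariant function depends only on such symmetric counts.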

    Optimal Error Rates for Interactive Coding I: Adaptivity and Other Settings

    We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error rates in a number of different settings. Most significantly, we explore adaptive interactive communication, where the communicating parties decide who should speak next based on the history of the interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): our adaptive coding scheme tolerates any error rate below 2/7, and we show that tolerating a higher error rate is impossible. We also show that in the setting of Franklin et al. [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communication, where each party outputs a constant-size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results the communication and computation are polynomially bounded. Most prior work considered coding schemes with a linear amount of communication while allowing unbounded computation. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We put forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications.