
    A Systematic Approach to Incremental Redundancy over Erasure Channels

    As sensing and instrumentation play an increasingly important role in systems controlled over wired and wireless networks, the need to better understand delay-sensitive communication becomes a prime issue. Along these lines, this article studies the operation of data links that employ incremental redundancy as a practical means to protect information from the effects of unreliable channels. Specifically, this work extends a powerful methodology termed sequential differential optimization to choose near-optimal block sizes for hybrid ARQ over erasure channels. In doing so, an interesting connection between random coding and well-known constants in number theory is established. Furthermore, results show that the impact of the coding strategy adopted and the propensity of the channel to erase symbols naturally decouple when analyzing throughput. Overall, block size selection is motivated by normal approximations on the probability of decoding success at every stage of the incremental transmission process. This novel perspective, which rigorously bridges hybrid ARQ and coding, offers a pragmatic means to select code rates and blocklengths for incremental redundancy.
    Comment: 7 pages, 2 figures; a shorter version of this article will appear in the proceedings of ISIT 201
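
    The block-size rule in the last two sentences can be made concrete with a toy model. The sketch below is an illustration under my own simplifying assumptions, not the paper's procedure: an idealized (MDS-like) code that decodes as soon as $k$ unerased symbols arrive over a memoryless erasure channel, with the binomial count of received symbols replaced by its normal approximation.

    ```python
    # Toy sketch: choose the first HARQ block size via a normal approximation
    # to the probability of decoding success. Assumes an idealized code that
    # decodes once k unerased symbols arrive (an assumption, not the paper's
    # exact model) over a memoryless channel with erasure probability eps.
    from math import erf, sqrt

    def success_prob(m: int, k: int, eps: float) -> float:
        """Normal approximation to P(at least k of m sent symbols survive)."""
        mu = m * (1 - eps)                    # mean number of received symbols
        sigma = sqrt(m * eps * (1 - eps))     # std. dev. of the binomial count
        if sigma == 0.0:                      # erasure-free channel edge case
            return 1.0 if mu >= k else 0.0
        z = (mu - k + 0.5) / sigma            # continuity correction
        return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

    def first_block_size(k: int, eps: float, target: float = 0.9) -> int:
        """Smallest initial block length whose approximate probability of
        decoding success reaches `target`; subsequent incremental-redundancy
        rounds would extend the block greedily in the same way."""
        m = k
        while success_prob(m, k, eps) < target:
            m += 1
        return m

    print(first_block_size(k=1000, eps=0.1))  # ~1125 symbols with these toy numbers
    ```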

    On an almost-universal hash function family with applications to authentication and secrecy codes

    Universal hashing, discovered by Carter and Wegman in 1979, has many important applications in computer science. MMH$^*$, which was shown to be $\Delta$-universal by Halevi and Krawczyk in 1997, is a well-known universal hash function family. We introduce a variant of MMH$^*$ that we call GRDH, where we use an arbitrary integer $n>1$ instead of a prime $p$ and let the keys $\mathbf{x}=\langle x_1, \ldots, x_k \rangle \in \mathbb{Z}_n^k$ satisfy the conditions $\gcd(x_i,n)=t_i$ ($1\leq i\leq k$), where $t_1,\ldots,t_k$ are given positive divisors of $n$. Then, by connecting the universal hashing problem to the number of solutions of restricted linear congruences, we prove that the family GRDH is an $\varepsilon$-almost-$\Delta$-universal family of hash functions for some $\varepsilon<1$ if and only if $n$ is odd and $\gcd(x_i,n)=t_i=1$ ($1\leq i\leq k$). Furthermore, if these conditions are satisfied, then GRDH is $\frac{1}{p-1}$-almost-$\Delta$-universal, where $p$ is the smallest prime divisor of $n$. Finally, as an application of our results, we propose an authentication code with a secrecy scheme that strongly generalizes the scheme studied by Alomair et al. [{\it J. Math. Cryptol.} {\bf 4} (2010), 121--148] and [{\it J.UCS} {\bf 15} (2009), 2937--2956].
    Comment: International Journal of Foundations of Computer Science, to appear
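
    As a concrete illustration of the construction being modified, here is a minimal sketch of the inner-product hash: with a prime modulus and unconstrained keys it is the familiar MMH$^*$, and with a composite modulus $n$ and keys obeying $\gcd(x_i,n)=t_i$ it matches the GRDH definition above. The key-sampling helper and the toy parameters are my own illustrative choices, not from the paper.

    ```python
    # Minimal sketch of the MMH^*/GRDH inner-product hash family.
    # sample_key and the toy parameters below are illustrative assumptions.
    import random
    from math import gcd

    def grdh_hash(key: list[int], msg: list[int], n: int) -> int:
        """Inner product of message and key, reduced mod n.
        With n prime and unconstrained keys this is exactly MMH^*."""
        assert len(key) == len(msg)
        return sum(x * m for x, m in zip(key, msg)) % n

    def sample_key(n: int, divisors: list[int]) -> list[int]:
        """Draw x_1..x_k with gcd(x_i, n) = t_i, as in the GRDH definition."""
        key = []
        for t in divisors:
            while True:
                x = random.randrange(1, n)
                if gcd(x, n) == t:
                    key.append(x)
                    break
        return key

    n, k = 15, 4                  # toy modulus (odd, composite) and key length
    key = sample_key(n, [1] * k)  # t_i = 1: the case where GRDH is
                                  # 1/(p-1)-almost-Delta-universal, here p = 3
    msg = [3, 7, 2, 11]
    print(grdh_hash(key, msg, n))
    ```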

    I Don't Want to Think About it Now: Decision Theory With Costly Computation

    Computation plays a major role in decision making. Even if an agent is willing to ascribe a probability to all states and a utility to all outcomes, and to maximize expected utility, doing so might present serious computational problems. Moreover, computing the outcome of a given act might be difficult. In a companion paper we develop a framework for game theory with costly computation, where the objects of choice are Turing machines. Here we apply that framework to decision theory. We show how well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on), belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions), and the status quo bias (people are much more likely to stick with what they already have) can be easily captured in that framework. Finally, we use the framework to define some new notions: value of computational information (a computational variant of value of information) and computational value of conversation.
    Comment: In Conference on Knowledge Representation and Reasoning (KR '10)
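
    A toy numerical example, far simpler than the paper's Turing-machine framework and purely illustrative, conveys the basic trade-off: once the utility of an act is charged for the computation needed to carry it out, a cheap heuristic, or even the status quo, can rationally beat careful deliberation.

    ```python
    # Toy model (my own illustration, not the paper's formalism): the agent
    # chooses a procedure, and its payoff is the value of the answer the
    # procedure produces minus the cost of the computation it performs.
    from dataclasses import dataclass

    @dataclass
    class Procedure:
        name: str
        answer_value: float   # expected utility of the answer produced
        compute_cost: float   # disutility of running it (time, effort)

    def choose(procedures: list[Procedure]) -> Procedure:
        """Maximize expected utility net of computation cost."""
        return max(procedures, key=lambda p: p.answer_value - p.compute_cost)

    options = [
        Procedure("deliberate carefully", answer_value=10.0, compute_cost=7.0),
        Procedure("quick heuristic", answer_value=6.0, compute_cost=1.0),
        Procedure("status quo (no computation)", answer_value=4.0, compute_cost=0.0),
    ]
    # 6 - 1 = 5 beats both 10 - 7 = 3 and 4 - 0 = 4, so not thinking hard
    # about it now is the rational choice in this toy instance.
    print(choose(options).name)   # -> quick heuristic
    ```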

    Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity

    Given an $n$-length input signal $\mathbf{x}$, it is well known that its Discrete Fourier Transform (DFT), $\mathbf{X}$, can be computed in $O(n \log n)$ complexity using a Fast Fourier Transform (FFT). If the spectrum $\mathbf{X}$ is exactly $k$-sparse (where $k \ll n$), can we do better? We show that asymptotically in $k$ and $n$, when $k$ is sub-linear in $n$ (precisely, $k \propto n^{\delta}$ where $0 < \delta < 1$) and the support of the non-zero DFT coefficients is uniformly random, we can exploit this sparsity in two fundamental ways: (i) {\bf sample complexity}: we need only $M=rk$ deterministically chosen samples of the input signal $\mathbf{x}$ (where $r < 4$ when $0 < \delta < 0.99$); and (ii) {\bf computational complexity}: we can reliably compute the DFT $\mathbf{X}$ using $O(k \log k)$ operations, where the constants in the big Oh are small and are related to the constants involved in computing a small number of DFTs of length approximately equal to the sparsity parameter $k$. Our algorithm succeeds with high probability, with the probability of failure vanishing to zero asymptotically in the number of samples acquired, $M$.
    Comment: 36 pages, 15 figures. To be presented at ISIT-2013, Istanbul, Turkey
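
    A structural fact underlying such algorithms is that subsampling in time aliases the spectrum: an $m$-point FFT of every $d$-th sample (with $n = dm$) sees each of its bins as $\frac{1}{d}$ times the sum of the $d$ original DFT coefficients congruent to it modulo $m$, so a $k$-sparse spectrum with random support leaves most aliased bins holding a single coefficient. The numpy check below verifies this aliasing identity; it demonstrates the principle only, not the paper's full decoder.

    ```python
    # Numerical check: subsampling in time == aliasing (folding) in frequency.
    import numpy as np

    n, m = 1024, 16                 # full length and subsampled length (m | n)
    d = n // m
    rng = np.random.default_rng(0)

    # Build a k-sparse spectrum with uniformly random support, as in the abstract.
    k = 5
    X = np.zeros(n, dtype=complex)
    support = rng.choice(n, size=k, replace=False)
    X[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    x = np.fft.ifft(X)              # time-domain signal whose DFT is k-sparse

    Y = np.fft.fft(x[::d])          # m-point FFT of every d-th sample
    folded = X.reshape(d, m).sum(axis=0) / d   # bins summed mod m, scaled by 1/d
    print(np.allclose(Y, folded))   # True: each aliased bin is a folded sum
    ```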

    There is entanglement in the primes

    Large series of prime numbers can be superposed on a single quantum register and then analyzed in full parallelism. The construction of this Prime state is efficient, as it hinges on the use of a quantum version of any efficient primality test. We show that the Prime state turns out to be very entangled, as shown by the scaling properties of its purity, Rényi entropy and von Neumann entropy. An analytical approximation to these measures of entanglement can be obtained from the detailed analysis of the entanglement spectrum of the Prime state, which in turn produces new insights into the Hardy-Littlewood conjecture for the pairwise distribution of primes. The extension of these ideas to a Twin Prime state shows that this new state is even more entangled than the Prime state, obeying majorization relations. We further discuss the construction of quantum states that encompass relevant series of numbers, which opens the possibility of applying quantum computation to arithmetic in novel ways.
    Comment: 30 pages, 11 figures. Addition of two references and correction of typos
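
    For small registers, the Prime state is easy to simulate classically. The sketch below (my own illustration, assuming numpy and sympy are available) builds the uniform superposition over the primes below $2^n$ and computes the von Neumann entropy of a half-register bipartition from the squared Schmidt (singular) values of the reshaped amplitude vector.

    ```python
    # Classical simulation of the n-qubit Prime state and the entanglement
    # entropy of a half-register bipartition.
    import numpy as np
    from sympy import isprime

    n = 10                                   # qubits; primes p < 2^n
    dim = 2 ** n
    amps = np.array([1.0 if isprime(i) else 0.0 for i in range(dim)])
    amps /= np.linalg.norm(amps)             # uniform weight on the pi(2^n) primes

    # Split the register into its n//2 most and n - n//2 least significant
    # qubits; the singular values of the reshaped vector give the Schmidt
    # coefficients, whose squares are the reduced density matrix's spectrum.
    A = amps.reshape(2 ** (n // 2), 2 ** (n - n // 2))
    s = np.linalg.svd(A, compute_uv=False)
    probs = s**2
    probs = probs[probs > 1e-15]             # drop numerically zero weights
    entropy = -np.sum(probs * np.log2(probs))
    print(f"half-register von Neumann entropy: {entropy:.3f} bits")
    ```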