
    Opportunistic Error Correction for WLAN Applications

    The current error correction layer of IEEE 802.11a WLAN is designed for worst-case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution-adaptive ADC. The key idea of the proposed system is that the receiver chain processes only those packets that have encountered "good" channel conditions; all others are discarded. With this approach, around 2/3 of the energy consumption can be saved compared with the conventional IEEE 802.11a WLAN system under the same channel conditions and throughput.
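
    For readers unfamiliar with the rateless property that makes this discard-bad-packets strategy work, below is a minimal Python sketch of an LT-style Fountain encoder and peeling decoder. It is illustrative only: a uniform degree distribution stands in for the robust soliton distribution used in practice, and the paper's packet and ADC machinery is not modeled.

    ```python
    import random

    def lt_encode_symbol(blocks, seed):
        """Produce one Fountain-coded symbol: the XOR of a random subset
        of source blocks.  Any sufficiently large collection of such
        symbols lets the receiver recover the source, so symbols from
        "bad" channel realizations can simply be discarded."""
        rng = random.Random(seed)
        degree = rng.randint(1, len(blocks))     # toy degree choice
        chosen = rng.sample(range(len(blocks)), degree)
        value = 0
        for i in chosen:
            value ^= blocks[i]
        return chosen, value

    def lt_decode(n_blocks, symbols):
        """Peeling decoder: repeatedly find a symbol with exactly one
        unknown neighbor and resolve that source block."""
        decoded = [None] * n_blocks
        progress = True
        while progress:
            progress = False
            for chosen, value in symbols:
                unknown = [i for i in chosen if decoded[i] is None]
                if len(unknown) == 1:
                    acc = value
                    for i in chosen:
                        if decoded[i] is not None:
                            acc ^= decoded[i]
                    decoded[unknown[0]] = acc
                    progress = True
        return decoded

    blocks = [1, 0, 1, 1, 0, 1, 0, 0]
    symbols = [lt_encode_symbol(blocks, seed) for seed in range(30)]
    print(lt_decode(len(blocks), symbols) == blocks)  # True w.h.p.
    ```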

    Deterministic Rateless Codes for BSC

    A rateless code encodes a finite-length information word into an infinitely long codeword such that longer prefixes of the codeword can tolerate a larger fraction of errors. A rateless code achieves capacity for a family of channels if, for every channel in the family, reliable communication is obtained by a prefix of the code whose rate is arbitrarily close to the channel's capacity. As a result, a universal encoder can communicate over all channels in the family while simultaneously achieving optimal communication overhead. In this paper, we construct the first deterministic rateless code for the binary symmetric channel. Our code can be encoded and decoded in O(β) time per bit and in almost-logarithmic parallel time O(β log n), where β is any (arbitrarily slow) super-constant function. Furthermore, the error probability of our code is almost exponentially small, exp(−Ω(n/β)). Previous rateless codes are probabilistic (i.e., based on code ensembles), require polynomial time per bit for decoding, and have inferior asymptotic error probabilities. Our main technical contribution is a constructive proof of the existence of an infinite generating matrix, each of whose prefixes induces a weight distribution that approximates the expected weight distribution of a random linear code.
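
    The construction itself is involved, but the interface of a rateless linear code is easy to picture: codeword bit j is the GF(2) inner product of the message with row j of an infinite generating matrix, and the decoder simply takes a longer prefix when the channel is noisier. The sketch below uses pseudo-random rows purely for illustration; the paper's contribution is precisely a deterministic matrix whose prefixes mimic a random linear code's weight distribution.

    ```python
    from itertools import islice
    import numpy as np

    def rateless_bit_stream(msg_bits, seed=0):
        """Yield an unbounded stream of codeword bits: bit j is the
        GF(2) inner product of the message with row j of an (implicitly
        infinite) generating matrix.  Rows are pseudo-random here,
        which is exactly what the paper's deterministic construction
        avoids."""
        rng = np.random.default_rng(seed)
        x = np.asarray(msg_bits, dtype=np.uint8)
        while True:
            row = rng.integers(0, 2, size=x.size, dtype=np.uint8)
            yield int(row @ x & 1)

    # A BSC with crossover p supports rates up to 1 - h(p), so a k-bit
    # message needs a prefix of roughly k / (1 - h(p)) codeword bits.
    prefix = list(islice(rateless_bit_stream([1, 0, 1, 1]), 16))
    print(prefix)
    ```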

    Cross-Sender Bit-Mixing Coding

    Scheduling to avoid packet collisions is a long-standing challenge in networking, and has become even trickier in wireless networks with multiple senders and multiple receivers. In fact, researchers have proved that even perfect scheduling can only achieve R = O(1/ln N), where N is the number of nodes in the network and R is the medium utilization rate. Ideally, one would hope to achieve R = Θ(1) while avoiding all the complexities of scheduling. To this end, this paper proposes cross-sender bit-mixing coding (BMC), which does not rely on scheduling. Instead, users transmit simultaneously on suitably chosen slots, and the amount of overlap in different users' slots is controlled via coding. We prove that in all possible network topologies, using BMC enables us to achieve R = Θ(1). We also prove that the space and time complexities of BMC encoding/decoding are all low-order polynomials.
    Comment: Published in the International Conference on Information Processing in Sensor Networks (IPSN), 201
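
    As a back-of-the-envelope illustration of why unscheduled, overlapping transmissions can remain decodable, the toy sketch below (hypothetical, not the paper's BMC construction) has each sender transmit on a sparse pseudo-random subset of slots; pairwise overlaps then concentrate around density² × n_slots, the kind of controlled overlap that BMC absorbs via coding rather than scheduling.

    ```python
    import random

    def toy_slot_set(sender_id, n_slots, density, seed=0):
        """Hypothetical slot selection, not the paper's scheme: each
        sender transmits on a pseudo-random `density` fraction of slots."""
        rng = random.Random(seed * 1_000_003 + sender_id)
        return {s for s in range(n_slots) if rng.random() < density}

    n_slots, density = 10_000, 0.05
    senders = [toy_slot_set(i, n_slots, density) for i in range(20)]
    overlaps = [len(a & b)
                for i, a in enumerate(senders) for b in senders[i + 1:]]
    # Expected pairwise overlap is density**2 * n_slots = 25 slots here,
    # small relative to each sender's 500 transmission slots.
    print(sum(overlaps) / len(overlaps), max(overlaps))
    ```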

    Nonasymptotic coding-rate bounds for binary erasure channels with feedback

    We present nonasymptotic achievability and converse bounds on the maximum coding rate (for a fixed average error probability and a fixed average blocklength) of variable-length full-feedback (VLF) and variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC). For the VLF setup, the achievability bound relies on a scheme that maps each message onto a variable-length Huffman codeword and then repeats each bit of the codeword until it is received correctly. The converse bound is inspired by the meta-converse framework of Polyanskiy, Poor, and Verdú (2010) and relies on binary sequential hypothesis testing. For the case of zero error probability, our achievability and converse bounds match. For the VLSF case, we provide achievability bounds that exploit the following feature of the BEC: the decoder can assess the correctness of its estimate by verifying whether the chosen codeword is the only one compatible with the erasure pattern. One of these bounds is obtained by analyzing the performance of a variable-length extension of random linear fountain codes. The gap between the VLSF achievability bound and the VLF converse bound is significant when the number of messages is small: 23% for 8 messages on a BEC with erasure probability 0.5. The absence of a tight VLSF converse bound does not allow us to assess whether this gap is fundamental.
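
    The VLF achievability scheme described here is simple enough to simulate directly: with full feedback the sender repeats each codeword bit until it gets through, so each bit costs a Geometric(1 − ε) number of channel uses. The sketch below assumes a fixed codeword and erasure probability; the actual bound pairs this repetition strategy with a Huffman code matched to the message distribution.

    ```python
    import random

    def bec_repeat_bit(eps, rng):
        """Channel uses to deliver one bit over a BEC(eps) when the
        sender, informed by feedback, repeats until it is not erased."""
        uses = 1
        while rng.random() < eps:
            uses += 1
        return uses                  # Geometric(1 - eps) distributed

    def vlf_latency(codeword, eps, rng):
        """Total channel uses to deliver a codeword bit by bit."""
        return sum(bec_repeat_bit(eps, rng) for _ in codeword)

    rng = random.Random(0)
    eps, codeword = 0.5, [0, 1, 1]   # e.g. a 3-bit Huffman codeword
    trials = [vlf_latency(codeword, eps, rng) for _ in range(100_000)]
    print(sum(trials) / len(trials))  # ≈ len(codeword) / (1 - eps) = 6
    ```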

    Variable-Length Coding with Feedback: Finite-Length Codewords and Periodic Decoding

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of single-user memoryless channels. Recently, Polyanskiy et al. studied the benefit of variable-length feedback with termination (VLFT) codes in the non-asymptotic regime. In that work, achievability is based on an infinite-length random code, and decoding is attempted at every symbol. The coding-rate backoff from capacity due to channel dispersion is greatly reduced with feedback, allowing capacity to be approached with surprisingly small expected latency. This paper is mainly concerned with VLFT codes based on finite-length codes and decoding attempts only at certain specified decoding times. The penalties of using a finite blocklength N and a sequence of specified decoding times are studied. This paper shows that properly scaling N with the expected latency can achieve the same performance, up to constant terms, as with N = ∞. The penalty introduced by periodic decoding times is a term linear in the interval between decoding times; hence the performance approaches capacity as the expected latency grows, provided the interval between decoding times grows sub-linearly with the expected latency.
    Comment: 8 pages. A shortened version was submitted to ISIT 201
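
    One way to see why the periodic-decoding penalty is linear in the decoding interval (a reading of the abstract's claim, not the paper's exact statement): if the decoder would first succeed at time τ but may only stop at multiples of an interval I, then

    ```latex
    % Stopping time rounded up to the decoding grid:
    \ell(\tau) \;=\; \Big\lceil \tfrac{\tau}{I} \Big\rceil I \;\le\; \tau + I,
    \qquad
    \frac{\mathbb{E}[\ell]}{\mathbb{E}[\tau]} \;\le\; 1 + \frac{I}{\mathbb{E}[\tau]},
    ```

    so the relative overhead vanishes whenever I grows sub-linearly in the expected latency E[τ], matching the condition stated in the abstract for approaching capacity.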

    Upper-crustal seismic velocity heterogeneity as derived from a variety of P-wave sonic logs

    Sonic-log measurements provide detailed 1-D information on the distribution of elastic properties within the upper crystalline crust at scales from about one metre to several kilometres. Ten P-wave sonic logs from six upper-crustal drill sites in Europe and North America have been analysed for their second-order statistics. The penetrated lithological sequences comprise Archean volcanic sequences, Proterozoic mafic layered intrusions, and Precambrian to Phanerozoic gneisses and granites. Despite this variability in geological setting, tectonic history, and petrological composition, there are notable similarities between the various data sets: after removing a large-scale, deterministic component from the observed velocity-depth function, the residual velocity fluctuations of all data sets can be described by autocovariance functions corresponding to band-limited self-affine stochastic processes with quasi-Gaussian probability density functions. Depending on the maximum spatial wavelength present in the stochastic part of the data, the deterministic trend can be approximated either by a low-order polynomial best fit or by a moving average of the original sonic-log data. The choice of the trend has a significant impact on the correlation length and on the standard deviation of the residual stochastic component, but does not affect the Hurst number. For trends defined by low-order polynomial best fits, correlation lengths were found to range from 60 to 160 m, whereas for trends defined by a moving average the correlation lengths are dominated by the upper cut-off wavenumber of the corresponding filter. Regardless of the trend removed, the autocovariance functions of all data sets are characterised by low Hurst numbers of around 0.1-0.2, or equivalently by power spectra decaying as ∼ 1/k. A possible explanation of this statistical uniformity is that sonic-log fluctuations are more sensitive to the physical state, in particular to the distribution of cracks, than to the petrological composition of the probed rock.
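
    A minimal sketch of the analysis pipeline described here, assuming a regularly sampled log and the polynomial-trend option; the function name and defaults are illustrative, and the spectral relation assumes 1-D self-affine (fBm-like) residuals:

    ```python
    import numpy as np

    def hurst_from_sonic_log(velocity, depth_step, trend_order=3):
        """Detrend with a low-order polynomial best fit (one of the two
        trend choices in the abstract), then estimate the Hurst number
        from the power-law decay of the residuals' power spectrum."""
        z = np.arange(velocity.size) * depth_step
        trend = np.polyval(np.polyfit(z, velocity, trend_order), z)
        resid = velocity - trend
        power = np.abs(np.fft.rfft(resid)) ** 2
        k = np.fft.rfftfreq(resid.size, d=depth_step)
        slope, _ = np.polyfit(np.log(k[1:]), np.log(power[1:]), 1)
        # For a 1-D self-affine series the spectrum decays as
        # k**-(2H + 1); the paper reports H of about 0.1-0.2,
        # i.e. roughly a 1/k spectrum.
        return (-slope - 1) / 2
    ```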