Rethinking reliability for long-delay networks
Delay Tolerant Networking (DTN) is currently an open research area, driven by the interest of space agencies and companies in deploying Internet protocols for a space Internet. Recent years have therefore seen a growing number of DTN protocol proposals, such as Saratoga or LTP-T. However, the goal of these protocols is to push as much error-free data as possible during a short contact time rather than to perform reliable data transfer in the strict sense. Besides this, several research works have proposed efficient acknowledgment schemes based on the SNACK mechanism; these acknowledgment strategies, however, are not compliant with the DTN protocol principles. In this paper, we propose a novel reliability mechanism with an implicit acknowledgment strategy that can be used either within these new DTN proposals or in the context of multicast transport protocols. The proposal is based on a new erasure coding concept specifically designed to perform efficient reliable transfer over bi-directional links.
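The erasure coding idea underlying such reliability mechanisms can be illustrated with a deliberately minimal sketch: a block of k source packets is protected by a single XOR parity packet, which lets the receiver rebuild exactly one loss per block without any retransmission. This is our own toy illustration, not the paper's scheme (real proposals use far stronger codes), and all function names here are hypothetical.

```python
# Toy block-based erasure code: one XOR parity packet per block of k
# source packets; recovers at most one lost packet per block.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_block(packets: list[bytes]) -> bytes:
    """Return one parity packet: the XOR of all source packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Rebuild at most one missing packet from the parity packet."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        rebuilt = parity
        for p in received.values():
            rebuilt = xor_bytes(rebuilt, p)
        received[missing[0]] = rebuilt
    return received

# Example: block of k=3 packets, packet 1 is lost in transit.
block = [b"pkt0", b"pkt1", b"pkt2"]
parity = encode_block(block)
delivered = {0: block[0], 2: block[2]}   # packet 1 never arrives
restored = recover(delivered, parity, k=3)
print(restored[1])  # b'pkt1'
```

The point of the sketch is that the loss is repaired from redundancy already in flight, without the round trip that an acknowledgment-based retransmission would cost.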
Cokernels of random matrices satisfy the Cohen-Lenstra heuristics
Let A be an n by n random matrix with iid entries taken from the p-adic
integers or Z/NZ. Then under mild non-degeneracy conditions the cokernel of A
has a universal probability distribution. In particular, the p-part of the cokernel of an iid random matrix over the integers is distributed according to the Cohen-Lenstra measure, up to an exponentially small error.
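A quick empirical illustration of this kind of universality (our own sketch, not the paper's argument): for a uniform n x n matrix over F_2, the probability of being invertible, i.e. of having trivial cokernel mod 2, tends to the constant prod_{i>=1} (1 - 2^{-i}), roughly 0.2888, as n grows. The simulation below estimates that probability by Gaussian elimination over F_2 on bit-packed rows.

```python
# Estimate P(random n x n matrix over F_2 is invertible) by simulation.
import random

def rank_mod2(rows: list[int], n: int) -> int:
    """Rank over F_2 of an n x n 0/1 matrix, rows packed as integers."""
    rank = 0
    for col in range(n):
        bit = 1 << col
        pivot = next((i for i in range(rank, len(rows)) if rows[i] & bit), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] & bit:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

random.seed(0)
n, trials = 40, 2000
invertible = sum(
    rank_mod2([random.getrandbits(n) for _ in range(n)], n) == n
    for _ in range(trials)
)
frac = invertible / trials
print(frac)  # typically near 0.29 for large n
```

Already at n = 40 the empirical fraction sits close to the limiting constant, which is the square-matrix shadow of the universal cokernel behaviour described above.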
Tetrys: a versatile reliability mechanism
Currently, a burst of losses that cannot be masked by the reliability mechanism (an erasure code) must wait at least one RTT to be corrected, which is often unsatisfactory for real-time applications. The concepts introduced by network coding theory now make it possible to bridge the gap between partial and full reliability while moving away from the ARQ paradigm. This article presents a novel mechanism named Tetrys, one of whose key properties is the ability to rebuild lost packets within a configurable time that is independent of the RTT. To the best of our knowledge, this is the first time that the real-time properties of such a mechanism are stated and studied. Intuitively, the target applications are those requiring full reliability under delay constraints. It turns out that, at an equal redundancy ratio, applications such as VoIP and video-conferencing perform much better when their flows are protected by Tetrys than by classical FEC or H-ARQ mechanisms. After a brief recap of the key points of FEC and H-ARQ, we describe the principle of Tetrys and show how it can be deployed. Using a prototype, we compare the performance of FEC, H-ARQ and Tetrys from the application's point of view, using delay metrics and, in the case of VoIP, transmission quality (MOS).
Approximate Decoding Approaches for Network Coded Correlated Data
This paper considers a framework where data from correlated sources are
transmitted with the help of network coding in ad-hoc network topologies. The
correlated data are encoded independently at the sensors, and network coding is
employed at the intermediate nodes in order to improve the data delivery
performance. In such settings, we focus on the problem of reconstructing the
sources at the decoder when perfect decoding is not possible due to losses or
bandwidth bottlenecks. We first show that source data similarity can be
exploited at the decoder through a novel and simple approximate decoding
scheme. We analyze the influence of the network coding parameters, and in
particular the size of the finite coding fields, on the decoding performance.
We further determine the optimal field size that maximizes the expected
decoding performance as a trade-off between the information loss incurred by
limiting the resolution of the source data and the error probability in the
reconstructed data. Moreover, we show that the performance of approximate
decoding improves when the accuracy of the source model increases, even with
simple approximate decoding techniques. We provide illustrative examples of
possible applications of our algorithms in sensor networks and distributed
imaging. In both cases, the experimental results confirm the validity of our
analysis and demonstrate the benefits of our low-complexity solution for the
delivery of correlated data sources.
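The core idea of approximate decoding can be shown with a toy example of our own (not the authors' algorithm): two correlated sources are network-coded into a single packet over a prime field GF(p). With one equation for two unknowns, exact decoding is impossible, so the decoder exploits the similarity prior x1 ~ x2 and solves as if they were equal. The field size p and the coefficients below are illustrative assumptions.

```python
# Approximate decoding of one network-coded packet carrying two
# correlated source samples over GF(p), assuming the sources are equal.
p = 257                      # prime field size (toy choice)

def approx_decode(y: int, a1: int, a2: int) -> int:
    """Decode assuming x1 == x2: solve x = y / (a1 + a2) in GF(p)."""
    return (y * pow(a1 + a2, -1, p)) % p

x1, x2 = 104, 96             # correlated source samples (close values)
a1, a2 = 3, 5                # coding coefficients applied at the relay
y = (a1 * x1 + a2 * x2) % p  # the single coded packet that arrives

x_hat = approx_decode(y, a1, a2)
print(x_hat)  # 99, between the true samples 96 and 104
```

Note that the modular division only matches the real-valued weighted average when the combination divides evenly in the integers; otherwise wrap-around in the finite field produces large errors. That is exactly the field-size trade-off the abstract refers to: larger fields preserve more source resolution but change the error behaviour of the reconstruction.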
Stein's method and the rank distribution of random matrices over finite fields
With $Q_{q,n}$ the distribution of $n$ minus the rank of a matrix chosen
uniformly from the collection of all $n \times (n+m)$ matrices over the
finite field $\mathbb{F}_q$ of size $q \ge 2$, and $Q_q$ the distributional
limit of $Q_{q,n}$ as $n \to \infty$, we apply Stein's method to prove the
total variation bound
$$\frac{1}{8q^{n+m+1}} \le \|Q_{q,n} - Q_q\|_{TV} \le \frac{3}{q^{n+m+1}}.$$
In addition, we obtain similar sharp results for the rank distributions of
symmetric, symmetric with zero diagonal, skew symmetric, skew centrosymmetric
and Hermitian matrices.
Comment: Published at http://dx.doi.org/10.1214/13-AOP889 in the Annals of
Probability (http://www.imstat.org/aop/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
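The finite-$n$ rank distribution that the paper compares against its limit can be computed exactly from the classical counting formula: the number of $n \times n$ matrices of rank $r$ over $\mathbb{F}_q$ is $\prod_{i=0}^{r-1} (q^n - q^i)^2 / (q^r - q^i)$. The short check below (our illustration; it does not reproduce the Stein's-method bound itself) evaluates the formula with exact rational arithmetic and verifies that the counts sum to $q^{n^2}$.

```python
# Exact count of n x n matrices of rank r over the finite field F_q.
from fractions import Fraction

def count_rank(n: int, r: int, q: int) -> int:
    """Number of n x n rank-r matrices over F_q (exact integer)."""
    total = Fraction(1)
    for i in range(r):
        total *= Fraction((q**n - q**i) ** 2, q**r - q**i)
    assert total.denominator == 1     # the product is always an integer
    return total.numerator

n, q = 2, 2
counts = [count_rank(n, r, q) for r in range(n + 1)]
print(counts, sum(counts))  # [1, 9, 6] 16, and 16 == q**(n*n)
```

Dividing each count by $q^{n^2}$ gives the exact law of the corank $n - \operatorname{rank}$ at finite $n$, the quantity whose distance to its $n \to \infty$ limit the total variation bound above controls.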
On-the-fly erasure coding for real-time video applications
This paper introduces a robust point-to-point transmission scheme, Tetrys,
which relies on a novel on-the-fly erasure coding concept that reduces the
delay for recovering lost data at the receiver side. In current erasure coding
schemes, the packets that are not rebuilt at the receiver side are either lost
or delayed by at least one RTT before delivery to the application. The
present contribution aims at demonstrating that the Tetrys coding scheme can
fill the gap between real-time application requirements and full reliability.
Indeed, we show that in several cases, Tetrys can recover lost packets in less
than one RTT over lossy and best-effort networks. We also show that Tetrys
enables full reliability without compromising delay and, as a result,
significantly improves the performance of time-constrained applications. For
instance, our evaluations show that video-conferencing applications obtain a
PSNR gain of up to 7 dB compared to classical block-based erasure codes.
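The on-the-fly idea can be sketched in a few lines (our simplification, not the actual Tetrys code): every few source packets, the sender emits a repair packet built from all packets not yet acknowledged, so a receiver that lost one packet in the current window rebuilds it from the next repair packet, without waiting an RTT for a retransmission. Tetrys itself combines packets with coefficients drawn from a finite field; plain XOR, which repairs a single loss per window, keeps the sketch short. All names here are hypothetical.

```python
# Sliding-window (elastic) repair sketch: repair packet = XOR of all
# source packets not yet acknowledged; one loss per window is recoverable.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

window: dict[int, bytes] = {}        # sender state: un-acked source packets

def send_source(seq: int, payload: bytes) -> bytes:
    window[seq] = payload
    return payload

def send_repair() -> tuple[list[int], bytes]:
    """Repair packet covering every currently un-acked source packet."""
    combined = bytes(4)
    for p in window.values():
        combined = xor(combined, p)
    return sorted(window), combined

# Receiver side: packets 0 and 2 arrive, packet 1 is lost on the wire.
received: dict[int, bytes] = {}
for seq, data in [(0, b"aaaa"), (1, b"bbbb"), (2, b"cccc")]:
    pkt = send_source(seq, data)
    if seq != 1:                     # simulate loss of packet 1
        received[seq] = pkt

seqs, repair = send_repair()         # covers sequence numbers [0, 1, 2]
missing = [s for s in seqs if s not in received]
if len(missing) == 1:
    rebuilt = repair
    for s in seqs:
        if s in received:
            rebuilt = xor(rebuilt, received[s])
    received[missing[0]] = rebuilt

print(received[1])  # b'bbbb', recovered without retransmission
```

Unlike the fixed-block code, the encoding window here grows and shrinks with acknowledgments, which is what decouples the recovery delay from the RTT.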