Upper Bound Scalability on Achievable Rates of Batched Codes for Line Networks
The capacity of line networks with buffer size constraints is an open but
practically important problem. In this paper, the upper bound on the achievable
rate of a class of codes, called batched codes, is studied for line networks.
Batched codes enable a range of buffer size constraints, and are general enough
to include special coding schemes studied in the literature for line networks.
Existing works have characterized the achievable rates of batched codes for
several classes of parameter sets, but leave the cut-set bound as the best
existing general upper bound. In this paper, we provide upper bounds on the
achievable rates of batched codes as functions of line network length for these
parameter sets. Our upper bounds are tight in order of the network length
compared with the existing achievability results. Comment: 6 pages, 1 table
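The rate decay with network length that such bounds capture can be seen in a toy Monte Carlo sketch: with pure forwarding (no recoding at intermediate nodes), the fraction of a batch surviving L erasure links shrinks geometrically, which is why recoding matters on long networks. The batch size M, per-link loss probability eps, and network length L below are illustrative, not taken from the paper.

```python
import random

# Monte Carlo sketch: expected number of packets of a batch that survive
# a line network of L erasure links, each losing a packet w.p. eps.
# Intermediate nodes only forward here, so each packet survives all L
# links with probability (1 - eps) ** L. All parameters are hypothetical.
def surviving_packets(M=4, eps=0.2, L=10, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(M)
                     if all(rng.random() > eps for _ in range(L)))
    return total / trials
```

With pure forwarding the expected surviving fraction decays like (1 - eps)^L in the network length, so any scheme limited to forwarding falls far below the per-link capacity on long networks.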
Reliable Broadcast to A User Group with Limited Source Transmissions
In order to reduce the number of retransmissions and save power for the
source node, we propose a two-phase coded scheme to achieve reliable broadcast
from the source to a group of users with minimal source transmissions. In the
first phase, the information packets are encoded with a batched sparse (BATS)
code, and the coded packets are then broadcast by the source node until the file can be
cooperatively decoded by the user group. In the second phase, each user
broadcasts the re-encoded packets to its peers based on their respective
received packets from the first phase, so that the file can be decoded by each
individual user. The performance of the proposed scheme is analyzed and the
rank distribution at the moment of decoding is derived, which is used as input
for designing the optimal BATS code. Simulation results show that the proposed
scheme can reduce the total number of retransmissions compared with the
traditional single-phase broadcast with optimal erasure codes. Furthermore,
since a large number of transmissions are shifted from the source node to the
users, power consumption at the source node is significantly reduced. Comment: ICC 2015. arXiv admin note: substantial text overlap with
arXiv:1504.0446
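The first-phase stopping rule, transmit until the user group can cooperatively decode, can be sketched as checking when the union of all users' received coded packets reaches full rank. The sketch works over GF(2) for simplicity (BATS codes typically use larger fields), and the packet count K, loss rate eps, and group size are made-up parameters.

```python
import random

def gf2_rank(rows):
    """Rank of GF(2) row vectors, each packed into a Python int."""
    basis = {}                      # leading-bit position -> reduced row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in basis:
                basis[lead] = row
                break
            row ^= basis[lead]      # eliminate the leading bit
    return len(basis)

# Phase-one stopping rule: the group as a whole can decode once the
# union of received coefficient vectors spans all K dimensions.
def broadcasts_until_group_decodes(K=8, eps=0.3, users=2, seed=2):
    rng = random.Random(seed)
    received = []                   # union of all users' packets
    n = 0
    while gf2_rank(received) < K:
        n += 1
        pkt = rng.randrange(1, 1 << K)   # random nonzero coefficient vector
        if any(rng.random() > eps for _ in range(users)):
            received.append(pkt)         # at least one user got it
    return n
```

The second phase then only needs peer-to-peer re-encoding to spread what the group collectively holds, which is where the retransmission savings come from.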
Structured Random Linear Codes (SRLC): Bridging the Gap between Block and Convolutional Codes
Several types of AL-FEC (Application-Level FEC) codes for the Packet Erasure
Channel exist. Random Linear Codes (RLC), where redundancy packets consist of
random linear combinations of source packets over a certain finite field, are a
simple yet efficient coding technique, for instance massively used for Network
Coding applications. However, the price to pay is a high encoding and decoding
complexity, especially when working over large finite fields, which seriously
limits the number of packets in the encoding window. In contrast, structured block
codes have been designed for situations where the set of source packets is
known in advance, for instance with file transfer applications. Here the
encoding and decoding complexity is controlled, even for huge block sizes,
thanks to the sparse nature of the code and advanced decoding techniques that
exploit this sparseness (e.g., Structured Gaussian Elimination). But their
design also prevents their use in convolutional use-cases featuring an encoding
window that slides over a continuous set of incoming packets.
In this work we try to bridge the gap between these two code classes,
bringing some structure to RLC codes in order to enlarge the use-cases where
they can be efficiently used: in convolutional mode (like any RLC code), but also
in block mode with either tiny, medium or large block sizes. We also
demonstrate how to design compact signaling for these codes (for
encoder/decoder synchronization), which is an essential practical aspect. Comment: 7 pages, 12 figures
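The basic RLC operation the abstract describes, repair packets as random linear combinations of source packets, reduces to XORs over GF(2). The sketch below adds a hypothetical density parameter to hint at the sparse/structured direction; it is not the SRLC construction itself, just a minimal illustration.

```python
import random

# Minimal sketch of an RLC-style repair symbol over GF(2): a repair
# symbol is the XOR of source symbols selected by a random coefficient
# vector. The density parameter (hypothetical) keeps the vector sparse,
# which is one way to lower encode/decode cost.
def rlc_repair(source, density=0.5, rng=None):
    rng = rng or random.Random(0)
    coeffs = [1 if rng.random() < density else 0 for _ in source]
    if not any(coeffs):                       # avoid the useless all-zero row
        coeffs[rng.randrange(len(source))] = 1
    repair = 0
    for c, s in zip(coeffs, source):
        if c:
            repair ^= s                       # GF(2) addition is XOR
    return coeffs, repair
```

A dense vector (density near 1) gives the classic RLC behavior; a sparse one trades a slightly higher decoding-failure probability for much cheaper Gaussian elimination, which is the complexity gap the paper's structured design targets.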
Bolt: Accelerated Data Mining with Fast Vector Compression
Vectors of data are at the heart of machine learning and data mining.
Recently, vector quantization methods have shown great promise in reducing both
the time and space costs of operating on vectors. We introduce a vector
quantization algorithm that can compress vectors over 12x faster than existing
techniques while also accelerating approximate vector operations such as
distance and dot product computations by up to 10x. Because it can encode over
2GB of vectors per second, it makes vector quantization cheap enough to employ
in many more circumstances. For example, using our technique to compute
approximate dot products in a nested loop can multiply matrices faster than a
state-of-the-art BLAS implementation, even when our algorithm must first
compress the matrices.
In addition to showing the above speedups, we demonstrate that our approach
can accelerate nearest neighbor search and maximum inner product search by over
100x compared to floating point operations and up to 10x compared to other
vector quantization methods. Our approximate Euclidean distance and dot product
computations are not only faster than those of related algorithms with slower
encodings, but also faster than Hamming distance computations, which have
direct hardware support on the tested platforms. We also assess the errors of
our algorithm's approximate distances and dot products, and find that it is
competitive with existing, slower vector quantization algorithms. Comment: Research track paper at KDD 201
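The core trick behind this family of vector quantization methods is that, once database vectors are encoded as centroid indices, a dot product collapses into table lookups: the query precomputes its dot products against every centroid in each subspace. The tiny hand-made codebooks below are purely illustrative; Bolt itself learns its codebooks and further quantizes the lookup tables for speed.

```python
# Illustrative product-quantization-style approximate dot product.
# 2 subspaces of 2 dims each, 4 centroids per subspace (hand-made;
# real systems learn these with k-means over training vectors).
CODEBOOKS = [
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
]

def encode(v):
    """Replace each 2-dim subvector by the index of its nearest centroid."""
    codes = []
    for m, cb in enumerate(CODEBOOKS):
        sub = v[2 * m: 2 * m + 2]
        codes.append(min(range(len(cb)),
                         key=lambda i: sum((a - b) ** 2
                                           for a, b in zip(sub, cb[i]))))
    return codes

def dot_tables(query):
    """Precompute query-centroid dot products, one table per subspace."""
    return [[sum(q * c for q, c in zip(query[2 * m: 2 * m + 2], cent))
             for cent in cb]
            for m, cb in enumerate(CODEBOOKS)]

def approx_dot(codes, tables):
    # The multiply-adds of a full dot product collapse into lookups.
    return sum(tables[m][c] for m, c in enumerate(codes))
```

Because the tables depend only on the query, their cost is amortized over every encoded database vector, which is how scans beat even hardware-supported Hamming distance in the regime the abstract describes.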
V2X Content Distribution Based on Batched Network Coding with Distributed Scheduling
Content distribution is an application in intelligent transportation system
to assist vehicles in acquiring information such as digital maps and
entertainment materials. In this paper, we consider content distribution from a
single roadside infrastructure unit to a group of vehicles passing by it. To
combat the short connection time and the lossy channel quality, the downloaded
contents need to be further shared among vehicles after the initial
broadcasting phase. To this end, we propose a joint infrastructure-to-vehicle
(I2V) and vehicle-to-vehicle (V2V) communication scheme based on batched sparse
(BATS) coding to minimize the traffic overhead and reduce the total
transmission delay. In the I2V phase, the roadside unit (RSU) encodes the
original large-size file into a number of batches in a rateless manner, each
containing a fixed number of coded packets, and sequentially broadcasts them
during the I2V connection time. In the V2V phase, vehicles perform the network
coded cooperative sharing by re-encoding the received packets. We propose a
utility-based distributed algorithm to efficiently schedule the V2V cooperative
transmissions, hence reducing the transmission delay. A closed-form expression
for the expected rank distribution of the proposed content distribution scheme
is derived, which is used to design the optimal BATS code. The performance of
the proposed content distribution scheme is evaluated by extensive simulations
that consider multi-lane road and realistic vehicular traffic settings, and
shown to significantly outperform existing content distribution protocols. Comment: 12 pages and 9 figures
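The utility-based distributed scheduling idea can be caricatured as: in each V2V slot, let the vehicle whose broadcast would be innovative to the most neighbors transmit. The sketch below simplifies "innovative" to owning batches a neighbor lacks (real schemes compare batch ranks and channel conditions); the vehicle names and holdings are hypothetical.

```python
# Toy utility-based sender selection for the V2V sharing phase.
# holdings maps each vehicle to the set of batch ids it has recovered;
# a vehicle's utility is how many missing batches it could supply to
# its peers. Everything here is an illustrative simplification.
def pick_sender(holdings):
    def utility(v):
        return sum(len(holdings[v] - holdings[u])
                   for u in holdings if u != v)
    return max(holdings, key=utility)
```

A distributed version would have each vehicle compute its own utility from overheard state and back off proportionally, so the highest-utility vehicle tends to seize the channel without central coordination.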
On Achievable Rates of Line Networks with Generalized Batched Network Coding
To better understand the wireless network design with a large number of hops,
we investigate a line network formed by general discrete memoryless channels
(DMCs), which may not be identical. Our focus lies on Generalized Batched
Network Coding (GBNC) that encompasses most existing schemes as special cases
and achieves the min-cut upper bounds as the parameters batch size and inner
block length tend to infinity. The inner blocklength of GBNC provides upper
bounds on the required latency and buffer size at intermediate network nodes.
By employing a bottleneck status technique, we derive new upper bounds on the
achievable rates of GBNCs. These bounds are tighter than the min-cut bound for large
network lengths when the inner blocklength and batch size are small. For line
networks of canonical channels, certain upper bounds hold even with relaxed
inner blocklength constraints. Additionally, we employ a channel reduction
technique to generalize the existing achievability results for line networks
with identical DMCs to networks with non-identical DMCs. For line networks with
packet erasure channels, we refine both the upper bound and the coding scheme,
and showcase their proximity through numerical evaluations. Comment: This paper was presented in part at ISIT 2019 and 2020, and is
accepted by a JSAC special issue
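For intuition on the min-cut baseline these bounds are compared against: a line network of packet erasure channels has cut-set capacity equal to the rate of its worst single link, independent of the number of hops, and it is exactly this hop-independence that finite batch size and inner blocklength prevent real schemes from attaining. The link loss rates below are made up for illustration.

```python
# Min-cut of a line network of (possibly non-identical) packet erasure
# channels: the bottleneck is the single worst link, so the cut-set
# bound does not decrease with the number of hops.
def line_network_min_cut(erasure_probs):
    return min(1 - e for e in erasure_probs)
```

The paper's point is the gap below this quantity: with small batch size and inner blocklength, the achievable rate of GBNC falls strictly below the min-cut as the network length grows.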
Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of the discussion meeting issue 'Numerical algorithms for high-performance computational science'.
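As a concrete anchor for the functionality this abstract discusses, here is a minimal CSR (compressed sparse row) matrix-vector product, the kernel at the heart of most sparse iterative solvers; production exascale kernels differ mainly in parallelization, precision strategy, and data layout, not in this basic structure.

```python
# Minimal sequential sparse matrix-vector product in CSR layout:
# values  - nonzero entries, row by row
# col_idx - column index of each nonzero
# row_ptr - row_ptr[r]:row_ptr[r+1] delimits row r's nonzeros
def csr_matvec(values, col_idx, row_ptr, x):
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

On exascale nodes the outer loop is what gets distributed across threads and devices, and the irregular `col_idx` accesses are the latency-hiding challenge the paper alludes to.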