7 research outputs found
Resource Tuned Optimal Random Network Coding for Single Hop Multicast future 5G Networks
In optimal random network coding, the computation of coding coefficients and encoded packets has reduced complexity, the coefficients are chosen so that minimal transmission bandwidth suffices to deliver them to the destinations, and decoding can begin as soon as encoded packets start arriving at the destination, with lower computational complexity. In traditional random network coding, by contrast, decoding is possible only after all encoded packets have been received at the receiving nodes. Optimal random network coding also reduces the cost of computation. In this research work, the size of the coding coefficient matrix is determined by the layer size, which defines the number of symbols or packets involved in the coding process. The coefficient matrix elements are chosen to minimize the additions and multiplications performed during encoding and decoding, reducing computational complexity by introducing sparseness in the coding coefficients; this systematic sparseness yields a lower-triangular coefficient matrix, which also makes partial decoding possible. For optimal utility of computational resources, a windowing size tuned to the unoccupied resource budget (such as available memory) is used to set the size of the coefficient matrix.
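As a rough illustration (not the paper's actual scheme), the lower-triangular sparse-coefficient idea can be sketched over GF(2), where coding arithmetic reduces to XOR; the field choice, sparsity density, and all function names below are placeholder assumptions:

```python
import random

# Illustrative sketch only: GF(2) arithmetic (XOR), the fixed sparsity
# density, and the function names are assumptions, not the paper's design.

def make_lower_triangular_coeffs(n, density=0.3, seed=0):
    """Sparse lower-triangular coefficient matrix with a unit diagonal,
    so the matrix is invertible and forward substitution always works."""
    rng = random.Random(seed)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        C[i][i] = 1
        for j in range(i):
            C[i][j] = 1 if rng.random() < density else 0
    return C

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(C, packets):
    """Encoded packet i combines source packets j <= i with C[i][j] = 1,
    so few coefficient bits and few XORs are needed per packet."""
    out = []
    for i in range(len(packets)):
        y = bytes(len(packets[0]))
        for j in range(i + 1):
            if C[i][j]:
                y = xor_bytes(y, packets[j])
        out.append(y)
    return out

def partial_decode(C, received):
    """Forward substitution: source packet i is recovered as soon as
    encoded packets 0..i have arrived -- no need to wait for all n."""
    decoded = []
    for i, y in enumerate(received):
        x = y
        for j in range(i):
            if C[i][j]:
                x = xor_bytes(x, decoded[j])
        decoded.append(x)
    return decoded
```

For example, with three source packets, `partial_decode(C, encode(C, packets)[:2])` already recovers the first two packets, which is the partial-decoding property enabled by the lower-triangular structure.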
Covering Grassmannian Codes: Bounds and Constructions
The Grassmannian G_q(n,k) is the set of all k-dimensional subspaces
of the vector space F_q^n. Recently, Etzion and Zhang introduced a
new notion called covering Grassmannian codes, which can be used in network
coding solutions for generalized combination networks. An
α-(n,k,δ)_q^c covering Grassmannian code is a
subset of G_q(n,k) such that every set of α codewords
spans a subspace of dimension at least δ+k in
F_q^n. In this paper, we derive new upper and lower bounds on the
size of covering Grassmannian codes. These bounds improve and extend the
parameter range of known bounds. Comment: 17 pages
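To make the covering condition concrete, here is a toy Python check over F_2 (q = 2); the parameter names alpha and delta and the bitmask representation of basis vectors are illustrative assumptions, not code from the paper:

```python
from itertools import combinations

# Toy verifier over F_2 (q = 2) of the covering property: every set of
# alpha codewords must jointly span dimension at least delta + k.

def rank_f2(vectors):
    """Rank over F_2 via the XOR-basis trick: each kept basis element
    has a distinct highest set bit, so one reduction pass suffices."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
    return len(basis)

def is_covering(code, alpha, delta, k):
    """code: list of k-subspaces, each given as a list of basis vectors
    encoded as bitmask ints.  Checks the covering condition for every
    alpha-subset of codewords."""
    return all(
        rank_f2([v for S in comb for v in S]) >= delta + k
        for comb in combinations(code, alpha)
    )
```

For instance, in F_2^4 two complementary 2-subspaces together span the whole space, so they satisfy the condition with alpha = 2, delta = 2, k = 2, while two 2-subspaces sharing a common nonzero vector span only dimension 3 and fail it.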
Subspace packings
The Grassmannian G_q(n,k) is the set of all k-dimensional subspaces of the vector space F_q^n. It is well known that codes in the Grassmannian space can be used for error-correction in random network coding. On the other hand, these codes are q-analogs of codes in the Johnson scheme, i.e., constant dimension codes. These codes of the Grassmannian G_q(n,k) also form a family of q-analogs of block designs, and they are called subspace designs. The application of subspace codes has motivated extensive work on the q-analogs of block designs. In this paper, we examine one of the last families of q-analogs of block designs which was not considered before. This family, called subspace packings, is the q-analog of packings. This family of designs was considered recently for a network coding solution for a family of multicast networks called the generalized combination networks. A subspace packing t-(n,k,λ)_q^m is a set S of k-subspaces from G_q(n,k) such that each t-subspace of G_q(n,t) is contained in at most λ elements of S. The goal of this work is to consider the largest size of such subspace packings.
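The packing condition can be illustrated with a small toy search over F_2 (q = 2, k = 2, t = 1, λ = 1); the greedy strategy below is a naive stand-in for illustration, not a construction from the paper:

```python
from itertools import combinations

# Tiny illustration over F_2 (q = 2, k = 2, t = 1, lambda = 1): since
# q = 2, every 1-subspace is {0, v} for one nonzero v, so the packing
# condition says each nonzero vector lies in at most lambda of the
# chosen 2-subspaces.  Vectors are encoded as bitmask ints.

def two_subspaces(n):
    """All 2-dimensional subspaces of F_2^n, each represented by the
    frozenset of its nonzero vectors {v1, v2, v1 ^ v2}."""
    subs = set()
    for v1, v2 in combinations(range(1, 1 << n), 2):
        subs.add(frozenset({v1, v2, v1 ^ v2}))
    return subs

def greedy_packing(n, lam=1):
    """Greedily pick 2-subspaces so each nonzero vector is covered at
    most lam times: a 1-(n, 2, lam)_2 subspace packing."""
    count = {v: 0 for v in range(1, 1 << n)}
    packing = []
    for S in sorted(two_subspaces(n), key=sorted):
        if all(count[v] < lam for v in S):
            packing.append(S)
            for v in S:
                count[v] += 1
    return packing
```

As a sanity check, the number of 2-subspaces of F_2^4 is the Gaussian binomial coefficient [4 choose 2]_2 = (2^4-1)(2^4-2)/((2^2-1)(2^2-2)) = 35, and every returned family satisfies the at-most-λ coverage condition by construction.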
Subspace Packings: Constructions and Bounds
The Grassmannian G_q(n,k) is the set of all k-dimensional
subspaces of the vector space F_q^n. Kötter and Kschischang
showed that codes in the Grassmannian space can be used for error-correction in
random network coding. On the other hand, these codes are q-analogs of codes
in the Johnson scheme, i.e., constant dimension codes. These codes of the
Grassmannian G_q(n,k) also form a family of q-analogs of block
designs, and they are called subspace designs. In this paper, we examine one of
the last families of q-analogs of block designs which was not considered
before. This family, called subspace packings, is the q-analog of packings,
and was considered recently for a network coding solution for a family of
multicast networks called the generalized combination networks. A subspace
packing t-(n,k,λ)_q^m is a set S of k-subspaces from
G_q(n,k) such that each t-subspace of G_q(n,t) is
contained in at most λ elements of S. The goal of this work
is to consider the largest size of such subspace packings. We derive a sequence
of lower and upper bounds on the maximum size of such packings, analyse these
bounds, and identify the important problems for further research in this
area. Comment: 30 pages, 27 tables, continuation of arXiv:1811.04611, typos
corrected