7 research outputs found

    Resource Tuned Optimal Random Network Coding for Single Hop Multicast Future 5G Networks

    Optimal random network coding reduces the complexity of computing coding coefficients and encoded packets. The coefficients are chosen so that minimal transmission bandwidth suffices to deliver them to the destinations, and decoding can begin as soon as encoded packets start arriving at a destination, with low computational complexity. In traditional random network coding, by contrast, decoding is possible only after all encoded packets have been received. Optimal random network coding also reduces the cost of computation. In this research work, the size of the coding coefficient matrix is determined by the layer size, which defines the number of symbols or packets involved in the coding process. The matrix elements are chosen to minimize the additions and multiplications performed during encoding and decoding, reducing computational complexity by introducing sparseness in the coding coefficients; this systematic sparseness yields a lower-triangular coefficient matrix, which also makes partial decoding possible. For optimal utilization of computational resources, a windowing size tuned to the unoccupied resource budget (such as available memory) defines the size of the coefficient matrix.
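The lower-triangular idea can be illustrated with a minimal Python sketch. This is not the paper's exact scheme (the field and coefficient distribution are not specified here); it uses GF(2), i.e. XOR arithmetic, and illustrative function names. Coded packet i mixes only source packets 0..i with a guaranteed 1 on the diagonal, so the matrix is invertible and the receiver can decode progressively by forward substitution, instead of waiting for all packets and running Gaussian elimination as in dense random linear network coding:

```python
import random

def xor(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, seed=1):
    """Encode with a sparse lower-triangular coefficient matrix over GF(2):
    row i has random bits in columns 0..i-1, a 1 on the diagonal, and
    zeros above it."""
    rng = random.Random(seed)
    n = len(packets)
    rows, coded = [], []
    for i in range(n):
        row = [rng.randint(0, 1) for _ in range(i)] + [1] + [0] * (n - i - 1)
        c = bytes(len(packets[0]))          # all-zero packet
        for j, bit in enumerate(row):
            if bit:
                c = xor(c, packets[j])
        rows.append(row)
        coded.append(c)
    return rows, coded

def decode_stream(rows, coded):
    """Forward substitution: packet i is recovered as soon as coded packet i
    arrives (partial decoding), using only previously recovered packets."""
    recovered = []
    for i, (row, c) in enumerate(zip(rows, coded)):
        for j in range(i):
            if row[j]:
                c = xor(c, recovered[j])
        recovered.append(c)
    return recovered

src = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
rows, coded = encode(src)
assert decode_stream(rows, coded) == src
```

Because the matrix is unit lower triangular, decoding packet i costs at most i XORs, and sparseness in the random bits reduces that further.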

    Covering Grassmannian Codes: Bounds and Constructions

    The Grassmannian $\mathcal{G}_q(n,k)$ is the set of all $k$-dimensional subspaces of the vector space $\mathbb{F}_q^n$. Recently, Etzion and Zhang introduced a new notion called the covering Grassmannian code, which can be used in network coding solutions for generalized combination networks. An $\alpha$-$(n,k,\delta)_q^c$ covering Grassmannian code $\mathcal{C}$ is a subset of $\mathcal{G}_q(n,k)$ such that every set of $\alpha$ codewords of $\mathcal{C}$ spans a subspace of dimension at least $\delta+k$ in $\mathbb{F}_q^n$. In this paper, we derive new upper and lower bounds on the size of covering Grassmannian codes. These bounds improve and extend the parameter range of known bounds. Comment: 17 pages

    Subspace packings

    The Grassmannian G_q(n,k) is the set of all k-dimensional subspaces of the vector space F_q^n. It is well known that codes in the Grassmannian space can be used for error-correction in random network coding. On the other hand, these codes are q-analogs of codes in the Johnson scheme, i.e. constant dimension codes. These codes of the Grassmannian G_q(n,k) also form a family of q-analogs of block designs, and they are called subspace designs. The application of subspace codes has motivated extensive work on the q-analogs of block designs. In this paper, we examine one of the last families of q-analogs of block designs which was not considered before. This family, called subspace packings, is the q-analog of packings. This family of designs was considered recently for a network coding solution for a family of multicast networks called the generalized combination networks. A subspace packing t-(n,k,λ)^m_q is a set S of k-subspaces from G_q(n,k) such that each t-subspace of G_q(n,t) is contained in at most λ elements of S. The goal of this work is to consider the largest size of such subspace packings.
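The defining condition can be checked directly for tiny parameters. The sketch below (illustrative names, q = 2, t = 1 only, vectors of F_2^n encoded as bitmasks) verifies that each 1-subspace, i.e. each nonzero vector, lies in at most λ members of a candidate set S:

```python
def span(basis):
    """All vectors (bitmasks over F_2) in the span of the given basis."""
    vecs = {0}
    for b in basis:
        vecs |= {v ^ b for v in vecs}   # adding b doubles the span
    return frozenset(vecs)

def is_packing(S, n, t, lam):
    """Check the t-(n,k,lambda)_q defining condition for q = 2, t = 1:
    every nonzero vector of F_2^n lies in at most lam members of S."""
    assert t == 1, "only t = 1 subspaces (points) handled in this sketch"
    for v in range(1, 2**n):
        if sum(v in U for U in S) > lam:
            return False
    return True

# Three pairwise trivially-intersecting 2-subspaces of F_2^4 (part of a spread):
S = [span([0b0001, 0b0010]), span([0b0100, 0b1000]), span([0b0101, 0b1010])]
print(is_packing(S, n=4, t=1, lam=1))  # True
```

Appending span([0b0001, 0b0100]) to S would put the vector 0b0001 in two members and violate the λ = 1 condition.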

    Subspace Packings: Constructions and Bounds

    The Grassmannian $\mathcal{G}_q(n,k)$ is the set of all $k$-dimensional subspaces of the vector space $\mathbb{F}_q^n$. Kötter and Kschischang showed that codes in Grassmannian space can be used for error-correction in random network coding. On the other hand, these codes are $q$-analogs of codes in the Johnson scheme, i.e., constant dimension codes. These codes of the Grassmannian $\mathcal{G}_q(n,k)$ also form a family of $q$-analogs of block designs, and they are called subspace designs. In this paper, we examine one of the last families of $q$-analogs of block designs which was not considered before. This family, called subspace packings, is the $q$-analog of packings, and was considered recently for a network coding solution for a family of multicast networks called the generalized combination networks. A subspace packing $t$-$(n,k,\lambda)_q$ is a set $\mathcal{S}$ of $k$-subspaces from $\mathcal{G}_q(n,k)$ such that each $t$-subspace of $\mathcal{G}_q(n,t)$ is contained in at most $\lambda$ elements of $\mathcal{S}$. The goal of this work is to consider the largest size of such subspace packings. We derive a sequence of lower and upper bounds on the maximum size of such packings, analyse these bounds, and identify the important problems for further research in this area. Comment: 30 pages, 27 tables, continuation of arXiv:1811.04611, typos corrected
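One of the simplest upper bounds in this family is the standard counting bound: each $k$-subspace contains $\binom{k}{t}_q$ $t$-subspaces, there are $\binom{n}{t}_q$ $t$-subspaces in total, and each may be used at most $\lambda$ times. A small Python sketch (not one of the paper's refined bounds, just the baseline they improve on):

```python
from math import prod  # Python 3.8+

def gauss(n, k, q):
    """Gaussian binomial [n choose k]_q: number of k-subspaces of F_q^n."""
    return (prod(q**(n - i) - 1 for i in range(k))
            // prod(q**(k - i) - 1 for i in range(k)))

def packing_counting_bound(n, k, t, lam, q):
    """Counting upper bound on the size of a t-(n,k,lambda)_q subspace
    packing: lambda * [n choose t]_q / [k choose t]_q, rounded down."""
    return lam * gauss(n, t, q) // gauss(k, t, q)

print(packing_counting_bound(n=4, k=2, t=1, lam=1, q=2))  # 5
```

For n = 4, k = 2, t = 1, λ = 1, q = 2 the bound equals 5, and it is attained by a spread of F_2^4 into five 2-subspaces.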