
    On the Derivative Imbalance and Ambiguity of Functions

    In 2007, Carlet and Ding introduced two parameters, denoted by $Nb_F$ and $NB_F$, quantifying respectively the balancedness of general functions $F$ between finite Abelian groups and the (global) balancedness of their derivatives $D_a F(x) = F(x+a) - F(x)$, $a \in G \setminus \{0\}$ (providing an indicator of the nonlinearity of the functions). These authors studied the properties and cryptographic significance of these two measures. For S-boxes, they provided inequalities relating the nonlinearity $\mathcal{NL}(F)$ to $NB_F$, and in particular obtained an upper bound on the nonlinearity which unifies the Sidelnikov-Chabaud-Vaudenay bound and the covering radius bound. At the Workshop WCC 2009 and in its post-proceedings in 2011, these parameters were studied further; in particular, the first parameter was applied to the functions $F+L$, where $L$ is affine, providing additional nonlinearity parameters. In 2010, motivated by the study of Costas arrays, two parameters called ambiguity and deficiency were introduced by Panario et al. for permutations over finite Abelian groups, measuring respectively the injectivity and surjectivity of the derivatives. These authors also studied some fundamental properties and the cryptographic significance of these two measures. Further studies followed, but the second pair of parameters was never compared with the first. In the present paper, we observe that ambiguity is the same parameter as $NB_F$, up to additive and multiplicative constants (i.e., up to rescaling). We carry out the necessary work of comparing and unifying the results on $NB_F$ and on ambiguity obtained in the five papers devoted to these parameters. We generalize some known results to arbitrary Abelian groups and, more importantly, derive many new results on these parameters.
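
    As a concrete handle on the derivative-counting underlying both $NB_F$ and ambiguity, the short sketch below (my own illustration, not code from any of the papers discussed) tabulates the counts $\delta_F(a,b) = \#\{x : F(x+a) - F(x) = b\}$ for a toy function on $\mathbb{Z}_n$; a derivative $D_aF$ is balanced exactly when its counts are uniform, and imbalance-type parameters aggregate the deviations from uniformity (the precise normalization used by Carlet and Ding is not reproduced here).

        # Hypothetical sketch: difference counts of a function F on the cyclic
        # group Z_n, the raw data behind derivative-imbalance/ambiguity parameters.
        from collections import Counter

        def difference_counts(F, n):
            """Return delta[(a, b)] = #{x in Z_n : F(x + a) - F(x) = b} for a != 0."""
            delta = Counter()
            for a in range(1, n):              # nonzero shifts a in G \ {0}
                for x in range(n):
                    b = (F((x + a) % n) - F(x)) % n
                    delta[(a, b)] += 1
            return delta

        if __name__ == "__main__":
            n = 16
            F = lambda x: (x * x * x) % n      # toy function on Z_16
            delta = difference_counts(F, n)
            # Uniform counts mean balanced derivatives; the largest count is a
            # crude indicator of how far F is from that ideal.
            print(max(delta.values()))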

    Joint Wyner-Ziv/Dirty Paper coding by modulo-lattice modulation

    The combination of source coding with decoder side-information (the Wyner-Ziv problem) and channel coding with encoder side-information (the Gel'fand-Pinsker problem) can be optimally solved using the separation principle. In this work we show an alternative scheme for the quadratic-Gaussian case, which merges source and channel coding. This scheme achieves the optimal performance by applying modulo-lattice modulation to the analog source. It thus saves the complexity of quantization and channel decoding, leaving only the task of "shaping". Furthermore, for high signal-to-noise ratio (SNR), the scheme approaches the optimal performance using an SNR-independent encoder, and is therefore robust to an unknown SNR at the encoder.
    Comment: Submitted to IEEE Transactions on Information Theory. Presented in part at ISIT 2006, Seattle. New version after review.
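
    As a rough, hedged illustration of the modulo-lattice idea (a one-dimensional toy of my own, not the paper's scheme and without its side information or MMSE scaling factors), the encoder below transmits a dithered, scaled source reduced modulo a scalar lattice, and the decoder inverts the dither and modulo operation; the point is only that quantization and channel decoding are replaced by modulo "shaping" operations.

        # Toy scalar modulo-lattice modulation (illustrative only).
        import numpy as np

        def mod_lattice(v, delta):
            # Reduce v into the fundamental cell [-delta/2, delta/2] of the lattice delta*Z.
            return v - delta * np.round(v / delta)

        rng = np.random.default_rng(0)
        delta, alpha = 6.0, 1.0                          # hypothetical cell size and scaling
        s = rng.normal(size=10_000)                      # analog source samples
        d = rng.uniform(-delta / 2, delta / 2, s.shape)  # dither known to both ends

        x = mod_lattice(alpha * s + d, delta)            # encoder: scale, dither, reduce
        y = x + 0.1 * rng.normal(size=s.shape)           # AWGN channel
        s_hat = mod_lattice(y - d, delta) / alpha        # decoder: undo dither, reduce

        print("mean squared error:", np.mean((s - s_hat) ** 2))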

    Constructing practical Fuzzy Extractors using QIM

    Fuzzy extractors are a powerful tool for extracting randomness from noisy data. A fuzzy extractor can extract randomness only if the source data is discrete, while in practice source data is continuous. Using quantizers to transform continuous data into discrete data is a commonly used solution. However, as far as we know, no study has been made of the effect of the quantization strategy on the performance of fuzzy extractors. We construct the encoding and decoding functions of a fuzzy extractor using quantization index modulation (QIM) and express the properties of this fuzzy extractor in terms of the parameters of the underlying QIM. We present and analyze an optimal (in the sense of embedding rate) two-dimensional construction. Our 6-hexagonal tiling construction offers $(\log_2 6)/2 - 1 \approx 0.3$ extra bits per dimension of the space compared to the known square-quantization-based fuzzy extractor.
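
    A minimal numeric sketch of the rate comparison (my own reading, assuming the square quantizer carries 2 cosets per dimension and the hexagonal tiling carries 6 cosets per two-dimensional tile), together with the basic scalar QIM step that such constructions generalize:

        # Hypothetical sketch: embedding rates of square vs. 6-coset hexagonal QIM.
        import math

        def qim_quantize(x, m, step=1.0, n_cosets=2):
            # Basic scalar QIM: snap x to the nearest point of the coset indexed by m.
            offset = m * step / n_cosets
            return round((x - offset) / step) * step + offset

        hex_rate = math.log2(6) / 2        # bits per dimension, 6-coset hexagonal tiling
        square_rate = math.log2(2)         # bits per dimension, 2-coset square quantizer
        print(f"extra bits per dimension: {hex_rate - square_rate:.3f}")  # ~0.292
        print(qim_quantize(2.7, m=1))      # example: embed symbol m=1 into the value 2.7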

    Density of Spherically-Embedded Stiefel and Grassmann Codes

    The density of a code is the fraction of the coding space covered by packing balls centered around the codewords. This paper investigates the density of codes in the complex Stiefel and Grassmann manifolds equipped with the chordal distance. The choice of distance enables the treatment of the manifolds as subspaces of Euclidean hyperspheres. In this geometry, the densest packings are not necessarily equivalent to maximum-minimum-distance codes. Computing a code's density reduces to computing: i) the normalized volume of a metric ball and ii) the kissing radius, the radius of the largest balls one can pack around the codewords without overlapping. First, the normalized volume of a metric ball is evaluated by asymptotic approximations. The volume of a small ball can be well-approximated by the volume of a locally-equivalent tangential ball. In order to properly normalize this approximation, the precise volumes of the manifolds induced by their spherical embedding are computed. For larger balls, a hyperspherical-cap approximation is used, justified by a volume comparison theorem showing that the normalized volume of a ball in the Stiefel or Grassmann manifold is asymptotically equal to the normalized volume of a ball in its embedding sphere as the dimension grows to infinity. Then, bounds on the kissing radius are derived alongside corresponding bounds on the density. Unlike spherical codes or codes in flat spaces, the kissing radius of a Grassmann or Stiefel code cannot be exactly determined from its minimum distance. It is nonetheless possible to derive bounds on density as functions of the minimum distance. Stiefel and Grassmann codes have larger density than their image spherical codes as the dimensions tend to infinity. Finally, the bounds on density lead to refinements of the standard Hamming bounds for Stiefel and Grassmann codes.
    Comment: Two-column version (24 pages, 6 figures, 4 tables). To appear in IEEE Transactions on Information Theory.
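
    For a concrete handle on the chordal distance in the Grassmann case (a standard projection-matrix formula, sketched under the assumption that codewords are given as orthonormal bases; not code from the paper, and normalization conventions vary), the snippet below embeds two subspaces into Euclidean space via their projection matrices and measures their separation:

        # Chordal distance between two points of the Grassmann manifold G(n, p),
        # computed via projection matrices (one common convention).
        import numpy as np

        def orthonormal_basis(A):
            # Orthonormal basis of the column span of A (n x p), via reduced QR.
            q, _ = np.linalg.qr(A)
            return q

        def chordal_distance(A, B):
            # d_c = ||P_A - P_B||_F / sqrt(2) = sqrt(sum_i sin^2(theta_i)),
            # where theta_i are the principal angles between the two subspaces.
            Pa = A @ A.conj().T
            Pb = B @ B.conj().T
            return np.linalg.norm(Pa - Pb, "fro") / np.sqrt(2)

        rng = np.random.default_rng(1)
        n, p = 6, 2
        A = orthonormal_basis(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
        B = orthonormal_basis(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
        # The minimum of such pairwise distances over a code is the minimum distance
        # from which the density bounds in the paper are derived.
        print(chordal_distance(A, B))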