On the similarities between generalized rank and Hamming weights and their applications to network coding
Rank weights and generalized rank weights have been proven to characterize
error and erasure correction, and information leakage in linear network coding,
in the same way as Hamming weights and generalized Hamming weights describe
classical error and erasure correction, and information leakage in wire-tap
channels of type II and code-based secret sharing. Although many similarities
between both cases have been established and proven in the literature, many
other known results in the Hamming case, such as bounds or characterizations of
weight-preserving maps, have not been translated to the rank case yet, or in
some cases have been proven after developing a different machinery. The aim of
this paper is to further relate both weights and generalized weights, to show
that the results and proofs in both cases are usually essentially the same, and
to examine the significance of these similarities in network coding. Some of the
new results in the rank case also have new consequences in the Hamming case.
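As an illustration of the objects discussed above, generalized Hamming weights of a small code can be computed by brute force: the $r$-th weight $d_r$ is the minimum support size over all $r$-dimensional subcodes. A minimal sketch for a toy binary $[4,2]$ code (the example code and function names are ours, purely for illustration):

```python
from itertools import combinations

def gf2_span(gens):
    """All GF(2) linear combinations of the generator bitmasks."""
    space = {0}
    for g in gens:
        space |= {v ^ g for v in space}
    return space

def generalized_hamming_weight(generators, r):
    """r-th generalized Hamming weight d_r: the minimum number of coordinate
    positions used (support size) over all r-dimensional subcodes of the
    code spanned by `generators` (codewords encoded as bitmasks)."""
    code = sorted(gf2_span(generators) - {0})
    best = None
    for subset in combinations(code, r):
        span = gf2_span(subset)
        if len(span) != 2 ** r:      # chosen codewords are dependent
            continue
        support = 0
        for v in span:
            support |= v             # union of supports of the subcode
        w = bin(support).count("1")
        best = w if best is None else min(best, w)
    return best

# [4,2] binary code with generator matrix rows 1100 and 0011
G = [0b1100, 0b0011]
print(generalized_hamming_weight(G, 1))  # d_1 = minimum distance = 2
print(generalized_hamming_weight(G, 2))  # d_2 = 4
```

Here $d_1$ recovers the minimum distance, and $d_2$ is the support of the whole code, matching the role these invariants play in wire-tap and secret-sharing analyses.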
Isomorphic signal ensembles and their application in async-address systems
The objects under consideration are async-address systems that use code division of subscribers. The subject of the analysis is quasi-orthogonal signal ensembles based on code sequences that have normalized cross-correlation function (CCF) characteristics and provide reliable separation of subscribers (objects) under imitation and signal-like interference. The purpose of the analysis is to create a model and a methodology for constructing a set of the best code-sequence ensembles, with the ability to quickly switch to another instance of the set to counter imitation and signal-like interference. The solution is based on algebraic models of code sequences and their CCF representation.
The article proposes a comprehensive technique for constructing a set of signal ensembles with normalized CCF characteristics. The quality of the primary ensemble of code sequences is ensured by a CCF-computation procedure optimized in the number of options to examine. The optimization rests on basic properties of Galois fields, in particular their isomorphism property, which yields a significant reduction in computation when choosing the primary ensemble of code sequences with the specified CCF properties. The choice of the best (largest) code-sequence ensemble itself reduces to a classical combinatorial problem: finding a maximum clique in a graph. The construction of the set of signal ensembles with normalized CCF characteristics relies on special combinatorial procedures and algorithms based on the multiplicative properties of Galois fields. The article also analyzes the effectiveness of known, proven maximum-clique search procedures. The results will be useful in the design of infocommunication systems that use complex signals with a large base and a variable structure to protect against signal-structure analysis and the effects of imitation and signal-like interference.
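For concreteness, the periodic cross-correlation characteristics discussed above can be computed directly. A minimal sketch using a length-7 m-sequence, whose periodic autocorrelation is the classic two-valued profile (the sequence and helper name are illustrative choices, not the article's construction):

```python
def periodic_ccf(a, b):
    """Periodic cross-correlation function of two equal-length ±1 sequences:
    R_ab(tau) = sum_i a[i] * b[(i + tau) mod N], for each shift tau."""
    n = len(a)
    return [sum(a[i] * b[(i + t) % n] for i in range(n)) for t in range(n)]

# m-sequence of length 7 generated by x^3 + x^2 + 1, bits 1110100
# mapped 0 -> +1, 1 -> -1
m = [-1, -1, -1, 1, -1, 1, 1]
# two-valued autocorrelation: peak N = 7 at tau = 0, constant -1 elsewhere
print(periodic_ccf(m, m))  # [7, -1, -1, -1, -1, -1, -1]
```

Normalizing by the peak value gives the kind of CCF bound an ensemble-selection procedure would check before admitting a pair of sequences into the set.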
On the discrete logarithm problem in finite fields of fixed characteristic
For $q$ a prime power, the discrete logarithm problem (DLP) in $\mathbb{F}_q^{\times}$
consists in finding, for any $g \in \mathbb{F}_q^{\times}$
and $h \in \langle g \rangle$, an integer $x$ such that $g^x = h$. We present
an algorithm for computing discrete logarithms with which we prove that for
each prime $p$ there exist infinitely many explicit extension fields $\mathbb{F}_{p^n}$
in which the DLP can be solved in expected quasi-polynomial
time. Furthermore, subject to a conjecture on the existence of irreducible
polynomials of a certain form, the algorithm solves the DLP in all extensions
$\mathbb{F}_{p^n}$ in expected quasi-polynomial time.
Comment: 15 pages, 2 figures. To appear in Transactions of the AMS.
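For contrast with the quasi-polynomial algorithm above, the generic baby-step giant-step method solves the DLP in any group in about O(√p) operations. A minimal sketch over a small prime field (the example parameters are ours; the abstract's algorithm is far faster for its target fields):

```python
from math import isqrt

def bsgs_dlog(g, h, p):
    """Baby-step giant-step: find x with g^x ≡ h (mod p), or None if no
    such x exists. Writes x = i*m + j with m ≈ sqrt(p), tabulating the
    baby steps g^j and then taking giant steps by g^(-m)."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # g^j -> j
    ginv_m = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * ginv_m % p                # h * g^(-im)
    return None

# small example in F_101: 2 is a primitive root mod 101
x = bsgs_dlog(2, 37, 101)
print(x, pow(2, x, 101))  # exponent x with 2^x ≡ 37 (mod 101)
```

The point of the quasi-polynomial results above is precisely to beat this √p barrier in the fixed-characteristic setting.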
On Index Calculus Algorithms for Subfield Curves
In this paper we further the study of index calculus methods for solving the elliptic curve discrete logarithm problem (ECDLP). We focus on the index calculus for subfield curves, also called Koblitz curves, defined over $\mathbb{F}_q$ with ECDLP in $\mathbb{F}_{q^n}$. Instead of accelerating the solution of polynomial systems during index calculus as was predominantly done in previous work, we define factor bases that are invariant under the $q$-power Frobenius automorphism of the field $\mathbb{F}_{q^n}$, reducing the number of polynomial systems that need to be solved. A reduction by a factor of $1/n$ is the best one could hope for. We show how to choose factor bases to achieve this, while simultaneously accelerating the linear algebra step of the index calculus method for Koblitz curves by a factor $n^2$. Furthermore, we show how to use the Frobenius endomorphism to improve symmetry breaking for Koblitz curves. We provide constructions of factor bases with the desired properties, and we study their impact on the polynomial system solving costs experimentally.
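The Frobenius-invariance idea can be illustrated at the level of field elements: the $q$-power Frobenius partitions $\mathbb{F}_{q^n}$ into orbits of size dividing $n$, and an invariant factor base needs only one representative per orbit. A minimal sketch over $\mathbb{F}_{16}$ with $q = 2$, $n = 4$ (the field encoding and function names are ours, purely for illustration, not the paper's construction):

```python
def gf16_mul(a, b):
    """Multiply in F_16 = F_2[x]/(x^4 + x + 1), elements as 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0b10011      # reduce by x^4 + x + 1
    return r

def frobenius(a):
    """q-power Frobenius for q = 2: a -> a^2."""
    return gf16_mul(a, a)

# Partition the nonzero elements of F_16 into Frobenius orbits.
# A Frobenius-invariant factor base keeps one representative per orbit,
# so its size drops by (roughly) a factor n = 4 here.
seen, orbits = set(), []
for a in range(1, 16):
    if a in seen:
        continue
    orbit, x = [], a
    while x not in orbit:
        orbit.append(x)
        x = frobenius(x)
    seen.update(orbit)
    orbits.append(orbit)
print(orbits)
```

The 15 nonzero elements fall into orbits of sizes 1, 2, 4, 4, 4 (the short orbits come from the subfields $\mathbb{F}_2$ and $\mathbb{F}_4$), which is why the $1/n$ reduction is an upper bound rather than exact.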
Deterministic root finding over finite fields using Graeffe transforms
We design new deterministic algorithms, based on Graeffe transforms, to compute all the roots of a polynomial which splits over a finite field $\mathbb{F}_q$. Our algorithms were designed to be particularly efficient in the case when the cardinality $q - 1$ of the multiplicative group of $\mathbb{F}_q$ is smooth. Such fields are often used in practice because they support fast discrete Fourier transforms. We also present a new nearly optimal algorithm for computing characteristic polynomials of multiplication endomorphisms in finite field extensions. This algorithm allows for the efficient computation of Graeffe transforms of arbitrary orders.
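The Graeffe transform at the heart of these algorithms is easy to state: from $f$, build $g$ with $g(x^2) = \pm f(x)f(-x)$, so that the roots of $g$ are the squares of the roots of $f$. A minimal sketch over a small prime field (the example polynomial and field are ours; the paper's root-finding algorithms are considerably more refined):

```python
def polymul_mod_p(f, g, p):
    """Multiply polynomials over F_p (coefficient lists, low degree first)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def graeffe(f, p):
    """Graeffe transform over F_p: returns g with g(x^2) = ±f(x)·f(-x),
    so the roots of g are the squares of the roots of f."""
    f_neg = [(c if i % 2 == 0 else -c % p) for i, c in enumerate(f)]
    prod = polymul_mod_p(f, f_neg, p)
    assert all(c == 0 for c in prod[1::2])   # odd coefficients vanish
    return prod[0::2]

# f = (x - 2)(x - 3) = x^2 + 2x + 6 over F_7; roots 2 and 3
f = [6, 2, 1]
g = graeffe(f, 7)
print(g)  # g(y) = y^2 + y + 1, with roots 2^2 = 4 and 3^2 = 2 (mod 7)
print([y for y in range(7) if
       sum(c * pow(y, i, 7) for i, c in enumerate(g)) % 7 == 0])
```

Iterating the transform pushes roots into ever smaller subgroups of $\mathbb{F}_q^{\times}$, which is where the smoothness of $q - 1$ pays off.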
Fast Fourier transform via automorphism groups of rational function fields
The Fast Fourier Transform (FFT) over a finite field $\mathbb{F}_q$ computes
evaluations of a given polynomial of degree less than $n$ at a specifically
chosen set of $n$ distinct evaluation points in $\mathbb{F}_q$. If $q$ or $q-1$
is a smooth number, then the divide-and-conquer approach leads to the fastest
known FFT algorithms. Depending on the type of group that the set of evaluation
points forms, these algorithms are classified as multiplicative (Math of Comp.
1965) and additive (FOCS 2014) FFT algorithms. In this work, we provide a
unified framework for FFT algorithms that includes both multiplicative and
additive FFT algorithms as special cases, and beyond: our framework also works
when $q+1$ is smooth, while all known results require $q$ or $q-1$ to be
smooth. For the new case where $q+1$ is smooth (this new case was not
considered before in the literature as far as we know), we show that if $n$ is a
divisor of $q+1$ that is $B$-smooth for a real $B > 0$, then our FFT needs
$O(Bn\log n)$ arithmetic operations in $\mathbb{F}_q$. Our unified framework is
a natural consequence of introducing algebraic function fields into the
study of the FFT.
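The classical multiplicative (smooth $q-1$) case that this framework generalizes can be sketched as a radix-2 number-theoretic transform; the evaluation points are the powers of a root of unity, i.e. a cyclic subgroup of $\mathbb{F}_q^{\times}$ (the example field and parameters are ours):

```python
def ntt(a, omega, p):
    """Multiplicative radix-2 FFT (number-theoretic transform) over F_p:
    evaluates the polynomial with coefficient list `a` at the powers of a
    primitive len(a)-th root of unity `omega`. Requires len(a) to be a
    power of two dividing p - 1, i.e. the smooth-(q-1) case above."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % p, p)   # omega^2 has order n/2
    odd = ntt(a[1::2], omega * omega % p, p)
    out, w = [0] * n, 1
    for k in range(n // 2):                      # butterfly combine step
        t = w * odd[k] % p
        out[k] = (even[k] + t) % p
        out[k + n // 2] = (even[k] - t) % p
        w = w * omega % p
    return out

# F_257: 257 - 1 = 2^8 is smooth, and 3 is a primitive root mod 257,
# so omega = 3^((257-1)/8) is a primitive 8th root of unity.
p, n = 257, 8
omega = pow(3, (p - 1) // n, p)
a = [1, 2, 3, 4, 0, 0, 0, 0]   # coefficients of 1 + 2x + 3x^2 + 4x^3
vals = ntt(a, omega, p)
# spot-check against direct evaluation at omega^1
assert vals[1] == sum(c * pow(omega, i, p) for i, c in enumerate(a)) % p
print(vals)
```

The additive and smooth-$(q+1)$ cases replace this cyclic subgroup with other group structures on the evaluation set, which is what the function-field framework unifies.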