Computing Extensions of Linear Codes
This paper deals with the problem of increasing the minimum distance of a
linear code by adding one or more columns to the generator matrix. Several
methods to compute extensions of linear codes are presented. Many codes
improving the previously known lower bounds on the minimum distance have been
found. Comment: accepted for publication at ISIT 0
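The search this abstract describes can be illustrated with a brute-force sketch, feasible only for tiny codes (a binary code is given by a generator matrix as a list of 0/1 rows; `min_distance` and `extend_by_one_column` are illustrative names, not the paper's actual methods):

```python
from itertools import product

def min_distance(G):
    """Brute-force minimum Hamming distance of the binary code spanned by G."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        # codeword = msg * G over GF(2), computed column by column
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(cw))
    return best

def extend_by_one_column(G):
    """Try every possible extra column; keep one that maximises the
    minimum distance of the extended code."""
    best_G, best_d = G, min_distance(G)
    for col in product([0, 1], repeat=len(G)):
        cand = [row + [c] for row, c in zip(G, col)]
        d = min_distance(cand)
        if d > best_d:
            best_G, best_d = cand, d
    return best_G, best_d
```

For example, extending the trivial [2,2,1] code with the all-ones column yields a [3,2,2] code.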
Algorithm 1033: Parallel Implementations for Computing the Minimum Distance of a Random Linear Code on Distributed-memory Architectures
This is the accepted version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Mathematical Software, Volume 49, Issue 1, https://doi.org/10.1145/3573383. The minimum distance of a linear code is a key concept in information theory, so the time required to compute it matters to many problems in this area. In this article, we introduce a family of implementations of the Brouwer–Zimmermann algorithm for distributed-memory architectures for computing the minimum distance of a random linear code over GF(2). Current commercial and public-domain software works only on unicore or shared-memory architectures, which limits the number of cores/processors that can be employed in the computation. Our implementations target distributed-memory architectures and can therefore employ hundreds or even thousands of cores in the computation of the minimum distance. Our experimental results show that our implementations are much faster, by up to several orders of magnitude, than the implementations in wide use today. The authors would like to thank the University of Alicante for granting access to the ua cluster. They also want to thank Javier Navarrete for his assistance and support when working on this machine. The authors would also like to thank Robert A. van de Geijn from the University of Texas at Austin for granting access to the skx cluster. Quintana-Ortí was supported by the Spanish Ministry of Science, Innovation and Universities under Grant RTI2018-098156-B-C54, co-financed by FEDER funds.
Hernando was supported by the Spanish Ministry of Science, Innovation and Universities under Grants PGC2018-096446-B-C21 and PGC2018-096446-B-C22, and by University Jaume I under Grant PB1-1B2018-10.
Igual was supported by Grants PID2021-126576NB-I00 and RTI2018-B-I00, funded by MCIN/AEI/10.13039/501100011033
and by "ERDF A way of making Europe", and by the Spanish CM (S2018/TCS-4423). This work has been supported by the Madrid Government (Comunidad de Madrid, Spain) under the Multiannual Agreement with Complutense University in the line Program to Stimulate Research for Young Doctors in the context of the V PRICIT (Regional Programme of Research and Technological Innovation) under project PR65-19/22445.
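For reference, the early-termination idea at the heart of the Brouwer–Zimmermann algorithm can be sketched in a simplified single-information-set form. The real algorithm uses several disjoint information sets to tighten the lower bound much faster; the function name and the systematic-form assumption below are ours:

```python
from itertools import combinations

def bz_min_distance(G):
    """Simplified (single information set) Brouwer–Zimmermann search.

    Assumes G is a systematic binary generator matrix [I | A]. Messages
    are enumerated by increasing Hamming weight w; a message of weight w
    yields a codeword of weight >= w in the information positions, so the
    search can stop as soon as w + 1 reaches the best weight found.
    """
    k, n = len(G), len(G[0])
    upper = n
    for w in range(1, k + 1):
        for support in combinations(range(k), w):
            cw = [0] * n
            for i in support:  # add the selected rows over GF(2)
                cw = [(a + b) % 2 for a, b in zip(cw, G[i])]
            upper = min(upper, sum(cw))
        if w + 1 >= upper:  # lower bound has met the upper bound
            break
    return upper
```

On a [7,4,3] systematic code, the loop terminates after weight-2 messages instead of enumerating all 15 nonzero codewords.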
An Extension of the Brouwer–Zimmermann Algorithm for Calculating the Minimum Weight of a Linear Code
A modification of the Brouwer–Zimmermann algorithm for calculating the minimum weight of a linear code over a finite field is presented. The aim is to reduce the number of codewords that must be considered. The reduction is significant in cases where the length of a code is not divisible by its dimension. The proposed algorithm can also be used to find all codewords of weight less than a given constant. The algorithm is implemented in the software package QextNewEdition.
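The "all codewords of weight less than a given constant" use case can be sketched by naive enumeration, which is the baseline the paper's algorithm improves on (names and representation are illustrative):

```python
from itertools import product

def codewords_below_weight(G, c):
    """All nonzero codewords of the binary code spanned by G whose
    Hamming weight is strictly less than c (brute force over messages)."""
    k = len(G)
    found = []
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # exclude the zero codeword
        cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2
                   for col in zip(*G))
        if sum(cw) < c:
            found.append(cw)
    return found
```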
Automated searching for quantum subsystem codes
Quantum error correction allows for faulty quantum systems to behave in an
effectively error free manner. One important class of techniques for quantum
error correction is the class of quantum subsystem codes, which are relevant
both to active quantum error correcting schemes as well as to the design of
self-correcting quantum memories. Previous approaches for investigating these
codes have focused on applying theoretical analysis to look for interesting
codes and to investigate their properties. In this paper we present an
alternative approach that uses computational analysis to accomplish the same
goals. Specifically, we present an algorithm that computes the optimal quantum
subsystem code that can be implemented given an arbitrary set of measurement
operators that are tensor products of Pauli operators. We then demonstrate the
utility of this algorithm by performing a systematic investigation of the
quantum subsystem codes that exist in the setting where the interactions are
limited to 2-body interactions between neighbors on lattices derived from the
convex uniform tilings of the plane. Comment: 38 pages, 15 figures, 10 tables. The algorithm described in this paper is available as both a library and a command-line program (including full source code) that can be downloaded from http://github.com/gcross/CodeQuest/downloads. The source code used to apply the algorithm to scan the lattices is available upon request. Please feel free to contact the authors with questions.
Efficient representation of binary nonlinear codes: constructions and minimum distance computation
Combinatorics, Coding and Security Group (CCSG). A binary nonlinear code can be represented as a union of cosets of a binary linear subcode. In this paper, the complexity of some algorithms to obtain this representation is analyzed. Moreover, some properties and constructions of new codes from given ones in terms of this representation are described. Algorithms to compute the minimum distance of binary nonlinear codes, based on known algorithms for linear codes, are also established, along with an algorithm to decode such codes. All results are written in such a way that they can be easily transformed into algorithms, and the performance of these algorithms is evaluated.
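The coset representation and the distance computation it enables can be sketched as follows (a toy version under our own naming; the paper's algorithms are more refined). Because the subcode S is linear, the distances between cosets v_i + S and v_j + S are exactly the weights of the elements of (v_i + v_j) + S, so only pairs of coset representatives need scanning:

```python
from itertools import product

def span(S_gen):
    """All vectors of the binary linear subcode generated by S_gen."""
    k = len(S_gen)
    vecs = set()
    for msg in product([0, 1], repeat=k):
        vecs.add(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                       for col in zip(*S_gen)))
    return vecs

def min_distance_cosets(S_gen, reps):
    """Minimum distance of the (possibly nonlinear) code
    C = union over i of (reps[i] + <S_gen>), computed coset-pair-wise."""
    sub = span(S_gen)
    best = None
    for i in range(len(reps)):
        for j in range(i, len(reps)):
            # difference of the two representatives over GF(2)
            diff = tuple((a + b) % 2 for a, b in zip(reps[i], reps[j]))
            for s in sub:
                w = sum((a + b) % 2 for a, b in zip(diff, s))
                if w and (best is None or w < best):
                    best = w
    return best
```

With a single coset this reduces to the minimum weight of the linear subcode itself.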
The improvement of strategic crops production via a goal programming model with novel multi-interval weights
Nowadays, the need to increase agricultural production has become a challenging task for most countries. Many resource factors contribute to the deterioration of production levels, such as low water levels, desertification, soil salinity, shortage of capital, lack of equipment, the impact of crop exports and imports, shortages of fertilizers and pesticides, and the ineffective role of agricultural extension services, all of which are significant in this sector. The main objective of this research is to develop a fuzzy goal programming (FGP) model to improve agricultural crop production, leading to increased agricultural benefits (more tonnes of produce per acre) based on the minimization of the main resources (water, fertilizer and pesticide), which determine the weights in the objective function subject to different constraints (land area, irrigation, labour, fertilizer, pesticide, equipment and seed). FGP and GP were utilized to solve multi-objective decision-making (MODM) problems. From the results, this research has successfully presented a new alternative method that introduces multi-interval weights for solving a multi-objective FGP and GP model in a fuzzy manner, in the current uncertain decision-making environment for the agricultural sector. The significance of this research lies in the fact that some farming zones have resource limitations while others adversely impact their environment through misuse of resources. Finally, the model was used to determine the efficiency of each farming zone relative to the others in terms of resource utilization.
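A minimal sketch of the weighted goal-programming idea behind such models, on a hypothetical two-crop instance (the crop coefficients, resource caps, target and weights below are invented for illustration and are not taken from the paper):

```python
def weighted_goal_value(plan, goals, weights):
    """Weighted-deviation objective of a goal programme: each goal is a
    pair (achieved_fn, target); under-achievement is penalised by weight."""
    total = 0.0
    for (achieved, target), w in zip(goals, weights):
        under = max(0.0, target - achieved(plan))
        total += w * under
    return total

def solve_toy_goal_programme():
    """Exhaustive search over integer acreages (x1, x2) of two crops.
    Hypothetical data: yields 3 and 2 t/acre, water cap 10, fertilizer
    cap 8, production target 12 tonnes."""
    goals = [(lambda p: 3 * p[0] + 2 * p[1], 12.0)]  # yield goal
    weights = [1.0]
    best_plan, best_val = None, float("inf")
    for x1 in range(0, 11):
        for x2 in range(0, 11):
            # resource constraints: water and fertilizer usage per acre
            if 2 * x1 + x2 > 10 or x1 + 2 * x2 > 8:
                continue
            val = weighted_goal_value((x1, x2), goals, weights)
            if val < best_val:
                best_plan, best_val = (x1, x2), val
    return best_plan, best_val
```

A real model would replace the exhaustive search with an LP solver and add interval (or multi-interval) weights per goal.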
Algebraic Codes For Error Correction In Digital Communication Systems
Access to the full-text thesis is no longer available at the author's request, due to 3rd party copyright restrictions. Access removed on 29.11.2016 by CS (TIS). Metadata merged with duplicate record (http://hdl.handle.net/10026.1/899) on 20.12.2016 by CS (TIS). C. Shannon presented theoretical conditions under which error-free communication is possible in the presence of noise. Subsequently, the notion of using error-correcting codes to mitigate the effects of noise in digital transmission was introduced by R. Hamming. Algebraic codes, codes described using powerful tools from algebra, came to the fore early in the search for good error-correcting codes. Many classes of algebraic codes now exist and are known to have the best properties of any known classes of codes. An error-correcting code can be described by three of its most important properties: length, dimension and minimum distance. Given codes with the same length and dimension, the one with the largest minimum distance will provide better error correction. As a result, the research focuses on finding improved codes with better minimum distances than any known codes.
Algebraic geometry codes are obtained from curves. They are a culmination of years of research into algebraic codes and generalise most known algebraic codes. Additionally, they have exceptional distance properties as their lengths become arbitrarily large. Algebraic geometry codes are studied in great detail, with special attention given to their construction and decoding. The practical performance of these codes is evaluated and compared with previously known codes in different communication channels. Furthermore, many new codes that have better minimum distances than the best known codes with the same length and dimension are presented, obtained from a generalised construction of algebraic geometry codes. Goppa codes are also an important class of algebraic codes. A construction of binary extended Goppa codes is generalised to codes with nonbinary alphabets, and as a result many new codes are found. This construction is shown to be an efficient way to extend another well-known class of algebraic codes, BCH codes. A generic method of shortening codes whilst increasing the minimum distance is generalised. An analysis of this method reveals a close relationship with methods of extending codes. Some new codes from Goppa codes are found by exploiting this relationship. Finally, an extension method for BCH codes is presented and is shown to be as good as a well-known method of extension in certain cases.
A STUDY OF LINEAR ERROR CORRECTING CODES
Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels: classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams.
Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson–Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes. Their extension to nonbinary fields is shown to be straightforward. These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum-likelihood performance may be achieved by a modified belief-propagation decoder which uses a different subset of codewords of the dual code for each iteration.
Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distances than the previously best known linear codes have been found.
It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance, and the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed, and in conjunction with this it is proved that some published results are incorrect, and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes.
It is shown that linear codes may be efficiently decoded using the incremental-correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived, and a novel CRC-less error-detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental-redundancy communications system, and it is shown that sequences of good error-correction codes suitable for use in incremental-redundancy communications systems may be obtained using Constructions X and XX. Examples are given and their performance is presented in comparison to conventional CRC schemes.
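The conventional CRC check that the thesis benchmarks against works by polynomial long division over GF(2); a minimal sketch follows (bit lists are MSB-first; the 3-bit generator in the usage below is an arbitrary example, not one from the thesis):

```python
def crc_divide(bits, poly_bits):
    """Remainder of the polynomial division bits / poly over GF(2).
    Both arguments are 0/1 lists, most significant bit first."""
    work = list(bits)
    r = len(poly_bits) - 1
    for i in range(len(work) - r):
        if work[i]:  # cancel the leading term with the generator
            for j, p in enumerate(poly_bits):
                work[i + j] ^= p
    return work[-r:]

def crc_encode(msg, poly_bits):
    """Append the CRC of msg: the remainder of msg * x^r mod poly."""
    r = len(poly_bits) - 1
    return msg + crc_divide(msg + [0] * r, poly_bits)
```

A received frame passes the check exactly when `crc_divide` of the whole frame is all zeros; any single-bit error fails the check for a generator with at least two terms.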
Deep sequencing approaches for the analysis of prokaryotic transcriptional boundaries and dynamics
The identification of the protein-coding regions of a genome is straightforward due to the universality of start and stop codons. However, the boundaries of the transcribed regions, conditional operon structures, non-coding RNAs and the dynamics of transcription, such as pausing of elongation, are non-trivial to identify, even in the comparatively simple genomes of prokaryotes. Traditional methods for the study of these areas, such as tiling arrays, are noisy, labour-intensive and lack the resolution required for densely-packed bacterial genomes. Recently, deep sequencing has become increasingly popular for the study of the transcriptome due to its lower costs, higher accuracy and single nucleotide resolution. These methods have revolutionised our understanding of prokaryotic transcriptional dynamics. Here, we review the deep sequencing and data analysis techniques that are available for the study of transcription in prokaryotes, and discuss the bioinformatic considerations of these analyses.