On the weight distribution of convolutional codes
Detailed information about the weight distribution of a convolutional code is
given by the adjacency matrix of the state diagram associated with a controller
canonical form of the code. We will show that this matrix is an invariant of
the code. Moreover, it will be proven that codes with the same adjacency matrix
have the same dimension and the same Forney indices, and, finally, that for
one-dimensional binary convolutional codes the adjacency matrix determines the
code uniquely up to monomial equivalence.
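The adjacency matrix described above can be made concrete for a small encoder. Below is a minimal Python sketch, assuming the standard rate-1/2, memory-2 binary encoder with generators (1 + D^2, 1 + D + D^2); this particular encoder, and the choice to record branch output weights rather than monomials x^w, are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: the state-diagram "adjacency matrix" of a small binary
# convolutional encoder in controller canonical form. The rate-1/2, memory-2
# generators (1+D^2, 1+D+D^2) are a standard textbook example.

MEMORY = 2
# Generator taps over the register [u(t), u(t-1), u(t-2)], MSB first:
# 0b101 = 1 + D^2, 0b111 = 1 + D + D^2.
GENERATORS = [0b101, 0b111]

def parity(x):
    """Parity of the set bits of x, i.e. a GF(2) inner product."""
    return bin(x).count("1") & 1

def step(state, u):
    """One encoder step: returns (next_state, output_weight)."""
    reg = (u << MEMORY) | state          # bits are [u(t), u(t-1), u(t-2)]
    weight = sum(parity(reg & g) for g in GENERATORS)
    return reg >> 1, weight              # next state holds [u(t), u(t-1)]

# A[s][t] lists the output weights of the branches from state s to state t;
# replacing each weight w by the monomial x^w gives the weight-enumerator
# adjacency matrix of the state diagram.
n_states = 1 << MEMORY
A = [[[] for _ in range(n_states)] for _ in range(n_states)]
for s in range(n_states):
    for u in (0, 1):
        t, w = step(s, u)
        A[s][t].append(w)
```

Each state has exactly two outgoing branches (one per input bit), so the matrix has 2^MEMORY · 2 branch labels in total.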
Virtual Ground Truth, and Pre-selection of 3D Interest Points for Improved Repeatability Evaluation of 2D Detectors
In Computer Vision, finding simple features is performed using classifiers
called interest point (IP) detectors, which are often utilised to track
features as the scene changes. For 2D based classifiers it has been intuitive
to measure repeated point reliability using 2D metrics given the difficulty to
establish ground truth beyond 2D. The aim is to bridge the gap between 2D
classifiers and 3D environments, and improve performance analysis of 2D IP
classification on 3D objects. This paper builds on existing work with 3D
scanned and artificial models to test conventional 2D feature detectors with
the assistance of virtualised 3D scenes. Virtual space depth is leveraged in
tests to perform pre-selection of closest repeatable points in both 2D and 3D
contexts before repeatability is measured. This more reliable ground truth is
used to analyse testing configurations with a singular and 12 model dataset
across affine transforms in x, y and z rotation, as well as x,y scaling with 9
well known IP detectors. The virtual scene's ground truth demonstrates that 3D
pre-selection eliminates a large portion of false positives that are normally
considered repeated in 2D configurations. The results indicate that 3D virtual
environments can provide assistance in comparing the performance of
conventional detectors when extending their applications to 3D environments,
and can result in better classification of features when testing prospective
classifiers' performance. A ROC based informedness measure also highlights
tradeoffs in 2D/3D performance compared to conventional repeatability measures.
Comment: Accepted for publication in CCVPR 2018 Conference Proceedings,
Wellington, New Zealand. 11 pages, 5 figures.
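The repeatability measure at the heart of this evaluation can be sketched in a few lines. The following is a toy 2D version, assuming a known ground-truth transform and a pixel threshold `eps`; the example points, the 5-pixel shift, and the threshold are illustrative, and the paper's depth-based 3D pre-selection is not modelled here.

```python
import math

def repeatability(ref_pts, test_pts, transform, eps=2.0):
    """Fraction of reference interest points re-detected within eps pixels
    of their ground-truth location under a known transform."""
    if not ref_pts:
        return 0.0
    matched = 0
    for p in ref_pts:
        gx, gy = transform(p)  # ground-truth position in the test image
        if any(math.hypot(gx - x, gy - y) <= eps for x, y in test_pts):
            matched += 1
    return matched / len(ref_pts)

# Toy example: detections under a pure 5-pixel horizontal shift.
def shift(p):
    return (p[0] + 5.0, p[1])

ref = [(0.0, 0.0), (10.0, 10.0)]
test = [(5.0, 0.5), (40.0, 40.0)]   # first point re-detected, second lost
score = repeatability(ref, test, shift)   # 0.5
```

The pre-selection step described above would filter `test_pts` down to candidates consistent with the virtual scene's depth before this score is computed, which removes 2D false positives that happen to land near the projected location.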
Asymptotic bounds for the sizes of constant dimension codes and an improved lower bound
We study asymptotic lower and upper bounds for the sizes of constant
dimension codes with respect to the subspace or injection distance, which is
used in random linear network coding. In this context we review known upper
bounds and show relations between them. A slightly improved version of the
so-called linkage construction is presented, which is used, for example, to
construct constant dimension codes, with given subspace distance and codeword
dimension, for all field sizes and sufficiently large dimensions of the ambient
space, whose sizes exceed the MRD bound of Etzion and Silberstein for codes
containing a lifted MRD code.
Comment: 30 pages, 3 tables.
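The subspace and injection distances used above can be computed directly from generator matrices. Below is a minimal sketch over GF(2), with the rows of each generator matrix encoded as integer bitmasks; this encoding is an implementation choice for the sketch, not taken from the paper.

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask of coordinates."""
    rank, rows = 0, [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        lead = pivot.bit_length() - 1
        # Eliminate the pivot's leading bit from the remaining rows.
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def subspace_distance(U, V):
    """d_S(U, V) = dim(U + V) - dim(U meet V) = 2*dim(U + V) - dim U - dim V."""
    return 2 * gf2_rank(U + V) - gf2_rank(U) - gf2_rank(V)

def injection_distance(U, V):
    """d_I(U, V) = max(dim U, dim V) - dim(U meet V)."""
    du, dv, ds = gf2_rank(U), gf2_rank(V), gf2_rank(U + V)
    return max(du, dv) - (du + dv - ds)
```

For example, two 2-dimensional subspaces of GF(2)^4 sharing a common 1-dimensional intersection have subspace distance 2 and injection distance 1.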
Column Rank Distances of Rank Metric Convolutional Codes
In this paper, we deal with so-called multi-shot network coding, meaning that the network is used several times (shots) to propagate the information. The framework we present is slightly more general than the one found in the literature. We study and introduce the notion of column rank distance of rank metric convolutional codes for any given rate and finite field. Within this new framework we generalize previous results on column distances of Hamming and rank metric convolutional codes [3, 8]. This contribution can be considered a follow-up of the work presented in [10].
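The rank metric underlying these codes can be illustrated with a toy computation. A sketch over GF(2), again encoding matrix rows as integer bitmasks (an illustrative choice, not the paper's notation):

```python
def gf2_rank(rows):
    """Rank over GF(2); each matrix row is an int bitmask."""
    rank, rows = 0, [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        lead = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def rank_distance(X, Y):
    """d_R(X, Y) = rank(X - Y); over GF(2), subtraction is bitwise XOR."""
    return gf2_rank([x ^ y for x, y in zip(X, Y)])
```

Column rank distances of a convolutional code are then obtained by applying this distance to truncated (windowed) codeword matrices, in analogy with column distances in the Hamming metric.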
Decoding of 2D convolutional codes over an erasure channel
In this paper we address the problem of decoding 2D convolutional codes over an erasure channel. To this end, we introduce the notion of neighbors around a set of erasures, which can be considered an analogue of the sliding window in the context of 1D convolutional codes. The main idea is to reduce the decoding problem for 2D convolutional codes to the decoding of a set of associated 1D convolutional codes. We first show how to recover sets of erasures distributed on vertical, horizontal and diagonal lines. Finally, we outline some ideas to treat sets of erasures distributed randomly on the 2D plane. © 2016 AIMS
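The sliding-window idea carried over from the 1D setting can be illustrated on a toy 1D example. The sketch below assumes a systematic rate-1/2 code with outputs v_t = (u_t, u_t XOR u_{t-1}) and u_{-1} = 0; this code and its window-by-window recovery rule are illustrative stand-ins, not the paper's 2D construction.

```python
def recover_erasures(info, parity):
    """Sliding-window erasure recovery for the toy rate-1/2 code
    v_t = (u_t, u_t XOR u_{t-1}), with u_{-1} = 0.
    info[t] is u_t or None if erased; parity[t] is u_t XOR u_{t-1}."""
    u = [0] + list(info)               # prepend the known u_{-1} = 0
    changed = True
    while changed:                     # sweep until no window can help
        changed = False
        for t in range(1, len(u)):
            p = parity[t - 1]
            if p is None:
                continue
            if u[t] is None and u[t - 1] is not None:
                u[t] = p ^ u[t - 1]    # solve the window equation for u_t
                changed = True
            elif u[t - 1] is None and u[t] is not None:
                u[t - 1] = p ^ u[t]    # or for u_{t-1}
                changed = True
    return u[1:]
```

Each parity symbol gives one linear equation over GF(2); repeatedly solving the windows in which only one symbol is missing recovers runs of erasures, which is the 1D shadow of reducing a 2D erasure pattern to a set of 1D decoding problems.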
A state space approach to periodic convolutional codes
In this paper we study periodically time-varying convolutional
codes by means of input-state-output representations. Using these
representations we investigate under which conditions a given time-invariant
convolutional code can be transformed into an equivalent periodic
time-varying one. The relation between these two classes of convolutional
codes is studied for period 2. We illustrate the ideas presented in this
paper by constructing a periodic time-varying convolutional code from a
time-invariant one. The resulting periodic code has a larger free distance
than any time-invariant convolutional code with equivalent parameters.
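An input-state-output representation as used above can be simulated directly. The sketch below assumes a single-bit state and single-bit system "matrices" (A, B, C, D) over GF(2), cycled with period 2; these sizes and the specific coefficient values are purely illustrative.

```python
def encode_periodic(u_bits, systems):
    """Encode over GF(2) with a periodically time-varying input-state-output
    system x_{t+1} = A_t x_t + B_t u_t, y_t = C_t x_t + D_t u_t.
    State and coefficients are single bits; `systems` is a list of
    (A, B, C, D) tuples cycled with period len(systems)."""
    x, out = 0, []
    for t, u in enumerate(u_bits):
        A, B, C, D = systems[t % len(systems)]
        out.append((C & x) ^ (D & u))  # y_t = C_t x_t + D_t u_t over GF(2)
        x = (A & x) ^ (B & u)          # x_{t+1} = A_t x_t + B_t u_t
    return out

# Period-2 example: the matrices alternate between two systems.
period2 = [(1, 1, 1, 1), (0, 1, 1, 0)]
out = encode_periodic([1, 0, 1], period2)
```

Passing a one-element list of systems recovers the time-invariant case, so the same routine covers both classes of codes compared in the paper.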