
    Constructions of Snake-in-the-Box Codes for Rank Modulation

    A snake-in-the-box code is a Gray code which is capable of detecting a single error. Gray codes are important in the context of the rank modulation scheme which was suggested recently for representing information in flash memories. For a Gray code in this scheme the codewords are permutations, two consecutive codewords are obtained by using the "push-to-the-top" operation, and the distance measure is defined on permutations. In this paper the Kendall's $\tau$-metric is used as the distance measure. We present a general method for constructing such Gray codes. We apply the method recursively to obtain a snake of length $M_{2n+1}=((2n+1)(2n)-1)M_{2n-1}$ for permutations of $S_{2n+1}$, from a snake of length $M_{2n-1}$ for permutations of $S_{2n-1}$. Thus, we have $\lim\limits_{n\to \infty} \frac{M_{2n+1}}{S_{2n+1}}\approx 0.4338$, improving on the previously known ratio of $\lim\limits_{n\to \infty} \frac{1}{\sqrt{\pi n}}$. By using the general method we also present a direct construction. This direct construction is based on necklaces and it might yield snakes of length $\frac{(2n+1)!}{2} - 2n + 1$ for permutations of $S_{2n+1}$. The direct construction was applied successfully for $S_7$ and $S_9$, and hence $\lim\limits_{n\to \infty} \frac{M_{2n+1}}{S_{2n+1}}\approx 0.4743$.
    Comment: IEEE Transactions on Information Theory
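
    For readers unfamiliar with the rank modulation setting, the short Python sketch below (not from the paper; function names are illustrative) shows the two ingredients the abstract mentions: the "push-to-the-top" operation that moves between consecutive codewords, and the Kendall's $\tau$ distance used as the metric.

```python
from itertools import combinations

def push_to_the_top(perm, i):
    """Move the element at position i to the front: the 'push-to-the-top'
    operation used to step between consecutive codewords."""
    return (perm[i],) + perm[:i] + perm[i + 1:]

def kendall_tau(p, q):
    """Kendall's tau distance: number of pairs ordered differently by p and q."""
    pos_q = {v: k for k, v in enumerate(q)}
    return sum(1 for a, b in combinations(p, 2) if pos_q[a] > pos_q[b])

# One push-to-the-top step and its Kendall's tau distance from the start.
start = (1, 2, 3, 4)
step = push_to_the_top(start, 2)       # -> (3, 1, 2, 4)
print(step, kendall_tau(start, step))  # distance 2
```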

    Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning

    Distributed representations have often been criticized as inappropriate for encoding data with complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this paper we consider the Context-Dependent Thinning procedures, which were developed for the representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity of the distributed associative memory in which the codevectors may be stored. In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Moreover, a bound codevector is not only similar to another one with similar component codevectors (as in other schemes), but it is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated just by the overlap of their codevectors, without retrieval of the component codevectors. This also allows an easy retrieval of the component codevectors. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to the localist and microfeature-based connectionist representations.
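
    As a rough illustration of the idea (a minimal sketch, not the authors' exact procedure; the permutation-based thinning and the number of permutations are assumptions), an additive-CDT-style binding can be caricatured as: superimpose the sparse components by elementwise OR, then keep only those 1s that survive conjunction with a few fixed random permutations of the superposition, so the result stays sparse yet overlaps its components.

```python
import numpy as np

rng = np.random.default_rng(0)
N, DENSITY = 10000, 0.01           # dimensionality and fraction of 1s per codevector

def random_codevector():
    """Sparse binary codevector with roughly DENSITY * N ones."""
    return (rng.random(N) < DENSITY).astype(np.uint8)

# Fixed random permutations shared by all bindings (illustrative choice).
PERMS = [rng.permutation(N) for _ in range(12)]

def cdt_bind(components, target_ones):
    """Additive-CDT-style binding sketch: OR-superimpose the components, then keep
    only 1s that also appear in permuted copies of the superposition, adding
    permutations until roughly target_ones ones survive."""
    z = np.bitwise_or.reduce(components)
    out = np.zeros(N, dtype=np.uint8)
    for p in PERMS:
        out |= z & z[p]                 # context-dependent subset of z's 1s
        if out.sum() >= target_ones:
            break
    return out

a, b, c = (random_codevector() for _ in range(3))
bound = cdt_bind([a, b, c], target_ones=int(DENSITY * N))
print(bound.sum(), (bound & a).sum())   # density stays low; overlap with a component is preserved
```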

    Ground-state coding in partially connected neural networks

    Patterns over {-1,0,1} define, by their outer products, partially connected neural networks, consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over {-1,1}, which agree in their common bits, define the subcodes stored in the subnetworks. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones, and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail.
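
    As background for the outer-product (Hebbian) construction the abstract builds on, here is a minimal sketch (a fully connected toy network with two mutually orthogonal stored words; the partially connected architecture of the paper is not reproduced) verifying that the stored words are fixed points of the sign dynamics.

```python
import numpy as np

# Two mutually orthogonal code words over {-1, 1}.
w1 = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
w2 = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
assert w1 @ w2 == 0

# Outer-product (Hebbian) weights with zero self-connections.
W = np.outer(w1, w1) + np.outer(w2, w2)
np.fill_diagonal(W, 0)

def update(state):
    """One synchronous sign update of the network."""
    return np.sign(W @ state).astype(int)

# Each stored word is a locally stable state: the update maps it to itself.
print(np.array_equal(update(w1), w1), np.array_equal(update(w2), w2))  # True True
```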

    On the Derivative Imbalance and Ambiguity of Functions

    In 2007, Carlet and Ding introduced two parameters, denoted by $Nb_F$ and $NB_F$, quantifying respectively the balancedness of general functions $F$ between finite Abelian groups and the (global) balancedness of their derivatives $D_a F(x)=F(x+a)-F(x)$, $a\in G\setminus\{0\}$ (providing an indicator of the nonlinearity of the functions). These authors studied the properties and cryptographic significance of these two measures. They provided for S-boxes inequalities relating the nonlinearity $\mathcal{NL}(F)$ to $NB_F$, and obtained in particular an upper bound on the nonlinearity which unifies the Sidelnikov-Chabaud-Vaudenay bound and the covering radius bound. At the Workshop WCC 2009 and in its postproceedings in 2011, a further study of these parameters was made; in particular, the first parameter was applied to the functions $F+L$ where $L$ is affine, providing more nonlinearity parameters. In 2010, motivated by the study of Costas arrays, two parameters called ambiguity and deficiency were introduced by Panario et al. for permutations over finite Abelian groups to measure the injectivity and surjectivity of the derivatives, respectively. These authors also studied some fundamental properties and cryptographic significance of these two measures. Further studies followed, but the second pair of parameters was never compared to the first. In the present paper, we observe that ambiguity is the same parameter as $NB_F$, up to additive and multiplicative constants (i.e. up to rescaling). We carry out the necessary work of comparing and unifying the results on $NB_F$ and on ambiguity obtained in the five papers devoted to these parameters. We generalize some known results to arbitrary Abelian groups and, more importantly, derive many new results on these parameters.
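
    As a concrete illustration of the objects involved, the sketch below tabulates the derivatives $D_a F(x)=F(x+a)-F(x)$ of a function on the cyclic group $\mathbb{Z}_n$ and reports a simple collision count of derivative values. The exact normalizations of $NB_F$ and of ambiguity differ across the cited papers, so this count is only meant to show what balancedness of the derivatives measures, not to reproduce either parameter exactly.

```python
from collections import Counter

def derivative_collisions(F, n):
    """For F: Z_n -> Z_n, count ordered pairs (x, y), x != y, with
    D_a F(x) = D_a F(y), summed over all nonzero shifts a.  A simple
    imbalance indicator, not the exact NB_F / ambiguity normalization."""
    total = 0
    for a in range(1, n):
        counts = Counter((F((x + a) % n) - F(x)) % n for x in range(n))
        total += sum(c * (c - 1) for c in counts.values())
    return total

n = 11
print(derivative_collisions(lambda x: (x * x) % n, n))  # x^2 on Z_11: every derivative is a bijection -> 0
print(derivative_collisions(lambda x: x, n))            # linear map: every derivative is constant -> maximal
```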

    Dynamical Encoding by Networks of Competing Neuron Groups: Winnerless Competition

    Following studies of olfactory processing in insects and fish, we investigate neural networks whose dynamics in phase space is represented by orbits near the heteroclinic connections between saddle regions (fixed points or limit cycles). These networks encode input information as trajectories along the heteroclinic connections. If there are N neurons in the network, the capacity is approximately e(N-1)!, i.e., much larger than that of most traditional network structures. We show that a small winnerless competition network composed of FitzHugh-Nagumo spiking neurons efficiently transforms input information into a spatiotemporal output.
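
    A common caricature of winnerless competition (used here only as an illustration; the paper itself works with FitzHugh-Nagumo spiking neurons) is a generalized Lotka-Volterra rate model with asymmetric inhibition, in which activity visits the saddle points one after another and the stimulus selects the switching sequence. The parameter values below are assumptions chosen to make the switching visible.

```python
import numpy as np

N = 3
# Asymmetric inhibition: rho[i][j] is how strongly unit j suppresses unit i.
# The cyclic asymmetry around the self-term 1.0 produces heteroclinic-like switching.
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
stimulus = np.array([0.02, 0.01, 0.015])   # weak input biasing the sequence

a = np.array([0.6, 0.2, 0.1])              # initial activities
dt, steps = 0.01, 6000
trace = []
for _ in range(steps):
    da = a * (1.0 - rho @ a) + stimulus    # generalized Lotka-Volterra dynamics
    a = np.maximum(a + dt * da, 1e-9)      # keep activities nonnegative
    trace.append(int(np.argmax(a)))

# The identity of the momentarily dominant unit switches cyclically over time.
changes = [trace[i] for i in range(1, steps) if trace[i] != trace[i - 1]]
print(changes[:10])
```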

    Aberration in qualitative multilevel designs

    The Generalized Word Length Pattern (GWLP) is an important and widely used tool for comparing fractional factorial designs. We consider qualitative factors, and we code their levels using the roots of unity. We write the GWLP of a fraction $\mathcal{F}$ using the polynomial indicator function, whose coefficients encode many properties of the fraction. We show that the coefficient of a simple or interaction term can be written using the counts of its levels. This apparently simple remark leads to major consequences, including a convolution formula for the counts. We also show that the mean aberration of a term over the permutations of its levels provides a connection with the variance of the level counts. Moreover, using mean aberrations for symmetric $s^m$ designs with $s$ prime, we derive a new formula for computing the GWLP of $\mathcal{F}$. It is computationally easy, does not use complex numbers, and also provides a clear way to interpret the GWLP. As case studies, we consider non-isomorphic orthogonal arrays that have the same GWLP. The different distributions of the mean aberrations suggest that they could be used as a further tool to discriminate between fractions.
    Comment: 16 pages, 1 figure
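
    To make the "counts of its levels" remark concrete, here is a small Python sketch (a hypothetical 3-level fraction, not taken from the paper, and with an un-normalized coefficient, so the overall constant may differ from the paper's convention) showing that the coefficient of a term, computed by coding levels with roots of unity, depends on the fraction only through the counts of points whose level combination for that term equals each value k.

```python
import cmath
from collections import Counter

s = 3                                  # number of levels, coded by the s-th roots of unity
omega = cmath.exp(2j * cmath.pi / s)

# A small (hypothetical) fraction of the 3^3 full factorial, levels in {0, 1, 2}.
fraction = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2), (1, 2, 0), (2, 0, 1)]

def term_coefficient(alpha):
    """Un-normalized indicator-function coefficient of the term with exponents alpha:
    sum over the fraction of the conjugate of prod_i omega**(alpha_i * x_i)."""
    return sum(omega ** (-(sum(a * x for a, x in zip(alpha, point)) % s))
               for point in fraction)

def term_counts(alpha):
    """Counts n_k of fraction points whose term level, alpha . x mod s, equals k."""
    return Counter(sum(a * x for a, x in zip(alpha, point)) % s for point in fraction)

alpha = (1, 2, 0)                      # an interaction term involving the first two factors
counts = term_counts(alpha)
via_counts = sum(n * omega ** (-k) for k, n in counts.items())
print(counts, abs(term_coefficient(alpha) - via_counts) < 1e-12)  # same value, two ways
```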

    Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks

    We show that numerical approximations of Kolmogorov complexity (K) applied to graph adjacency matrices capture some group-theoretic and topological properties of graphs and of empirical networks ranging from metabolic to social networks. That K and the size of the group of automorphisms of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We show that approximations of K characterise synthetic and natural networks by their generating mechanisms, assigning lower algorithmic randomness to complex network models (Watts-Strogatz and Barabasi-Albert networks) and high Kolmogorov complexity to (random) Erdos-Renyi graphs. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless compression approach to Kolmogorov complexity, and a normalised version of a Block Decomposition Method (BDM) measure, based on algorithmic probability theory.
    Comment: 15 two-column pages, 20 figures. Forthcoming in Physica A: Statistical Mechanics and its Applications
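
    The lossless-compression side of this is easy to demonstrate. The sketch below (an illustration under assumed parameters, not the authors' pipeline or the BDM measure) compresses the flattened adjacency matrices of a highly regular ring lattice and of an Erdos-Renyi random graph of matching density with zlib, the former being far more compressible.

```python
import zlib
import random

def ring_lattice_adjacency(n, k):
    """Adjacency matrix of a ring lattice where each node links to its k nearest neighbours per side."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            A[i][(i + d) % n] = A[(i + d) % n][i] = 1
    return A

def erdos_renyi_adjacency(n, p, seed=0):
    """Adjacency matrix of an Erdos-Renyi G(n, p) random graph."""
    rnd = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rnd.random() < p:
                A[i][j] = A[j][i] = 1
    return A

def compressed_size(A):
    """Crude Kolmogorov-complexity proxy: zlib-compressed length of the 0/1 matrix string."""
    bits = "".join(str(b) for row in A for b in row)
    return len(zlib.compress(bits.encode(), level=9))

n = 200
print("ring lattice:", compressed_size(ring_lattice_adjacency(n, 4)))
print("Erdos-Renyi :", compressed_size(erdos_renyi_adjacency(n, 0.04)))
```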