
    On the Ingleton-Violating Finite Groups

    Given n discrete random variables, their entropy vector is the (2^n - 1)-dimensional vector obtained from the joint entropies of all non-empty subsets of the random variables. It is well known that there is a close relation between such an entropy vector and a certain group-characterizable vector obtained from a finite group and n of its subgroups; indeed, roughly speaking, knowing the region of all such group-characterizable vectors is equivalent to knowing the region of all entropy vectors. This correspondence may be useful for characterizing the space of entropy vectors and for designing network codes. If one restricts attention to abelian groups, then not all entropy vectors can be obtained. This explains the fact, shown by Dougherty et al., that linear network codes cannot achieve capacity in general network coding problems (since linear network codes come from abelian groups). All abelian group-characterizable vectors, and hence all entropy vectors generated by linear network codes, satisfy a linear inequality called the Ingleton inequality. General entropy vectors, however, do not necessarily have this property. It is therefore of interest to identify groups that violate the Ingleton inequality. In this paper, we study the problem of finding nonabelian finite groups that yield group-characterizable vectors which violate the Ingleton inequality. Using a refined computer search, we find the symmetric group S_5 to be the smallest group that violates the Ingleton inequality. Careful study of the structure of this group, and its subgroups, reveals that it belongs to the Ingleton-violating family PGL(2,q) with a prime power q ≥ 5, i.e., the projective general linear group of 2×2 nonsingular matrices with entries in F_q. We further interpret this family of groups, and their subgroups, using the theory of group actions, and identify the subgroups as certain stabilizers. We also extend the construction to more general groups such as PGL(n,q) and GL(n,q). The families of groups identified here are therefore good candidates for constructing network codes more powerful than linear network codes, and we discuss some considerations for constructing such group network codes.
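    For reference, a minimal restatement of the two objects this abstract relies on, written in standard notation from the network-coding literature rather than taken verbatim from the paper: a finite group G with subgroups G_1, ..., G_n yields the group-characterizable vector whose entry for each non-empty subset A of {1, ..., n} is
        h_A = \log \frac{|G|}{\bigl|\bigcap_{i \in A} G_i\bigr|},
    and the Ingleton inequality on four random variables, written in joint entropies (h_{12} = H(X_1, X_2), and so on), reads
        h_{13} + h_{14} + h_{23} + h_{24} + h_{34} \;\ge\; h_{12} + h_{3} + h_{4} + h_{134} + h_{234}.
    A group is Ingleton-violating when some choice of four of its subgroups makes the corresponding group-characterizable vector violate this inequality.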

    Scalar Linear Network Coding for Networks with Two Sources

    Determining the capacity of networks has been a long-standing problem of interest in the literature. Although for multi-source multi-sink networks it is known that network coding is advantageous over traditional routing, finding the best coding strategy is not trivial in general. Among the different classes of codes that could potentially be used in a network, linear codes are of particular interest due to their simplicity. Although linear codes are provably sub-optimal in general, in some cases, such as the multicast scenario, they achieve the cut-set bound. Since determining the capacity of a network is closely related to characterizing the entropy region of all its random variables, finding the best linear solution for a network amounts to finding the region of all linearly representable entropy vectors of that network. With this approach, we study scalar linear solutions for arbitrary network problems with two sources. We explicitly calculate this region for a small number of variables and suggest a method for larger networks, which we illustrate by finding the best scalar linear solution to a storage problem of practical interest.
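    As a concrete illustration of "linearly representable entropy vectors" (a minimal sketch, not code from the paper; the toy network, the field GF(2), and all names below are assumptions for the example): if the i-th variable of a linear code is X_i = M_i w for a message vector w uniform over GF(2)^k, then H(X_A) equals the GF(2) rank of the stacked matrices {M_i : i in A}, measured in bits. Enumerating such rank vectors is one brute-force way to map out the linearly representable region for a small number of variables.

        from itertools import combinations

        def gf2_rank(rows):
            """Rank over GF(2); each row is a Python int used as a bitmask."""
            basis = []
            for row in rows:
                cur = row
                for b in basis:
                    cur = min(cur, cur ^ b)  # reduce by previously found pivots
                if cur:
                    basis.append(cur)
            return len(basis)

        def entropy_vector(matrices):
            """Entropy vector (in bits) of a linear code over GF(2):
            matrices[i] lists the rows of M_i as bitmasks, X_i = M_i w,
            and H(X_A) is the rank of the rows of all M_i with i in A."""
            n = len(matrices)
            return {A: gf2_rank([r for i in A for r in matrices[i]])
                    for s in range(1, n + 1) for A in combinations(range(n), s)}

        # Hypothetical toy example: two source bits w1, w2 and a coded edge w1 + w2.
        M = [[0b10], [0b01], [0b11]]
        print(entropy_vector(M))
        # {(0,): 1, (1,): 1, (2,): 1, (0, 1): 2, (0, 2): 2, (1, 2): 2, (0, 1, 2): 2}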

    Network vector quantization

    We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, we present here a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
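    The classical building block that such a network formulation generalizes is the Lloyd iteration, which alternates an optimal-encoder step (nearest-codeword partition) and an optimal-decoder step (centroid update). Below is a minimal single-source, single-decoder sketch for the mean-squared-error case; it is illustrative only and is not the paper's algorithm, whose contribution is extending this alternation to encoders and decoders placed throughout a network.

        import numpy as np

        def lloyd_vq(train, codebook_size, iters=50, seed=0):
            """Plain Lloyd iteration for a mean-squared-error vector quantizer."""
            rng = np.random.default_rng(seed)
            # Initialize the codebook with randomly chosen training vectors.
            codebook = train[rng.choice(len(train), codebook_size, replace=False)].astype(float)
            for _ in range(iters):
                # Encoder step: map each training vector to its nearest codeword.
                dist = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                assign = dist.argmin(axis=1)
                # Decoder step: replace each codeword by the centroid of its cell.
                for j in range(codebook_size):
                    cell = train[assign == j]
                    if len(cell):
                        codebook[j] = cell.mean(axis=0)
            return codebook, assign

        # Example: an 8-codeword quantizer for 2-D Gaussian data.
        data = np.random.default_rng(1).normal(size=(1000, 2))
        codebook, labels = lloyd_vq(data, 8)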

    SUBIC: A supervised, structured binary code for image search

    For large-scale visual search, highly compressed yet meaningful representations of images are essential. Structured vector quantizers based on product quantization and its variants are usually employed to achieve such compression while minimizing the loss of accuracy. Yet, unlike binary hashing schemes, these unsupervised methods have not yet benefited from the supervision, end-to-end learning and novel architectures ushered in by the deep learning revolution. We hence propose a novel method to make deep convolutional neural networks produce supervised, compact, structured binary codes for visual search. Our method makes use of a novel block-softmax non-linearity and batch-based entropy losses that together induce structure in the learned encodings. We show that our method outperforms state-of-the-art compact representations based on deep hashing or structured quantization in single and cross-domain category retrieval, instance retrieval and classification. We make our code and models publicly available online.
    Comment: Accepted at ICCV 2017 (Spotlight).
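    A minimal NumPy sketch of the block-structured code described above (the block count and sizes are illustrative assumptions, not the authors' released implementation): at training time a softmax is applied independently within each block of the feature vector, and at inference each block is replaced by a one-hot indicator of its largest entry, giving a binary code with exactly one active bit per block. The batch-based entropy losses mentioned in the abstract would then push these relaxed block outputs toward confident, uniformly used codewords.

        import numpy as np

        def block_softmax(features, num_blocks):
            """Train-time relaxation: softmax applied independently per block."""
            blocks = features.reshape(features.shape[0], num_blocks, -1)
            blocks = blocks - blocks.max(axis=-1, keepdims=True)  # numerical stability
            expb = np.exp(blocks)
            return (expb / expb.sum(axis=-1, keepdims=True)).reshape(features.shape)

        def block_binarize(features, num_blocks):
            """Inference-time code: one-hot argmax per block."""
            blocks = features.reshape(features.shape[0], num_blocks, -1)
            onehot = np.zeros_like(blocks)
            np.put_along_axis(onehot, blocks.argmax(axis=-1)[..., None], 1.0, axis=-1)
            return onehot.reshape(features.shape[0], -1)

        # Example: a batch of 4 feature vectors, 8 blocks of 16 entries -> 128-bit codes.
        feats = np.random.default_rng(0).normal(size=(4, 128))
        codes = block_binarize(feats, num_blocks=8)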

    The Self-Organization of Meaning and the Reflexive Communication of Information

    Following a suggestion of Warren Weaver, we extend the Shannon model of communication piecemeal into a complex systems model in which communication is differentiated both vertically and horizontally. This model enables us to bridge the divide between Niklas Luhmann's theory of the self-organization of meaning in communications and empirical research using information theory. First, we distinguish between communication relations and correlations among patterns of relations. The correlations span a vector space in which relations are positioned and can be provided with meaning. Second, positions provide reflexive perspectives. Whereas the different meanings are integrated locally, each instantiation opens global perspectives ("horizons of meaning") along eigenvectors of the communication matrix. These next-order codifications of meaning can be expected to generate redundancies when interacting in instantiations. Increases in redundancy indicate new options and can be measured as a local reduction of prevailing uncertainty (in bits). The systemic generation of new options can be considered a hallmark of the knowledge-based economy.
    Comment: accepted for publication in Social Science Information, March 21, 201