
    Embedding dimension gaps in sparse codes

    We study the open and closed embedding dimensions of a convex 3-sparse code $\mathcal{FP}$, which records the intersection pattern of lines in the Fano plane. We show that the closed embedding dimension of $\mathcal{FP}$ is three, and the open embedding dimension is between four and six, providing the first example of a 3-sparse code with closed embedding dimension three and differing open and closed embedding dimensions. We also investigate codes whose canonical form is quadratic, i.e. "degree two" codes. We show that such codes are realizable by axis-parallel boxes, generalizing a recent result of Zhou on inductively pierced codes. We pose several open questions regarding sparse and low-degree codes. In particular, we conjecture that the open embedding dimension of certain 3-sparse codes derived from Steiner triple systems grows to infinity. Comment: 16 pages, 8 figures.
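    To make the underlying object concrete: the Fano plane has seven points and seven lines, with three points on each line, three lines through each point, and any two lines meeting in exactly one point. Below is a minimal Python sketch of this incidence structure under one standard labeling (the labeling, and the reading of point-line incidences as codewords, are our own illustration, not taken from the paper):

        from itertools import combinations

        # One standard labeling of the Fano plane: points 1..7, seven 3-point lines.
        LINES = [
            {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
            {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
        ]

        # In the intersection pattern of the seven lines, each point of the plane
        # contributes the set of (indices of) lines passing through it.
        point_codewords = {
            p: frozenset(i for i, line in enumerate(LINES, start=1) if p in line)
            for p in range(1, 8)
        }
        for p, cw in sorted(point_codewords.items()):
            print(f"point {p}: lines {sorted(cw)}")

        # Sanity checks on the incidence structure:
        assert all(len(cw) == 3 for cw in point_codewords.values())
        assert all(len(a & b) == 1 for a, b in combinations(LINES, 2))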

    Nondegenerate Neural Codes and Obstructions to Closed-Convexity

    Previous work on convexity of neural codes has produced codes that are open-convex but not closed-convex, or vice versa. However, why a code is one but not the other, and how to detect such discrepancies, are open questions. We tackle these questions in two ways. First, we investigate the concept of degeneracy introduced by Cruz et al., and extend their results to show that nondegeneracy precisely captures the situation in which taking closures of open realizations, or interiors of closed realizations, yields another realization of the code. Second, we give the first general criteria for precluding a code from being closed-convex (without ruling out open-convexity), unifying ad hoc geometric arguments in prior works. One criterion is built on a phenomenon we call a rigid structure, while the other can be stated algebraically, in terms of the neural ideal of the code. These results complement existing criteria having the opposite purpose: precluding open-convexity but not closed-convexity. Finally, we show that a family of codes shown by Jeffs to be not open-convex is in fact closed-convex and realizable in dimension two. Comment: 32 pages, 12 figures. Corrected Examples 4.32 and 5.8, added two figures to aid in understanding proofs, improved exposition throughout, and corrected typos.
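    A toy illustration of the closure operation at play (our own example, not drawn from the paper): with open intervals $U_1 = (0,2)$ and $U_2 = (1,3)$ in $\mathbb{R}$, the realized code is $\{\emptyset, \{1\}, \{2\}, \{1,2\}\}$, and replacing each set by its closure,

        $\overline{U_1} = [0,2], \qquad \overline{U_2} = [1,3],$

    realizes the same code with closed convex sets. Nondegeneracy is, roughly, the condition guaranteeing that passing to closures (or to interiors, in the closed case) never changes the code in this way.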

    Oriented Matroids and Combinatorial Neural Codes

    A combinatorial neural code $\mathscr{C} \subseteq 2^{[n]}$ is convex if it arises as the intersection pattern of convex open subsets of $\mathbb{R}^d$. We relate the emerging theory of convex neural codes to the established theory of oriented matroids, both categorically and with respect to geometry and computational complexity. On the categorical side, we show that the map taking an acyclic oriented matroid to the code of positive parts of its topes is a faithful functor. We adapt the oriented matroid ideal introduced by Novik, Postnikov, and Sturmfels into a functor from the category of oriented matroids to the category of rings; then, we show that the resulting ring maps naturally to the neural ring of the matroid's neural code. For geometry and computational complexity, we show that a code has a realization with convex polytopes if and only if it lies below the code of a representable oriented matroid in the partial order of codes introduced by Jeffs. We show that previously published examples of non-convex codes do not lie below any oriented matroids, and we construct examples of non-convex codes lying below non-representable oriented matroids. By way of this construction, we can apply Mnëv-Sturmfels universality to show that deciding whether a combinatorial code is convex is NP-hard.
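    As a concrete reading of "intersection pattern" (an illustrative sketch under conventions of our own choosing, not code from the paper), the code of an arrangement of convex open sets can be approximated by sampling: each sample point contributes the codeword of the sets containing it.

        import numpy as np

        # Three open unit discs in R^2, given as (center, radius); the centers are
        # an arbitrary choice made for illustration.
        discs = [((0.0, 0.0), 1.0), ((1.0, 0.0), 1.0), ((0.5, 0.8), 1.0)]

        def codeword(x, y):
            """Indices of the open discs containing the point (x, y)."""
            return frozenset(
                i for i, ((cx, cy), r) in enumerate(discs, start=1)
                if (x - cx) ** 2 + (y - cy) ** 2 < r ** 2
            )

        # Sample a grid; the set of observed codewords approximates the code of the
        # arrangement (a finer grid recovers smaller atoms of the arrangement).
        xs = np.linspace(-1.5, 2.5, 400)
        ys = np.linspace(-1.5, 2.5, 400)
        code = {codeword(x, y) for x in xs for y in ys}
        print(sorted(sorted(cw) for cw in code))
        # e.g. [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]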

    Algebraic properties of neural codes.

    Neural rings and ideals, as algebraic tools for analyzing the intrinsic structure of neural codes, were introduced by C. Curto, V. Itskov, A. Veliz-Cuba, and N. Youngs in 2013. Since then they have been investigated in several papers, including the 2017 paper by S. Güntürkün, J. Jeffries, and J. Sun, in which the notion of polarization of neural ideals was introduced. We extend their ideas by introducing the polarization of motifs and neural codes, and show that these notions have properties which allow one to study the intrinsic structure of neural codes of length $n$ via squarefree monomial ideals in $2n$ variables. As a result, we can obtain minimal prime ideals in $2n$ variables which do not come from the polarization of any motif of length $n$. To relate these non-polar primes back to the original neural code, we introduce notions for partial codes, including partial motifs and inactive neurons. Additionally, we reformulate an existing theorem and provide a shorter, simpler proof. We also give intrinsic characterizations of neural rings and the homomorphisms between them, and we characterize monomial code maps as compositions of basic monomial code maps. This work builds on two theorems introduced by C. Curto and N. Youngs in 2015, and on the notions of a trunk and a monomial map between two neural codes, introduced by R. A. Jeffs in 2018.
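    For orientation, polarization here replaces each "negative" factor of a pseudomonomial by a fresh variable, producing a squarefree monomial; schematically (our notation, following the spirit of the Güntürkün-Jeffries-Sun construction rather than quoting it):

        $\prod_{i \in \sigma} x_i \prod_{j \in \tau} (1 - x_j) \;\longmapsto\; \prod_{i \in \sigma} x_i \prod_{j \in \tau} y_j,$

    so a neural ideal in the $n$ variables $x_1, \dots, x_n$ is traded for a squarefree monomial ideal in the $2n$ variables $x_1, \dots, x_n, y_1, \dots, y_n$.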

    New Tools for Classifying Convex Neural Codes: The Factor Complex and the Wheel

    The neural code has prompted many questions in pure mathematics concerning how much topological data can be stored combinatorially. The question of whether one can determine the convexity of a neural code is particularly prominent. In this dissertation, we provide new tools toward answering this question. First, we introduce a related object called the factor complex, and show how it encodes a property of the neural code called max-intersection-completeness. Second, we introduce a new type of nonconvex phenomenon called a wheel, and show how to read it combinatorially
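    Max-intersection-completeness, the property the factor complex is said to encode, is easy to state directly: every intersection of maximal codewords must itself be a codeword (conventions differ on whether empty intersections count). A minimal checker, as an illustrative sketch rather than tooling from the dissertation:

        from itertools import combinations

        def is_max_intersection_complete(code):
            """Check that every intersection of maximal codewords lies in the code."""
            code = {frozenset(c) for c in code}
            maximal = [c for c in code if not any(c < d for d in code)]
            for k in range(2, len(maximal) + 1):
                for subset in combinations(maximal, k):
                    if frozenset.intersection(*subset) not in code:
                        return False
            return True

        # {1,2} and {2,3} are maximal but their intersection {2} is missing:
        print(is_max_intersection_complete([{1, 2}, {2, 3}, set()]))       # False
        # Adding {2} repairs it:
        print(is_max_intersection_complete([{1, 2}, {2, 3}, {2}, set()]))  # True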