
    Wheels: A New Criterion for Non-convexity of Neural Codes

    We introduce new geometric and combinatorial criteria that preclude a neural code from being convex, and use them to tackle the classification problem for codes on six neurons. Along the way, we give the first example of a code that is non-convex, has no local obstructions, and has a simplicial complex of dimension two. We also characterize convexity for neural codes whose simplicial complex is pure of low or high dimension. Comment: 25 pages, 3 figures, 2 tables.
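
    To make the objects concrete: a neural code can be stored as the set of supports of its codewords, and its simplicial complex Δ(C) is the downward closure of those supports. The Python sketch below (helper names are illustrative; the definitions are standard) computes Δ(C) and its dimension for a toy code.

    from itertools import combinations

    def simplicial_complex(code):
        """Delta(C): all subsets of the supports of the codewords in C."""
        faces = set()
        for word in code:
            support = sorted(word)
            for r in range(len(support) + 1):
                faces.update(frozenset(s) for s in combinations(support, r))
        return faces

    def dimension(faces):
        """dim(Delta) = size of the largest face minus one."""
        return max(len(f) for f in faces) - 1

    # Toy code on three neurons, each codeword written as its support.
    code = [{1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}]
    print(dimension(simplicial_complex(code)))  # 2: the facet {1,2,3}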

    Nondegenerate Neural Codes and Obstructions to Closed-Convexity

    Previous work on convexity of neural codes has produced codes that are open-convex but not closed-convex, or vice versa. However, why a code is one but not the other, and how to detect such discrepancies, are open questions. We tackle these questions in two ways. First, we investigate the concept of degeneracy introduced by Cruz et al., and extend their results to show that nondegeneracy precisely captures the situation when taking closures or interiors of open or closed realizations, respectively, yields another realization of the code. Second, we give the first general criteria for precluding a code from being closed-convex (without ruling out open-convexity), unifying ad hoc geometric arguments in prior works. One criterion is built on a phenomenon we call a rigid structure, while the other can be stated algebraically, in terms of the neural ideal of the code. These results complement existing criteria having the opposite purpose: precluding open-convexity but not closed-convexity. Finally, we show that a family of codes shown by Jeffs to be not open-convex is in fact closed-convex and realizable in dimension two. Comment: 32 pages, 12 figures. Corrected Examples 4.32 and 5.8, added two figures to aid in understanding proofs, improved exposition throughout, and corrected typos.
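
    The open/closed discrepancy is already visible in one dimension: two open intervals can be disjoint while their closures overlap, so taking closures changes the code of the realization. The sketch below (an illustration of the phenomenon, not the paper's machinery) computes the code of an interval arrangement exactly by probing every endpoint and the midpoints between consecutive endpoints.

    def code_of_intervals(intervals, closed):
        """Codewords realized by intervals on the line; membership is
        a < x < b for open intervals and a <= x <= b for closed ones."""
        endpoints = sorted({e for a, b in intervals for e in (a, b)})
        probes = list(endpoints)
        probes += [(p + q) / 2 for p, q in zip(endpoints, endpoints[1:])]
        probes += [endpoints[0] - 1, endpoints[-1] + 1]

        def member(x, a, b):
            return a <= x <= b if closed else a < x < b

        return {frozenset(i for i, (a, b) in enumerate(intervals)
                          if member(x, a, b))
                for x in probes}

    # U_0 = (0,1) and U_1 = (1,2): disjoint when open, touching at x = 1
    # when closed, so the closed realization gains the codeword {0, 1}.
    fields = [(0, 1), (1, 2)]
    print(code_of_intervals(fields, closed=False))  # lacks {0, 1}
    print(code_of_intervals(fields, closed=True))   # contains {0, 1}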

    Combinatorial geometry of neural codes, neural data analysis, and neural networks

    This dissertation explores applications of discrete geometry in mathematical neuroscience. We begin with convex neural codes, which model the activity of hippocampal place cells and other neurons with convex receptive fields. In Chapter 4, we introduce order-forcing, a tool for constraining convex realizations of codes, and use it to construct new examples of non-convex codes with no local obstructions. In Chapter 5, we relate oriented matroids to convex neural codes, showing that a code has a realization with convex polytopes if and only if it is the image of a representable oriented matroid under a neural code morphism. We also show that determining whether a code is convex is at least as difficult as determining whether an oriented matroid is representable, implying that the problem of determining whether a code is convex is NP-hard. Next, we turn to the underlying rank of a matrix. This problem is motivated by the task of determining the dimensionality of (neural) data that has been corrupted by an unknown monotone transformation. In Chapter 6, we introduce two tools for computing underlying rank, the minimal nodes and the Radon rank. We apply these to analyze calcium imaging data from a larval zebrafish. In Chapter 7, we explore the underlying rank in more detail, establish connections to oriented matroid theory, and show that computing underlying rank is also NP-hard. Finally, we study the dynamics of threshold-linear networks (TLNs), a simple model of the activity of neural circuits. In Chapter 9, we describe the nullcline arrangement of a threshold-linear network, and show that a subset of its chambers forms an attracting set. In Chapter 10, we focus on combinatorial threshold-linear networks (CTLNs), which are TLNs defined from a directed graph. We prove that if the graph of a CTLN is a directed acyclic graph, then all trajectories of the CTLN approach a fixed point. Comment: 193 pages, 69 figures.
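
    A small simulation helps fix the TLN model discussed in the final chapters: the state evolves by dx/dt = -x + [Wx + b]_+, and in the CTLN case W is built from a directed graph. The sketch below uses commonly cited CTLN parameter conventions (the values of eps, delta, and b are assumptions here, not taken from the text) with forward-Euler integration.

    import numpy as np

    def ctln_weights(edges, n, eps=0.25, delta=0.5):
        """CTLN weight matrix: 0 on the diagonal, -1 + eps for an edge
        j -> i, and -1 - delta for a non-edge (a common convention)."""
        W = np.full((n, n), -1.0 - delta)
        np.fill_diagonal(W, 0.0)
        for j, i in edges:  # edge j -> i excites neuron i
            W[i, j] = -1.0 + eps
        return W

    def simulate_tln(W, b, x0, dt=0.01, steps=5000):
        """Forward-Euler integration of dx/dt = -x + [Wx + b]_+."""
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x += dt * (-x + np.maximum(W @ x + b, 0.0))
        return x

    # A directed acyclic graph 0 -> 1 -> 2: per Chapter 10's result,
    # the trajectory should settle at a fixed point.
    W = ctln_weights(edges=[(0, 1), (1, 2)], n=3)
    print(np.round(simulate_tln(W, b=np.ones(3), x0=[0.1, 0.2, 0.3]), 4))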

    Oriented Matroids and Combinatorial Neural Codes

    A combinatorial neural code $\mathscr{C} \subseteq 2^{[n]}$ is convex if it arises as the intersection pattern of convex open subsets of $\mathbb{R}^d$. We relate the emerging theory of convex neural codes to the established theory of oriented matroids, both categorically and with respect to geometry and computational complexity. On the categorical side, we show that the map taking an acyclic oriented matroid to the code of positive parts of its topes is a faithful functor. We adapt the oriented matroid ideal introduced by Novik, Postnikov, and Sturmfels into a functor from the category of oriented matroids to the category of rings; then, we show that the resulting ring maps naturally to the neural ring of the matroid's neural code. For geometry and computational complexity, we show that a code has a realization with convex polytopes if and only if it lies below the code of a representable oriented matroid in the partial order of codes introduced by Jeffs. We show that previously published examples of non-convex codes do not lie below any oriented matroids, and we construct examples of non-convex codes lying below non-representable oriented matroids. By way of this construction, we can apply Mnëv-Sturmfels universality to show that deciding whether a combinatorial code is convex is NP-hard.
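
    In the representable case, the "code of positive parts of topes" has a hands-on description: realize the oriented matroid by vectors v_1, ..., v_n and, for generic points x, record the set {i : <v_i, x> > 0}. The sampling sketch below illustrates only this realizable case (it may miss small topes, so treat its output as an approximation of the true code).

    import numpy as np

    def tope_positive_parts(vectors, samples=20000, seed=0):
        """Positive parts {i : <v_i, x> > 0} over generic sampled x,
        for the oriented matroid realized by the given vectors."""
        rng = np.random.default_rng(seed)
        V = np.asarray(vectors, dtype=float)
        code = set()
        for _ in range(samples):
            x = rng.standard_normal(V.shape[1])
            signs = V @ x
            if np.all(signs != 0):  # generic point: x lies in a tope
                code.add(frozenset(np.flatnonzero(signs > 0).tolist()))
        return code

    # Three vectors in the plane give six topes; their positive parts
    # form the code {}, {0}, {1}, {0,2}, {1,2}, {0,1,2}.
    for word in sorted(tope_positive_parts([(1, 0), (0, 1), (1, 1)]), key=sorted):
        print(set(word))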

    Neural ring homomorphism preserves mandatory sets required for open convexity

    Curto et al. (SIAM J. Appl. Algebra Geom., 1(1): 222–238, 2017) showed that a neural code with an open convex realization has no local obstructions. Further, a neural code $\mathcal{C}$ has no local obstructions if and only if it contains the set of mandatory codewords, $\mathcal{C}_{\min}(\Delta)$, which depends only on the simplicial complex $\Delta = \Delta(\mathcal{C})$. Thus if $\mathcal{C} \not\supseteq \mathcal{C}_{\min}(\Delta)$, then $\mathcal{C}$ cannot be open convex. However, the problem of constructing $\mathcal{C}_{\min}(\Delta)$ for an arbitrary code $\mathcal{C}$ is undecidable. There is another way to capture local obstructions, via the homological mandatory set $\mathcal{M}_H(\Delta)$. The significance of $\mathcal{M}_H(\Delta)$ for a given code $\mathcal{C}$ is that $\mathcal{M}_H(\Delta) \subseteq \mathcal{C}_{\min}(\Delta)$, and so $\mathcal{C}$ has local obstructions whenever $\mathcal{C} \not\supseteq \mathcal{M}_H(\Delta)$. In this paper we study the effect on the sets $\mathcal{C}_{\min}(\Delta)$ and $\mathcal{M}_H(\Delta)$ of various surjective elementary code maps. Further, we study the relationship between the Stanley-Reisner rings of the simplicial complexes associated with neural codes related by elementary code maps. Moreover, using this relationship, we give an alternative proof that $\mathcal{M}_H(\Delta)$ is preserved under elementary code maps.
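
    The Stanley-Reisner ring mentioned above is computable directly from a code: Δ(C) is the downward closure of the codeword supports, and the Stanley-Reisner ideal of Δ(C) is generated by the squarefree monomials indexed by its minimal non-faces. A minimal sketch (helper names are illustrative; the definitions are standard):

    from itertools import combinations

    def minimal_nonfaces(code, n):
        """Minimal non-faces of Delta(C), which index the monomial
        generators of the Stanley-Reisner ideal of Delta(C)."""
        supports = [frozenset(c) for c in code]

        def is_face(sigma):
            return any(sigma <= s for s in supports)

        result = []
        for size in range(1, n + 1):
            for sigma in map(frozenset, combinations(range(n), size)):
                if not is_face(sigma) and all(is_face(sigma - {v}) for v in sigma):
                    result.append(sigma)
        return result

    # No support contains {0, 2}, so x_0 * x_2 generates the ideal.
    code = [{0}, {1}, {2}, {0, 1}, {1, 2}]
    print([set(s) for s in minimal_nonfaces(code, n=3)])  # [{0, 2}]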