19 research outputs found

    On the stability of sets of even type


    Small weight codewords of projective geometric codes II

    The $p$-ary linear code $\mathcal{C}_k(n,q)$ is defined as the row space of the incidence matrix $A$ of $k$-spaces and points of $\text{PG}(n,q)$. It is known that if $q$ is square, there exists a codeword of weight $q^k\sqrt{q} + \mathcal{O}\left(q^{k-1}\right)$ that cannot be written as a linear combination of at most $\sqrt{q}$ rows of $A$. Over the past few decades, much effort has gone into proving that every codeword of smaller weight is such a linear combination. We show that if $q \geqslant 32$ is a composite prime power, every codeword of $\mathcal{C}_k(n,q)$ of weight up to $\mathcal{O}\left(q^k\sqrt{q}\right)$ is a linear combination of at most $\sqrt{q}$ rows of $A$. We also generalise this result to the codes $\mathcal{C}_{j,k}(n,q)$, defined as the $p$-ary row span of the incidence matrix of $k$-spaces and $j$-spaces, $j < k$.
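    As a toy illustration of these objects (our sketch, not code from the paper), the Python snippet below builds the point–line incidence matrix of $\text{PG}(2,q)$ for a small prime $q$ (the case $k = 1$, $n = 2$): every row of $A$ is a codeword of weight $q + 1$, and the difference of two rows is a codeword of weight $2q$, since the common point of the two lines cancels.

    ```python
    from itertools import product

    q = 5  # a small prime for illustration; the paper concerns composite q >= 32

    def proj_points(q):
        """Normalised representatives of the points of PG(2, q), q prime."""
        pts = []
        for v in product(range(q), repeat=3):
            if v == (0, 0, 0):
                continue
            i = next(j for j, x in enumerate(v) if x)   # first nonzero coordinate
            inv = pow(v[i], -1, q)                      # its inverse mod q
            w = tuple(x * inv % q for x in v)           # scale that coordinate to 1
            if w not in pts:
                pts.append(w)
        return pts

    pts = proj_points(q)    # q^2 + q + 1 = 31 points
    duals = proj_points(q)  # by duality, the same representatives index the lines

    # Incidence matrix A: A[l][p] = 1 iff the point p lies on the line l.
    A = [[int(sum(a * b for a, b in zip(l, p)) % q == 0) for p in pts]
         for l in duals]

    # Every row of A is a codeword of weight q + 1 (a line has q + 1 points).
    assert all(sum(row) == q + 1 for row in A)

    # The difference of two rows has weight 2q: the common point of the two
    # lines cancels, and the remaining 2q points keep a nonzero coefficient.
    diff = [(A[0][j] - A[1][j]) % q for j in range(len(pts))]
    print(sum(1 for c in diff if c))  # 10 = 2q
    ```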

    Stability of k mod p multisets and small weight codewords of the code generated by the lines of PG(2, q)

    In this paper, we prove a stability result on $k \bmod p$ multisets of points in $\text{PG}(2,q)$, $q = p^h$. The particular case $k = 0$ is used to describe small weight codewords of the code generated by the lines of $\text{PG}(2,q)$ as linear combinations of few lines. Earlier results proved this for codewords of weight less than $2.5q$, while our result is valid up to weight $cq\sqrt{q}$. It is sharp when $27 < q$ is a square and $h \geqslant 4$. When $q$ is a prime, De Boeck and Vandendriessche constructed a codeword of weight $3p - 3$ that is not a linear combination of three lines. We characterise their example.
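    For readers outside the area, the following is the standard meaning of the term (our paraphrase; the paper's exact definition may differ in details):

    ```latex
    % Standard definition of a k mod p multiset of PG(2,q), q = p^h
    % (our paraphrase, not quoted from the paper):
    \[
      M \text{ is a } k \bmod p \text{ multiset}
      \iff
      |M \cap \ell| \equiv k \pmod{p}
      \text{ for every line } \ell \text{ of } \mathrm{PG}(2,q),
    \]
    % where the intersection sizes are counted with multiplicity.
    ```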

    Large blocking sets in PG(2, q^2)


    Intertwined results on linear codes and Galois geometries


    Q(√−3)-Integral Points on a Mordell Curve

    We use an extension of quadratic Chabauty to number fields, recently developed by the author with Balakrishnan, Besser and Müller, combined with a sieving technique, to determine the integral points over $\mathbb{Q}(\sqrt{-3})$ on the Mordell curve $y^2 = x^3 - 4$.
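    For orientation (our illustration, not the paper's method): over $\mathbb{Z}$, candidate integral points on this curve can be guessed by brute force, which turns up $(2, \pm 2)$ and $(5, \pm 11)$; proving such a list complete, and extending it to $\mathbb{Q}(\sqrt{-3})$-integral points, is exactly what the quadratic Chabauty machinery is for.

    ```python
    # Naive search for Z-points on y^2 = x^3 - 4 (illustration only; a finite
    # search can suggest candidates but cannot prove completeness, let alone
    # handle Q(sqrt(-3))-integral points).
    from math import isqrt

    for x in range(2, 10_000):       # x^3 - 4 < 0 for x < 2
        t = x**3 - 4
        y = isqrt(t)
        if y * y == t:
            print((x, y), (x, -y))   # finds (2, ±2) and (5, ±11)
    ```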

    Intersection problems in finite geometries


    Neural function approximation on graphs: shape modelling, graph discrimination & compression

    Graphs serve as a versatile mathematical abstraction of real-world phenomena in numerous scientific disciplines. This thesis belongs to the area of Geometric Deep Learning, a family of learning paradigms that capitalise on the increasing volume of non-Euclidean data to solve real-world tasks in a data-driven manner. In particular, we focus on graph function approximation using neural networks, which lies at the heart of many relevant methods.

    In the first part of the thesis, we contribute to the understanding and design of Graph Neural Networks (GNNs). Initially, we investigate the problem of learning on signals supported on a fixed graph. We show that treating graph signals as general graph spaces is restrictive and that conventional GNNs have limited expressivity. Instead, we expose a more enlightening perspective by drawing parallels between graph signals and signals on Euclidean grids, such as images and audio. Accordingly, we propose a permutation-sensitive GNN based on an operator analogous to shifts in grids and instantiate it on 3D meshes for shape modelling (Spiral Convolutions). Next, we focus on learning on general graph spaces, in particular on functions that are invariant to graph isomorphism. We identify a fundamental trade-off between invariance, expressivity and computational complexity, which we address with a symmetry-breaking mechanism based on substructure encodings (Graph Substructure Networks). Substructures are shown to be a powerful tool that provably improves expressivity while controlling computational complexity, and a useful inductive bias in network science and chemistry.

    In the second part of the thesis, we discuss the problem of graph compression, where we analyse the information-theoretic principles and the connections with graph generative models. We show that another inevitable trade-off surfaces, now between computational complexity and compression quality, due to graph isomorphism. We propose a substructure-based dictionary coder, Partition and Code (PnC), with theoretical guarantees, which can be adapted to different graph distributions by estimating its parameters from observations. Additionally, contrary to the majority of neural compressors, PnC is parameter- and sample-efficient and is therefore of wide practical relevance. Finally, within this framework, substructures are further illustrated as a decisive archetype for learning problems on graph spaces.
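    To make the substructure-encoding idea concrete (a minimal sketch under our own simplifications, not the authors' implementation): a Graph Substructure Network augments each node's input features with counts of prescribed substructures, here triangles, before standard message passing.

    ```python
    # Minimal sketch of substructure encodings in the spirit of Graph
    # Substructure Networks: append each node's triangle count, a quantity
    # standard anonymous message passing cannot compute in general, to its
    # input features. Uses networkx; a real GSN would feed these features
    # into a GNN.
    import networkx as nx

    G = nx.karate_club_graph()
    tri = nx.triangles(G)  # node -> number of triangles through that node

    features = {v: [G.degree(v), tri[v]] for v in G.nodes}
    print(features[0])     # [degree, triangle count] of node 0
    ```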