    The bottleneck degree of algebraic varieties

    A bottleneck of a smooth algebraic variety $X \subset \mathbb{C}^n$ is a pair of distinct points $(x,y) \in X$ such that the Euclidean normal spaces at $x$ and $y$ contain the line spanned by $x$ and $y$. The narrowness of bottlenecks is a fundamental complexity measure in the algebraic geometry of data. In this paper we study the number of bottlenecks of affine and projective varieties, which we call the bottleneck degree. The bottleneck degree is a measure of the complexity of computing all bottlenecks of an algebraic variety, using for example numerical homotopy methods. We show that the bottleneck degree is a function of classical invariants such as Chern classes and polar classes. We give the formula explicitly in low dimension and provide an algorithm to compute it in the general case.
    Comment: Major revision. New introduction. Added some new illustrative lemmas and figures. Added pseudocode for the algorithm to compute bottleneck degree. Fixed some typos.
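
    To make the definition concrete, here is a minimal sketch (an illustration, not the paper's algorithm): for a plane curve $f = 0$ the Euclidean normal space at a point is spanned by the gradient, so a bottleneck is a pair of distinct curve points whose chord is parallel to the gradient at both endpoints. The ellipse, the variable names, and the saturation trick below are assumptions made for this example only.

```python
# Minimal sketch: bottleneck equations of a plane curve, solved with SymPy.
# NOT the paper's algorithm; the ellipse and the saturation trick are
# illustrative choices.
import sympy as sp

u1, u2, v1, v2, t = sp.symbols("u1 u2 v1 v2 t")

def f(a, b):
    return a**2 / 4 + b**2 - 1          # example curve: an ellipse

def cross2(p, q):
    return p[0] * q[1] - p[1] * q[0]    # zero iff the two plane vectors are parallel

grad_x = (sp.diff(f(u1, u2), u1), sp.diff(f(u1, u2), u2))
grad_y = (sp.diff(f(v1, v2), v1), sp.diff(f(v1, v2), v2))
chord = (v1 - u1, v2 - u2)

system = [
    f(u1, u2),                              # x lies on the curve
    f(v1, v2),                              # y lies on the curve
    cross2(chord, grad_x),                  # chord lies in the normal line at x
    cross2(chord, grad_y),                  # chord lies in the normal line at y
    t * ((v1 - u1)**2 + (v2 - u2)**2) - 1,  # excludes the trivial diagonal x = y
]

# With the diagonal removed the system is zero-dimensional, so SymPy's
# Groebner-based solver returns finitely many solutions; for the ellipse the
# real ones are the endpoint pairs of the two axes.
for sol in sp.solve(system, [u1, u2, v1, v2, t], dict=True):
    print((sol[u1], sol[u2]), "<->", (sol[v1], sol[v2]))
```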

    Deep Learning Gauss–Manin Connections

    The Gauss–Manin connection of a family of hypersurfaces governs the change of the period matrix along the family. This connection can be complicated even when the equations defining the family look simple. When this is the case, it is expensive to compute the period matrices of varieties in the family via homotopy continuation. We train neural networks that can quickly and reliably guess the complexity of the Gauss–Manin connection of pencils of hypersurfaces. As an application, we compute the periods of 96% of smooth quartic surfaces in projective 3-space whose defining equation is a sum of five monomials; from the periods of these quartic surfaces, we extract their Picard lattices and the endomorphism fields of their transcendental lattices.
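
    As a concrete illustration of the family in the application, the sketch below enumerates quartics in projective 3-space whose defining equation is a sum of five monomials, as described in the abstract; it is not the paper's code, and coefficients and smoothness checks are left out.

```python
# Enumerate the family of five-monomial quartic surfaces in P^3 and build one member.
from itertools import combinations
import random
import sympy as sp

x0, x1, x2, x3 = sp.symbols("x0 x1 x2 x3")

# All degree-4 monomials in the 4 homogeneous coordinates of P^3.
monomials = [
    x0**a * x1**b * x2**c * x3**(4 - a - b - c)
    for a in range(5) for b in range(5 - a) for c in range(5 - a - b)
]
print(len(monomials))                  # 35 monomials

# Supports of the family: all 5-element subsets of the 35 monomials.
supports = list(combinations(monomials, 5))
print(len(supports))                   # C(35, 5) = 324632 candidate supports

# One random member; the paper computes periods for 96% of the *smooth*
# surfaces in this family (smoothness is not checked here).
f = sum(random.choice(supports))
print(f)
```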

    Enhance Diffusion to Improve Robust Generalization

    Deep neural networks are susceptible to human-imperceptible adversarial perturbations. One of the strongest defense mechanisms is \emph{Adversarial Training} (AT). In this paper, we aim to address two predominant problems in AT. First, there is still little consensus on how to set hyperparameters with a performance guarantee, and customized settings impede fair comparison between different model designs in AT research. Second, robustly trained neural networks struggle to generalize well and suffer from severe overfitting. This paper focuses on the primary AT framework, Projected Gradient Descent Adversarial Training (PGD-AT). We approximate the dynamics of PGD-AT by a continuous-time Stochastic Differential Equation (SDE), and show that the diffusion term of this SDE determines the robust generalization. An immediate implication of this theoretical finding is that robust generalization is positively correlated with the ratio between learning rate and batch size. We further propose a novel approach, \emph{Diffusion Enhanced Adversarial Training} (DEAT), to manipulate the diffusion term to improve robust generalization with virtually no extra computational burden. We theoretically show that DEAT obtains a tighter generalization bound than PGD-AT. Our empirical investigation is extensive and firmly attests that DEAT universally outperforms PGD-AT by a significant margin.
    Comment: Accepted at KDD 202
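
    For reference, a minimal PGD-AT training-loop sketch follows: the standard projected-gradient-descent adversarial training that the paper builds on, not the proposed DEAT modification. The model, data loader, and hyperparameters are placeholders.

```python
# Minimal PGD-AT sketch in PyTorch: the inner projected gradient ascent crafts
# an L_inf-bounded adversarial example, the outer step trains on it.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize the loss over an L_inf ball of radius eps around x."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()               # ascent step
            delta.clamp_(-eps, eps)                    # project back into the ball
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep the image in [0, 1]
    return (x + delta).detach()

def train_epoch_pgd_at(model, loader, optimizer):
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)          # inner maximization
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Per the paper's SDE analysis, the learning-rate / batch-size ratio sets
    # the diffusion term and hence robust generalization; DEAT enlarges that
    # diffusion at essentially no extra cost (details are in the paper).
```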