    Laplacian flow for closed G_2 structures: Shi-type estimates, uniqueness and compactness

    We develop foundational theory for the Laplacian flow for closed G_2 structures which will be essential for future study. (1) We prove Shi-type derivative estimates for the Riemann curvature tensor Rm and torsion tensor T along the flow, i.e. that a bound on $\Lambda(x,t)=\left(|\nabla T(x,t)|_{g(t)}^2+|\mathrm{Rm}(x,t)|_{g(t)}^2\right)^{1/2}$ implies bounds on all covariant derivatives of Rm and T. (2) We show that $\Lambda(x,t)$ blows up at a finite-time singularity, so the flow exists as long as $\Lambda(x,t)$ remains bounded. (3) We give a new proof of forward uniqueness and prove backward uniqueness of the flow, and give some applications. (4) We prove a compactness theorem for the flow and use it to strengthen our long-time existence result from (2). (5) Finally, we study compact soliton solutions of the Laplacian flow.
    Comment: 59 pages, v2: minor corrections and additions, accepted version for GAFA
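
For orientation, here is a minimal LaTeX sketch of the objects involved: the Laplacian flow itself (its standard definition, which the abstract assumes rather than states) and the controlling quantity $\Lambda$. The schematic Shi-type conclusion at the end is our paraphrase of point (1), not a verbatim statement from the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Laplacian flow for closed G_2 structures (standard definition):
% evolve the 3-form \varphi by its Hodge Laplacian, keeping it closed.
\[
  \frac{\partial}{\partial t}\varphi(t) = \Delta_{\varphi(t)}\varphi(t),
  \qquad d\varphi(t) = 0 .
\]
% Controlling quantity from the abstract:
\[
  \Lambda(x,t) = \Bigl( |\nabla T(x,t)|_{g(t)}^{2}
                      + |\operatorname{Rm}(x,t)|_{g(t)}^{2} \Bigr)^{1/2} .
\]
% Schematic Shi-type conclusion (our paraphrase of point (1)):
% a uniform bound \Lambda \le K up to time t controls all higher
% covariant derivatives of Rm and T,
\[
  |\nabla^{k}\operatorname{Rm}(x,t)|_{g(t)}
    + |\nabla^{k+1} T(x,t)|_{g(t)} \le C_k(K,t),
  \qquad k \ge 1 .
\]
\end{document}
```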

    What Are People Asking About COVID-19? A Question Classification Dataset

    We present COVID-Q, a set of 1,690 questions about COVID-19 from 13 sources, which we annotate into 15 question categories and 207 question clusters. The most common questions in our dataset ask about transmission, prevention, and societal effects of COVID-19, and we found that many questions appearing in multiple sources were not answered by the FAQ websites of reputable organizations such as the CDC and FDA. We post our dataset publicly at https://github.com/JerryWei03/COVID-Q. For classifying questions into the 15 categories, a BERT baseline scored 58.1% accuracy when trained on 20 examples per category; for the question clustering task, a BERT + triplet loss baseline achieved 49.5% accuracy. We hope COVID-Q can help either through direct use in developing applied systems or as a domain-specific resource for model evaluation.
    Comment: Published in Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020
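
As a concrete illustration of the classification baseline, the sketch below fine-tunes BERT for the 15-way category task, assuming PyTorch and the Hugging Face `transformers` library. The toy questions, labels, and hyperparameters are ours, not the paper's, and the actual COVID-Q baselines (including the triplet-loss clustering model) may differ.

```python
# Minimal sketch of a BERT question-classification baseline in the spirit
# of the paper's 15-category experiment (20 training examples per category).
# Hyperparameters and data below are illustrative placeholders.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

NUM_CATEGORIES = 15  # question categories from the paper

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_CATEGORIES
)

def encode(questions, labels):
    # Tokenize a list of question strings into fixed-length tensors.
    enc = tokenizer(questions, padding=True, truncation=True,
                    max_length=64, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

# Hypothetical toy data standing in for the 20-examples-per-category split.
train_questions = ["How does COVID-19 spread?", "How can I avoid infection?"]
train_labels = [0, 1]  # e.g. 0 = transmission, 1 = prevention

batch = encode(train_questions, train_labels)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the tiny training set
    optimizer.zero_grad()
    out = model(**batch)  # returns a loss when labels are supplied
    out.loss.backward()
    optimizer.step()

# Predict the category of an unseen question.
model.eval()
with torch.no_grad():
    test = tokenizer(["Is there a vaccine?"], return_tensors="pt")
    pred = model(**test).logits.argmax(dim=-1)
print(pred.item())
```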

    Label Noise Reduction Without Assumptions

    We propose an algorithm for training neural networks in noisy-label scenarios that up-weighs per-example gradients that are more similar to other gradients in the same minibatch. Our approach makes no assumptions about the amount or type of label noise, does not use a held-out validation set of clean examples, adds relatively little computation, and only modifies the minibatch gradient aggregation module in a typical neural network training workflow. For CIFAR-10 classification with varying levels of label noise, our method successfully up-weighs clean examples and de-prioritizes noisy examples, showing consistent improvement over a vanilla training baseline. Our results open the door to future work involving per-example gradient comparisons.
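
One plausible reading of the aggregation rule is sketched below in PyTorch: weight each per-example gradient by its mean cosine similarity to the other gradients in the minibatch, then replace the usual uniform mean with the weighted sum. The choice of cosine similarity and the clamping at zero are our assumptions; the paper's exact weighting scheme may differ.

```python
# Sketch of similarity-weighted gradient aggregation: gradients that agree
# with the rest of the minibatch get larger weights; outliers (likely
# mislabeled examples) get weights near zero. Illustrative, not the
# authors' exact rule.
import torch

def similarity_weighted_gradient(per_example_grads: torch.Tensor) -> torch.Tensor:
    """per_example_grads: (batch_size, num_params) flattened gradients."""
    # Cosine similarity matrix between all pairs of per-example gradients.
    normed = torch.nn.functional.normalize(per_example_grads, dim=1)
    sim = normed @ normed.T                      # (B, B), entries in [-1, 1]
    B = sim.shape[0]
    sim.fill_diagonal_(0.0)                      # ignore self-similarity
    # Each example's weight = mean similarity to the rest of the batch;
    # clamp at zero so dissimilar gradients are effectively dropped.
    weights = (sim.sum(dim=1) / (B - 1)).clamp(min=0.0)
    weights = weights / (weights.sum() + 1e-12)  # normalize to sum to 1
    # Aggregate: weighted combination instead of the usual uniform mean.
    return weights @ per_example_grads

# Toy usage: 4 examples, 3 parameters; the last gradient points the
# "wrong" way and should receive ~zero weight.
g = torch.tensor([[ 1.0,  0.0, 0.0],
                  [ 0.9,  0.1, 0.0],
                  [ 1.0, -0.1, 0.0],
                  [-1.0,  0.0, 0.0]])
print(similarity_weighted_gradient(g))
```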