
    The distinctions between ΛCDM and f(T) gravity according to Noether symmetry

    Noether's theory offers a useful tool for studying the conserved quantities and symmetries of modified gravity theories, among which f(T) theory, a general modification of teleparallel gravity, has been proposed to account for the dark energy phenomenon. Using the Noether symmetry approach, we investigate power-law, exponential, and polynomial forms of f(T) theory. All forms of f(T) considered in this work possess time translational symmetry, which is related to the energy condition, or Hamiltonian constraint. In addition, we find that the power-law and exponential forms do not perform well. It is reasonable to add a linear term T to T^n as the most efficient amendment for recovering teleparallel gravity, or General Relativity, on small scales, i.e., the scale of the solar system. The corresponding Noether symmetry indicates that only the time translational symmetry remains. Through numerical calculations and constraints from observational data sets, the optimal form αT + βT^{-1} is obtained, whose cosmological solution resembles the standard ΛCDM model best, with a slightly reduced cosmic age that can be alleviated by introducing another T^m term. More importantly, we find significant differences between ΛCDM and f(T) gravity: besides the two negative conserved quantities, the ΛCDM model has two additional symmetries with corresponding positive conserved quantities.
    Comment: 9 pages, 5 figures, 2 tables, typos corrected, Refs. added, accepted by EPJ-
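
    For orientation, here is a minimal worked sketch (my own, not from the abstract) of how the optimal form αT + βT^{-1} modifies the Friedmann equation, under one common sign convention with T = -6H²; signs and factors vary across the f(T) literature.

```latex
% Sketch only: flat FRW background, convention T = -6H^2.
% First modified Friedmann equation of f(T) gravity:
\[
  12 H^{2} f_{T}(T) + f(T) = 16\pi G \rho , \qquad T = -6H^{2} .
\]
% Check: f(T) = T gives f_T = 1, so 12H^2 - 6H^2 = 16\pi G \rho,
% i.e. the GR result H^2 = (8\pi G / 3)\rho.
% Inserting the optimal form f(T) = \alpha T + \beta T^{-1}
% (so f_T = \alpha - \beta T^{-2}) and simplifying:
\[
  6\alpha H^{2} - \frac{\beta}{2H^{2}} = 16\pi G \rho .
\]
% The \beta T^{-1} term contributes more as H decreases, mimicking
% dark energy at late times; the form reduces to GR when \beta \to 0.
```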

    Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning

    Recent studies have revealed the intriguing few-shot learning ability of pretrained language models (PLMs): they can quickly adapt to a new task when fine-tuned on a small amount of labeled data formulated as prompts, without requiring abundant task-specific annotations. Despite their promising performance, most existing few-shot approaches that learn only from the small training set still underperform fully supervised training by nontrivial margins. In this work, we study few-shot learning with PLMs from a different perspective: we first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large number of novel training samples that augment the original training set. To encourage the generator to produce label-discriminative samples, we train it via weighted maximum likelihood, where the weight of each token is automatically adjusted based on a discriminative meta-learning objective. A classification PLM can then be fine-tuned on both the few-shot and the synthetic samples with regularization for better generalization and stability. Our approach, FewGen, achieves better overall results across seven classification tasks of the GLUE benchmark than existing few-shot learning methods, improving over no-augmentation methods by 5+ average points and outperforming augmentation methods by 3+ average points.
    Comment: Code: https://github.com/yumeng5/FewGe
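
    The token-weighted objective is the core trick here. Below is a minimal PyTorch sketch of a weighted maximum-likelihood loss; the function name is illustrative, and the per-token weights are taken as an input here, whereas FewGen adjusts them with its discriminative meta-learning objective.

```python
import torch
import torch.nn.functional as F

def weighted_mle_loss(logits, labels, token_weights, pad_id=-100):
    """Token-weighted maximum-likelihood loss for a generator LM.

    logits:        (batch, seq_len, vocab) from an autoregressive PLM
    labels:        (batch, seq_len) next-token targets (pad_id = ignore)
    token_weights: (batch, seq_len) per-token weights; FewGen adjusts
                   these via a discriminative meta-learning objective,
                   whereas this sketch just takes them as given.
    """
    # Per-token negative log-likelihood, shape (batch, seq_len).
    nll = F.cross_entropy(
        logits.transpose(1, 2),  # cross_entropy expects (batch, vocab, seq)
        labels,
        ignore_index=pad_id,
        reduction="none",
    )
    mask = (labels != pad_id).float()
    # Emphasize label-discriminative tokens, de-emphasize generic ones.
    weighted = nll * token_weights * mask
    return weighted.sum() / mask.sum().clamp(min=1.0)
```

    With uniform weights this reduces to ordinary maximum-likelihood fine-tuning; the gains come from learning non-uniform weights that steer the generator toward tokens whose likelihood differs across class labels.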

    Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion

    Given a small set of seed entities (e.g., "USA", "Russia"), corpus-based set expansion aims to induce an extensive set of entities that share the same semantic class (Country in this example) from a given corpus. Set expansion benefits a wide range of downstream applications in knowledge discovery, such as web search, taxonomy construction, and query suggestion. Existing corpus-based set expansion algorithms typically bootstrap the given seeds by incorporating lexical patterns and distributional similarity. However, because no negative sets are provided explicitly, these methods suffer from semantic drift caused by expanding the seed set freely and without guidance. We propose a new framework, Set-CoExpan, that automatically generates auxiliary sets, closely related to the target set of the user's interest, to serve as negative sets, and then co-expands the multiple sets, extracting discriminative features by comparing the target set with the auxiliary sets. This yields multiple cohesive sets that are distinctive from one another, resolving the semantic drift issue. We demonstrate that the generated auxiliary sets guide the expansion of the target set away from the ambiguous regions bordering the auxiliary sets, and we show that Set-CoExpan significantly outperforms strong baseline methods.
    Comment: WWW 202
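
    To make the co-expansion idea concrete, here is a minimal, hypothetical embedding-based scorer (the names and the centroid heuristic are my own; the paper extracts discriminative lexical and distributional features rather than comparing embedding centroids). It rewards similarity to the target seeds and penalizes similarity to the nearest auxiliary set, so candidates in the ambiguous border regions are ranked low.

```python
import numpy as np

def co_expansion_rank(candidates, target_seeds, auxiliary_sets, embed):
    """Rank candidate entities for membership in the target set.

    candidates:     entity strings to rank
    target_seeds:   seed entities of the target semantic class
    auxiliary_sets: seed lists of auto-generated auxiliary (negative)
                    sets, e.g. [["Moscow", "Beijing"], ...]
    embed:          callable mapping an entity string to a 1-D vector
    """
    def centroid(entities):
        vecs = np.stack([embed(e) for e in entities])
        vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        return vecs.mean(axis=0)

    target_c = centroid(target_seeds)
    aux_cs = [centroid(s) for s in auxiliary_sets]

    scores = {}
    for cand in candidates:
        v = embed(cand)
        v = v / np.linalg.norm(v)
        # Reward similarity to the target seeds, penalize the closest
        # auxiliary set: entities near the border between the target
        # and an auxiliary set are pushed down the ranking.
        pos = float(v @ target_c)
        neg = max((float(v @ c) for c in aux_cs), default=0.0)
        scores[cand] = pos - neg
    return sorted(candidates, key=scores.get, reverse=True)
```

    Expanding all sets jointly, rather than the target set alone, is what supplies the negative signal: each auxiliary set claims its own region of the semantic space, leaving the target set with a sharper boundary.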