
    Existence and asymptotic behavior of least energy sign-changing solutions for Schrödinger-Poisson systems with doubly critical exponents

    In this paper, we are concerned with the following Schr\"{o}dinger-Poisson system with critical nonlinearity and a critical nonlocal term arising from the Hardy-Littlewood-Sobolev inequality: \begin{equation}\begin{cases} -\Delta u+u+\lambda\phi |u|^3u =|u|^4u+ |u|^{q-2}u, \ &\ x \in \mathbb{R}^{3},\\ -\Delta \phi=|u|^5, \ &\ x \in \mathbb{R}^{3}, \end{cases}\end{equation} where $\lambda\in\mathbb{R}$ is a parameter and $q\in(2,6)$. If $\lambda\ge\left(\frac{q+2}{8}\right)^2$, the above system has no nontrivial solution. If $\lambda\in(\lambda^*,0)$ for some $\lambda^*<0$, we obtain a least energy radial sign-changing solution $u_\lambda$ to the above system. Furthermore, we analyze the asymptotic behavior of $u_\lambda$ as $\lambda\to 0^-$.
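    As a quick sanity check (not taken from the paper), the nonexistence bound above can be evaluated at a sample exponent, say $q=4$:

```latex
% Evaluating the nonexistence bound at the sample exponent q = 4:
\[
  \lambda \;\ge\; \left(\frac{q+2}{8}\right)^{2}\Bigg|_{q=4}
  \;=\; \left(\frac{6}{8}\right)^{2}
  \;=\; \frac{9}{16}.
\]
```

    Since $(q+2)/8$ ranges over $(1/2,1)$ for $q\in(2,6)$, the threshold always lies in $(1/4,1)$, while the existence regime $(\lambda^*,0)$ sits on the opposite, negative side of $\lambda=0$.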

    Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations

    Unsupervised learning of sentence representations has been significantly advanced by contrastive learning, which clusters an augmented positive instance with its anchor instance to shape the desired embedding space. However, relying solely on the contrastive objective can yield sub-optimal outcomes because it cannot differentiate subtle semantic variations between positive pairs. Specifically, common data augmentation techniques frequently introduce semantic distortion, creating a semantic margin between the members of a positive pair. The InfoNCE loss function overlooks this margin and prioritizes maximizing similarity between positive pairs during training, leaving the trained model insensitive to fine-grained semantics. In this paper, we introduce a novel Identical and Fraternal Twins of Contrastive Learning (IFTCL) framework, capable of simultaneously adapting to the various positive pairs generated by different augmentation techniques. We propose a \textit{Twins Loss} to preserve the innate margin during training and to realize the full potential of data augmentation, thereby overcoming the sub-optimality issue. We also present proof-of-concept experiments combined with the contrastive objective to validate the proposed Twins Loss. Furthermore, we propose a hippocampus queue mechanism to retain and reuse negative instances without additional computation, which further improves the efficiency and performance of IFTCL. We verify the IFTCL framework on nine semantic textual similarity tasks with both English and Chinese datasets, and the experimental results show that IFTCL outperforms state-of-the-art methods. Comment: This article has been accepted for publication at the European Conference on Artificial Intelligence (ECAI 2023). 9 pages, 4 figures.
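    To make the critique concrete, here is a minimal sketch (not the authors' code) contrasting standard InfoNCE with a hypothetical margin-preserving variant in the spirit of the Twins Loss; the names `info_nce`, `twins_style_loss`, and the clamp-based margin scheme are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch: standard InfoNCE vs. a hypothetical margin-preserving variant.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, temperature=0.05):
    """Standard InfoNCE with in-batch negatives: pulls each anchor toward its
    positive and pushes it away from every other positive in the batch."""
    a = F.normalize(anchor, dim=-1)      # (B, d) unit-norm embeddings
    p = F.normalize(positive, dim=-1)    # (B, d)
    logits = a @ p.T / temperature       # (B, B) scaled cosine similarities
    labels = torch.arange(a.size(0))     # diagonal entries are the positives
    return F.cross_entropy(logits, labels)


def twins_style_loss(anchor, positive, margin=0.1, temperature=0.05):
    """Illustrative margin-aware variant (an assumption, not the paper's Twins
    Loss): once an anchor-positive similarity comes within `margin` of perfect
    agreement, the positive term stops contributing gradient, so augmented
    pairs with genuine semantic drift are not forced to coincide."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / temperature
    pos = logits.diagonal().clamp(max=(1.0 - margin) / temperature)
    neg = torch.logsumexp(logits, dim=-1)  # denominator over all candidates
    return (neg - pos).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    anchor, positive = torch.randn(8, 32), torch.randn(8, 32)
    print(info_nce(anchor, positive).item())
    print(twins_style_loss(anchor, positive).item())
```

    The design point is that InfoNCE's positive term is unbounded toward perfect similarity, whereas the clamped variant leaves a small slack that tolerates the semantic margin the abstract describes.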

    Topic-DPR: Topic-based Prompts for Dense Passage Retrieval

    Prompt-based learning's efficacy across numerous natural language processing tasks has led to its integration into dense passage retrieval. Prior research has mainly focused on enhancing the semantic understanding of pre-trained language models by optimizing a single vector as a continuous prompt. This approach, however, leads to semantic space collapse: identical semantic information seeps into all representations, causing their distributions to converge in a restricted region and hindering the differentiation of relevant from irrelevant passages during dense retrieval. To tackle this issue, we present Topic-DPR, a dense passage retrieval model that uses topic-based prompts. Unlike the single-prompt method, multiple topic-based prompts are established over a probabilistic simplex and optimized simultaneously through contrastive learning, encouraging representations to align with their topic distributions and improving space uniformity. Furthermore, we introduce a novel positive and negative sampling strategy that leverages semi-structured data to boost dense retrieval efficiency. Experimental results on two datasets affirm that our method surpasses previous state-of-the-art retrieval techniques. Comment: Findings of EMNLP 2023.
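    A minimal sketch of the core idea, multiple topic-based continuous prompts mixed over a probabilistic simplex rather than one shared prompt vector; the class name `TopicPrompts`, the shapes, and the softmax mixing scheme are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: per-passage mixing of topic prompts on the probability simplex.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopicPrompts(nn.Module):
    def __init__(self, num_topics=16, prompt_len=8, hidden=768):
        super().__init__()
        # One learnable soft prompt per topic rather than a single prompt.
        self.topic_prompts = nn.Parameter(
            torch.randn(num_topics, prompt_len, hidden) * 0.02)

    def forward(self, topic_logits):
        # Softmax projects per-passage topic scores onto the probability
        # simplex; the resulting weights mix the topic prompts per input.
        weights = F.softmax(topic_logits, dim=-1)            # (B, T)
        return torch.einsum("bt,tlh->blh", weights, self.topic_prompts)


# Usage: prepend the mixed prompt to the encoder's token embeddings.
prompts = TopicPrompts()
topic_logits = torch.randn(4, 16)        # e.g. scores from a topic model
prompt_embeds = prompts(topic_logits)    # (4, 8, 768)
token_embeds = torch.randn(4, 128, 768)  # stand-in for encoder input embeddings
encoder_input = torch.cat([prompt_embeds, token_embeds], dim=1)
print(encoder_input.shape)               # torch.Size([4, 136, 768])
```

    Because each passage gets its own convex combination of prompts, representations are spread according to topic distributions instead of collapsing toward a single shared prompt direction.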