152 research outputs found

    Dynamics for the focusing, energy-critical nonlinear Hartree equation

    Full text link
    In \cite{LiMZ:e-critical Har, MiaoXZ:09:e-critical radial Har}, the dynamics of solutions to the focusing energy-critical Hartree equation were classified when $E(u_0)<E(W)$, where $W$ is the ground state. In this paper, we continue the study of the dynamics of radial solutions with the threshold energy. Our arguments closely follow those in \cite{DuyMerle:NLS:ThresholdSolution, DuyMerle:NLW:ThresholdSolution, DuyRouden:NLS:ThresholdSolution, LiZh:NLS, LiZh:NLW}. The new ingredient is that we show that the positive solution of the nonlocal elliptic equation in $L^{\frac{2d}{d-2}}(\R^d)$ is regular and unique by the moving plane method in its global form, which plays an important role in the spectral theory of the linearized operator and in the dynamical behavior of the threshold solution.
    Comment: 53 pages
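
    For orientation, the focusing energy-critical Hartree equation and its conserved energy take the following standard form on $\R^d$, $d\geq 5$ (a reminder under the convention common in the cited works; signs may be normalized differently):

    \begin{equation*}
      i\partial_t u + \Delta u = -\bigl(|x|^{-4} \ast |u|^2\bigr)u, \qquad
      E(u) = \frac12\int_{\R^d}|\nabla u|^2\,dx
           - \frac14\iint_{\R^d\times\R^d}\frac{|u(x)|^2\,|u(y)|^2}{|x-y|^4}\,dx\,dy.
    \end{equation*}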

    The low regularity global solutions for the critical generalized KdV equation

    Full text link
    We prove that the Cauchy problem for the mass-critical generalized KdV equation is globally well-posed in the Sobolev spaces $H^s(\R)$ for $s>6/13$. In the focusing case, we require that the mass be strictly less than that of the ground state. The main approach is the "I-method" together with multilinear correction analysis. Moreover, we use a "partially refined" argument to lower the upper bound on the multiplier in the resonant interactions. The result improves the previous works of Fonseca, Linares, Ponce (2003) and Farah (2009).
    Comment: 27 pages; the mistake in the previous version is corrected; using the I-method with the resonant decomposition gives an improvement over our previous result
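
    For reference, the mass-critical (quintic) generalized KdV equation and the conserved mass read as follows (the focusing sign below is illustrative):

    \begin{equation*}
      \partial_t u + \partial_x^3 u + \partial_x(u^5) = 0, \qquad
      M(u) = \int_{\R} u^2(t,x)\,dx.
    \end{equation*}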

    Dimension theory of non-autonomous iterated function systems

    Full text link
    In this paper, we define a class of new fractals named ``non-autonomous attractors'', which generalize both classical Moran sets and attractors of iterated function systems. Simply put, we replace the similarity mappings by contractive mappings and remove the separation assumption in the Moran structure. We give dimension estimates for non-autonomous attractors. Furthermore, we study a class of non-autonomous attractors, named ``non-autonomous affine sets'' (or ``affine sets''), where the contractions are restricted to affine mappings. To study the dimension theory of such fractals, we define two critical values $s^*$ and $s_A$; the upper box-counting dimensions and Hausdorff dimensions of non-autonomous affine sets are bounded above by $s^*$ and $s_A$, respectively. Unlike self-affine fractals, where $s^*=s_A$, we always have $s^*\geq s_A$, and the inequality may be strict. Under certain conditions, the upper box-counting dimensions and Hausdorff dimensions of non-autonomous affine sets equal $s^*$ and $s_A$, respectively. In particular, we study non-autonomous affine sets with random translations, and the Hausdorff dimensions of such sets equal $s_A$ almost surely.
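
    As background (a standard definition that typically underlies such critical exponents, not necessarily the paper's exact construction): for an affine map $A$ with singular values $\alpha_1\geq\dots\geq\alpha_d$, Falconer's singular value function is

    \begin{equation*}
      \varphi^s(A) = \alpha_1\alpha_2\cdots\alpha_{\lfloor s\rfloor}\,
                     \alpha_{\lceil s\rceil}^{\,s-\lfloor s\rfloor}, \qquad 0\leq s\leq d,
    \end{equation*}

    and affinity-type critical values are then extracted from pressure-type limits of sums of $\varphi^s$ over compositions of the stage maps.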

    Global well-posedness for Schr\"odinger equation with derivative in $H^{1/2}(\R)$

    Get PDF
    In this paper, we consider the Cauchy problem for the cubic nonlinear Schr\"{o}dinger equation with derivative in $H^s(\R)$. This equation was known to be locally well-posed for $s\geq \frac12$ (Takaoka, 1999), ill-posed for $s<\frac12$ (Biagioni and Linares, 2001, etc.), and globally well-posed for $s>\frac12$ (I-team, 2002). Here we show that it is globally well-posed in $H^{1/2}(\R)$. The main approach is the third-generation I-method combined with an additional resonant decomposition technique. The resonant decomposition is applied to control the singularity coming from the resonant interaction.
    Comment: 31 pages; in this version, we change some expressions in English
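
    For reference, the equation in a commonly used form (gauge and sign conventions vary across the literature) is

    \begin{equation*}
      i\partial_t u + \partial_x^2 u = i\,\partial_x\bigl(|u|^2 u\bigr), \qquad
      u(0,x) = u_0(x) \in H^s(\R).
    \end{equation*}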

    Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs

    Full text link
    The task of empowering large language models (LLMs) to accurately express their confidence, referred to as confidence elicitation, is essential in ensuring reliable and trustworthy decision-making processes. Previous methods, which primarily rely on model logits, have become less suitable for LLMs and even infeasible with the rise of closed-source LLMs (e.g., commercialized LLM APIs). This leads to a growing need to explore the untapped area of \emph{non-logit-based} approaches to estimate the uncertainty of LLMs. Hence, in this study, we investigate approaches for confidence elicitation that do not require model fine-tuning or access to proprietary information. We introduce three categories of methods for benchmarking: verbalize-based, consistency-based, and their hybrid methods, and evaluate their performance across five types of datasets and four widely used LLMs. Our analysis of these methods uncovers several key insights: 1) LLMs often exhibit a high degree of overconfidence when verbalizing their confidence; 2) Prompting strategies such as CoT, Top-K, and Multi-step confidences improve calibration of verbalized confidence; 3) Consistency-based methods outperform the verbalized confidences in most cases, with particularly notable improvements on the arithmetic reasoning task; 4) Hybrid methods consistently deliver the best performance over their baselines, thereby emerging as a promising state-of-the-art approach; 5) Despite these advancements, all investigated methods continue to struggle with challenging tasks, such as those requiring professional knowledge, leaving significant room for improvement in confidence elicitation.
    Comment: 11 pages
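
    A minimal sketch of the consistency-based idea described above (the `generate` callable is a hypothetical stand-in for any stochastic LLM call sampled at temperature > 0; this is not the paper's code):

        from collections import Counter

        def consistency_confidence(generate, prompt, k=10):
            # Consistency-based confidence elicitation: sample k answers and
            # take the majority answer's agreement rate as the confidence.
            # `generate` is a hypothetical stochastic LLM call (assumption).
            answers = [generate(prompt) for _ in range(k)]
            answer, count = Counter(answers).most_common(1)[0]
            return answer, count / k

    For example, if 7 of 10 sampled answers agree, the elicited confidence for the majority answer is 0.7.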

    Learning Domain Invariant Prompt for Vision-Language Models

    Full text link
    Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance over in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but neglect the ability of learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates \emph{domain invariant} prompts that generalize to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for input from both image and text modalities. With a novel asymmetric contrastive loss, the representation from the original pre-trained vision-language model acts as supervision to enhance the generalization ability of the learned prompt. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains the task-specific prompt tuned for one domain or class to also achieve good performance in another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and 4 datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
    Comment: 12 pages, 6 figures, 5 tables
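
    A toy sketch of the two ingredients named above, learnable prompt vectors and an asymmetric contrastive loss supervised by frozen features. All module names, dimensions, and architectural choices here are illustrative assumptions, not the paper's implementation:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PromptedEncoder(nn.Module):
            # Learnable prompt vectors prepended to the token embeddings;
            # the encoder stack is a placeholder for a frozen backbone.
            def __init__(self, embed_dim=512, prompt_len=4):
                super().__init__()
                self.prompt = nn.Parameter(0.02 * torch.randn(prompt_len, embed_dim))
                layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)

            def forward(self, token_embeds):  # (batch, seq_len, embed_dim)
                b = token_embeds.size(0)
                prompts = self.prompt.unsqueeze(0).expand(b, -1, -1)
                x = torch.cat([prompts, token_embeds], dim=1)
                return self.encoder(x).mean(dim=1)  # pooled representation

        def asymmetric_contrastive_loss(prompted, frozen, temperature=0.07):
            # Asymmetric: the frozen pre-trained features act as fixed targets
            # (detached), so gradients only update the prompted branch.
            p = F.normalize(prompted, dim=-1)
            f = F.normalize(frozen.detach(), dim=-1)
            logits = p @ f.t() / temperature
            targets = torch.arange(p.size(0), device=p.device)
            return F.cross_entropy(logits, targets)

    The detach on the frozen branch is what makes the loss asymmetric: the pre-trained representation supervises the prompted one without being pulled toward it.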