
    Conditional Lie-Bäcklund Symmetries and Reductions of the Nonlinear Diffusion Equations with Source

    The conditional Lie-Bäcklund symmetry approach is used to study the invariant subspaces of the nonlinear diffusion equations with source $u_t = e^{-qx}\left(e^{px}P(u)u_x^m\right)_x + Q(x,u)$, $m \neq 1$. We obtain a complete list of canonical forms for such equations that admit multidimensional invariant subspaces determined by higher-order conditional Lie-Bäcklund symmetries. The resulting equations are either solved exactly or reduced to finite-dimensional dynamical systems.
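
    A minimal sketch of the invariant-subspace reduction referred to above (the basis functions $f_i$ and coefficient equations $\Phi_i$ are generic placeholders, not the paper's canonical forms): write $E[u] = e^{-qx}\left(e^{px}P(u)u_x^m\right)_x + Q(x,u)$ and suppose $E$ preserves a linear subspace $W_n = \mathcal{L}\{f_1(x), \ldots, f_n(x)\}$, i.e. $E[W_n] \subseteq W_n$. Then the ansatz
    \[
      u(x,t) = \sum_{i=1}^{n} C_i(t)\, f_i(x)
    \]
    reduces the PDE $u_t = E[u]$ to an $n$-dimensional dynamical system for the coefficients,
    \[
      \dot{C}_i(t) = \Phi_i(C_1, \ldots, C_n), \qquad i = 1, \ldots, n,
    \]
    where the $\Phi_i$ are obtained by expanding $E\bigl[\sum_i C_i f_i\bigr]$ in the basis $\{f_1, \ldots, f_n\}$.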

    FoxM1B transcriptionally regulates vascular endothelial growth factor expression and promotes the angiogenesis and growth of glioma cells.

    We previously found that FoxM1B is overexpressed in human glioblastomas and that forced FoxM1B expression in anaplastic astrocytoma cells leads to the formation of highly angiogenic glioblastoma in nude mice. However, the molecular mechanisms by which FoxM1B enhances glioma angiogenesis are currently unknown. In this study, we found that vascular endothelial growth factor (VEGF) is a direct transcriptional target of FoxM1B. FoxM1B overexpression increased VEGF expression, whereas blockade of FoxM1 expression suppressed VEGF expression in glioma cells. Transfection of FoxM1 into glioma cells directly activated the VEGF promoter, and inhibition of FoxM1 expression by FoxM1 siRNA suppressed VEGF promoter activation. We identified two FoxM1-binding sites in the VEGF promoter that specifically bound to the FoxM1 protein. Mutation of these FoxM1-binding sites significantly attenuated VEGF promoter activity. Furthermore, FoxM1 overexpression increased, and inhibition of FoxM1 expression suppressed, the angiogenic ability of glioma cells. Finally, an immunohistochemical analysis of 59 human glioblastoma specimens also showed a significant correlation between FoxM1 overexpression and elevated VEGF expression. Our findings provide both clinical and mechanistic evidence that FoxM1 contributes to glioma progression by enhancing VEGF gene transcription and thus tumor angiogenesis.

    Pre-trained transformer for adversarial purification

    As more and more deep neural networks are deployed in everyday services, their reliability becomes essential. Yet deep neural networks remain vulnerable to adversarial attacks, of which evasion attacks are the most common against deployed services. Recent works typically strengthen robustness through adversarial training or by leveraging large amounts of clean data. In practice, however, retraining and redeploying a model requires a large computational budget and causes costly disruption to the online service. Moreover, when adversarial examples of a particular attack are detected, the service provider usually has only a limited number of them, and abundant clean data may not be accessible. To address these problems, we propose a new scenario, RaPiD (Rapid Plug-in Defender): rapidly defending a frozen, already-deployed service model against a specific attack using only a few clean and adversarial examples. Motivated by the generalization and universal-computation abilities of pre-trained transformers, we propose a new defender method, CeTaD (Considering Pre-trained Transformers as Defenders). In particular, we evaluate the effectiveness and transferability of CeTaD with one-shot adversarial examples, and we study the impact of its components and of the training data conditions. CeTaD is flexible, can be attached to an arbitrary differentiable model, and is suitable for various types of attacks.
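
    A minimal sketch of the plug-in-defender idea (the module names and training loop below are assumptions for illustration, not the authors' released code): a pre-trained transformer is inserted in front of the frozen service model to purify inputs, and only the defender is fitted on the few available examples.

        import torch.nn as nn

        class PlugInDefender(nn.Module):
            """Wraps a frozen service model with a trainable purifier (hypothetical sketch)."""
            def __init__(self, purifier: nn.Module, service_model: nn.Module):
                super().__init__()
                self.purifier = purifier            # pre-trained transformer used as an input denoiser
                self.service_model = service_model  # the deployed model, kept frozen
                for p in self.service_model.parameters():
                    p.requires_grad = False         # the online service is never retrained

            def forward(self, x):
                x_purified = self.purifier(x)       # attempt to strip the adversarial perturbation
                return self.service_model(x_purified)

        # Usage sketch: only the purifier's parameters are updated, e.g. with
        # torch.optim.Adam(defender.purifier.parameters()), fitted on the handful
        # of clean and adversarial examples available to the service provider.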

    Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling

    Uncertainty decomposition refers to the task of decomposing the total uncertainty of a model into data (aleatoric) uncertainty, resulting from the inherent complexity or ambiguity of the data, and model (epistemic) uncertainty, resulting from the lack of knowledge in the model. Performing uncertainty decomposition for large language models (LLMs) is an important step toward improving their reliability, trustworthiness, and interpretability, but it remains a challenging and unresolved research task. The existing canonical method, the Bayesian Neural Network (BNN), cannot be applied to LLMs because it requires training and ensembling multiple model variants, which is infeasible or prohibitively expensive for LLMs. In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling, which bypasses the need to train new models. Rather than ensembling models with different parameters, our approach generates a set of clarifications for the input, feeds them into the fixed LLM, and ensembles the corresponding predictions. We show that our framework shares a symmetric decomposition structure with BNN. Empirical evaluations demonstrate that the proposed framework provides accurate and reliable uncertainty quantification on various tasks. Code will be made publicly available at https://github.com/UCSB-NLP-Chang/llm_uncertainty.
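
    A minimal sketch of the ensembling step (generate_clarifications and llm_predict are hypothetical placeholders, not the repository's API): query the same fixed LLM on several clarified versions of the input and split the ensemble entropy into a within-clarification term and a disagreement term.

        import numpy as np

        def entropy(p):
            """Shannon entropy along the last axis."""
            p = np.clip(p, 1e-12, 1.0)
            return -np.sum(p * np.log(p), axis=-1)

        def decompose_uncertainty(x, generate_clarifications, llm_predict, k=5):
            clarifications = generate_clarifications(x, k)              # k rewrites of the input x
            probs = np.stack([llm_predict(c) for c in clarifications])  # shape (k, n_classes)
            total = entropy(probs.mean(axis=0))   # entropy of the ensembled prediction
            within = entropy(probs).mean()        # mean entropy given each clarification
            spread = total - within               # disagreement across clarifications
            # Mirroring the BNN decomposition with the roles of model and input
            # swapped, the two terms separate uncertainty that survives
            # clarification from uncertainty caused by input ambiguity.
            return total, within, spread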

    Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis

    Diffusion-based models have achieved state-of-the-art performance on text-to-image synthesis tasks. However, one critical limitation of these models is the low fidelity of generated images with respect to the text description, such as missing objects, mismatched attributes, and mislocated objects. One key reason for such inconsistencies is inaccurate cross-attention to the text in both the spatial dimension, which controls the pixel region in which an object appears, and the temporal dimension, which controls how different levels of detail are added across the denoising steps. In this paper, we propose a new text-to-image algorithm that adds explicit control over the spatial-temporal cross-attention in diffusion models. We first use a layout predictor to predict the pixel regions of the objects mentioned in the text. We then impose spatial attention control by combining, within each object's pixel region, the attention over the entire text description with the attention over that object's local description. Temporal attention control is further added by allowing the combination weights to change at each denoising step; the weights are optimized to ensure high fidelity between the image and the text. Experiments show that our method generates images with higher fidelity than diffusion-model-based baselines, without fine-tuning the diffusion model. Our code is publicly available at https://github.com/UCSB-NLP-Chang/Diffusion-SpaceTime-Attn.
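
    A minimal sketch of the spatial-temporal blending rule (tensor shapes and the blending form are assumptions for illustration, not the authors' implementation): inside an object's predicted layout mask, attention to the full prompt is mixed with attention to that object's local description, with a weight that varies per denoising step.

        import torch

        def controlled_cross_attention(attn_global: torch.Tensor,
                                       attn_local: torch.Tensor,
                                       mask: torch.Tensor,
                                       w_t: float) -> torch.Tensor:
            """
            attn_global: (pixels, tokens) attention over the full text description
            attn_local:  (pixels, tokens) attention over the object's local description
            mask:        (pixels, 1) binary layout mask for the object's pixel region
            w_t:         blend weight for the current denoising step t (optimized for fidelity)
            """
            blended = (1.0 - w_t) * attn_global + w_t * attn_local
            # Outside the object's region, the global attention is left untouched.
            return mask * blended + (1.0 - mask) * attn_global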