
    A robust and high precision algorithm for elastic scattering problems from cornered domains

    The Navier equation is the governing equation of elastic waves, and computing its solution accurately and rapidly has a wide range of applications in geophysical exploration, materials science, etc. In this paper, we focus on an efficient, high-precision numerical algorithm for time-harmonic elastic wave scattering problems from cornered domains via boundary integral equations in two dimensions. The approach is based on the combination of Nyström discretization, analytical singular integrals and a kernel-splitting method, which results in a high-order solver for smooth boundaries. It is then combined with the recursively compressed inverse preconditioning (RCIP) method to solve elastic scattering problems from cornered domains. Numerical experiments demonstrate that the proposed approach achieves high accuracy, with stabilized errors close to machine precision in various geometric configurations. The algorithm is further applied to investigate the asymptotic behavior of density functions associated with boundary integral operators near corners, and the numerical results are highly consistent with the theoretical formulas.
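
    As a hedged illustration of the Nyström idea mentioned in this abstract, the sketch below discretizes a generic second-kind boundary integral equation (I/2 + K)sigma = f on a smooth closed curve with the periodic trapezoidal rule. The ellipse geometry and the smooth scalar kernel are stand-in assumptions; the paper's elastic (Navier) kernels are singular and matrix-valued and require the kernel-splitting and RCIP machinery described above.

    # Toy Nystrom solver for (I/2 + K) sigma = f on a smooth closed curve,
    # discretized with the periodic trapezoidal rule. The smooth kernel below
    # is a stand-in for illustration only.
    import numpy as np

    n = 256                                   # number of quadrature nodes
    t = 2 * np.pi * np.arange(n) / n          # parameter grid on [0, 2*pi)
    w = 2 * np.pi / n                         # uniform trapezoidal weight

    # Parametrize a smooth closed boundary (an ellipse, purely for illustration).
    x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)

    # A smooth, made-up kernel K(t_i, t_j); the real elastic kernels are singular.
    K = np.exp(-((x[:, None] - x[None, :])**2 + (y[:, None] - y[None, :])**2))

    # Nystrom system: (0.5*I + w*K) sigma = f, with some boundary data f.
    f = np.cos(t)
    A = 0.5 * np.eye(n) + w * K
    sigma = np.linalg.solve(A, f)

    # The density sigma would then be fed into a representation formula to
    # evaluate the scattered field away from the boundary.
    print(sigma[:5])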

    Exploring Vision Transformers as Diffusion Learners

    Score-based diffusion models have captured widespread attention and fueled fast progress in recent vision generative tasks. In this paper, we focus on the diffusion model backbone, which has received little attention before. We systematically explore vision Transformers as diffusion learners for various generative tasks. With our improvements, the performance of a vanilla ViT-based backbone (IU-ViT) is boosted to be on par with traditional U-Net-based methods. We further provide a hypothesis on the implications of disentangling the generative backbone into an encoder-decoder structure and show proof-of-concept experiments verifying the effectiveness of a stronger encoder for generative tasks with the ASymmetriC ENcoder Decoder (ASCEND). Our improvements achieve competitive results on CIFAR-10, CelebA, LSUN, CUB Bird and large-resolution text-to-image tasks. To the best of our knowledge, we are the first to successfully train a single diffusion model on a text-to-image task beyond 64x64 resolution. We hope this will motivate people to rethink the modeling choices and the training pipelines for diffusion-based generative models.
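
    The following is a minimal, hypothetical sketch of what "a vision Transformer as a diffusion learner" can look like: a tiny ViT-style denoiser trained with a DDPM-style noise-prediction loss. The model size, patch embedding, timestep conditioning and noise schedule are illustrative assumptions and are not the paper's IU-ViT or ASCEND architectures.

    # Minimal ViT-style denoiser trained to predict injected Gaussian noise.
    import torch
    import torch.nn as nn

    class TinyViTDenoiser(nn.Module):
        def __init__(self, img=32, patch=4, dim=128, depth=4, heads=4, channels=3, steps=1000):
            super().__init__()
            self.patch, self.channels = patch, channels
            self.embed = nn.Conv2d(channels, dim, kernel_size=patch, stride=patch)
            n_tokens = (img // patch) ** 2
            self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
            self.t_emb = nn.Embedding(steps, dim)           # timestep conditioning
            layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)
            self.to_pixels = nn.Linear(dim, patch * patch * channels)

        def forward(self, x, t):
            b, c, h, w = x.shape
            tok = self.embed(x).flatten(2).transpose(1, 2) + self.pos   # (B, N, D)
            tok = tok + self.t_emb(t)[:, None, :]                       # broadcast over tokens
            tok = self.encoder(tok)
            px = self.to_pixels(tok)                                    # (B, N, p*p*C)
            gh, gw = h // self.patch, w // self.patch
            px = px.view(b, gh, gw, self.patch, self.patch, c)
            return px.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)     # predicted noise

    # One DDPM-style training step on stand-in data.
    model = TinyViTDenoiser()
    x0 = torch.randn(8, 3, 32, 32)                                      # stand-in images
    t = torch.randint(0, 1000, (8,))
    alpha_bar = torch.linspace(0.9999, 0.01, 1000)[t].view(-1, 1, 1, 1)  # toy schedule
    noise = torch.randn_like(x0)
    xt = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    loss = nn.functional.mse_loss(model(xt, t), noise)
    loss.backward()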

    MathAttack: Attacking Large Language Models Towards Math Solving Ability

    With the boom of Large Language Models (LLMs), research on solving Math Word Problems (MWPs) has recently made great progress. However, few studies have examined the security of LLMs' math-solving ability. Instead of attacking prompts used with LLMs, we propose MathAttack, a model that attacks MWP samples directly, which is closer to the essence of security in math solving. Compared with traditional text adversarial attacks, it is essential to preserve the mathematical logic of the original MWPs during the attack. To this end, we propose logical entity recognition to identify logical entities, which are then frozen. Subsequently, the remaining text is attacked with a word-level attacker. Furthermore, we propose a new dataset, RobustMath, to evaluate the robustness of LLMs in math solving. Extensive experiments on RobustMath and two other math benchmark datasets, GSM8K and MultiArith, show that MathAttack can effectively attack the math-solving ability of LLMs. In the experiments, we observe that (1) adversarial samples from higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy (e.g., transferring from larger to smaller LLMs, or from few-shot to zero-shot prompts); (2) complex MWPs (those with more solving steps, longer text, or more numbers) are more vulnerable to attack; (3) we can improve the robustness of LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our practice and observations can serve as an important step towards enhancing the robustness of LLMs in math solving. We will release our code and dataset. (11 pages, 6 figures)
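
    To make the attack recipe concrete, here is a toy, hypothetical sketch of the two-step idea described above: freeze the logical entities (here, simply the numeric tokens) so the math stays intact, then perturb the remaining words with a word-level substitution attack. The regex-based entity recognizer and the synonym table are stand-ins; the paper uses a learned logical entity recognizer and a proper adversarial word-level attacker.

    # Toy illustration: freeze numbers, then substitute non-frozen words.
    import re

    SYNONYMS = {                      # toy substitution table (assumption)
        "bought": "purchased",
        "gave": "handed",
        "total": "overall",
        "left": "remaining",
    }

    def freeze_logical_entities(mwp: str):
        """Mark numeric tokens as frozen so the attacker cannot touch them."""
        tokens = mwp.split()
        frozen = [bool(re.fullmatch(r"\$?\d+(\.\d+)?", tok.strip(".,?"))) for tok in tokens]
        return tokens, frozen

    def word_level_attack(tokens, frozen):
        """Substitute non-frozen words; a real attack would score candidates by
        how much they degrade the victim LLM's answer accuracy."""
        return [
            SYNONYMS.get(tok.lower(), tok) if not keep else tok
            for tok, keep in zip(tokens, frozen)
        ]

    mwp = "Tom bought 3 apples and gave 1 to Mary . How many are left ?"
    tokens, frozen = freeze_logical_entities(mwp)
    print(" ".join(word_level_attack(tokens, frozen)))
    # -> "Tom purchased 3 apples and handed 1 to Mary . How many are remaining ?"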

    Study on Thermal Properties and Mechanical Properties of Short-cut Polyimide-Fiber Reinforced Polyphenyl Sulfone Composites

    To improve the thermal stability and mechanical properties of polyphenyl sulfone (PPSU), composites reinforced with short-cut polyimide (PI) fibers of two different lengths were prepared by melt extrusion using a three-screw extruder. The effects of fiber length on the thermal stability, heat resistance and mechanical properties of the composites were studied. The results indicate that the addition of chopped polyimide fiber greatly improves the heat resistance of the composites: compared with neat PPSU, the heat deflection temperature (HDT) of the composites increased from 205 °C to 229 °C with increasing fiber content, although the polyimide fiber has only a limited effect on the thermal stability of the composites. The addition of chopped polyimide fiber also improves the mechanical properties: compared with PPSU, the tensile strength of the composites can be increased by 102%, and the bending strength by 117%.

    An Evil Backstage Manipulator: Psychological Factors Correlated with Health-Related Quality of Life in Chinese Patients with Crohn's Disease

    Health-related quality of life (HRQoL) is recommended as one of the essential parameters for evaluating treatment effect and clinical outcome in patients with Crohn's disease (CD). Recent studies have reported that psychological factors may play a role in HRQoL in Western CD patients, but sufficient evidence in Chinese CD patients is still lacking. This study investigates the correlation of various psychological factors with HRQoL in Chinese CD patients. We prospectively enrolled 40 active and 40 quiescent CD patients in China and found that psychological factors, especially neuroticism and anxiety, significantly correlate with and affect HRQoL in both the active and quiescent groups. This is the first report revealing a correlation between psychological factors and HRQoL in Chinese CD patients. We therefore expect these results to contribute to a better understanding of the etiology and to the tailoring of management in Chinese patients with Crohn's disease, and to help colleagues compare the heterogeneous characteristics of Crohn's disease across ethnic groups.

    Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks (Extended Version)

    Verifying real-world programs often requires inferring loop invariants with nonlinear constraints. This is especially true in programs that perform many numerical operations, such as control systems for avionics or industrial plants. Recently, data-driven methods for loop invariant inference have shown promise, especially on linear invariants. However, applying data-driven inference to nonlinear loop invariants is challenging due to the large number and magnitude of high-order terms, the potential for overfitting on a small number of samples, and the large space of possible inequality bounds. In this paper, we introduce a new neural architecture for general SMT learning, the Gated Continuous Logic Network (G-CLN), and apply it to nonlinear loop invariant learning. G-CLNs extend the Continuous Logic Network (CLN) architecture with gating units and dropout, which allow the model to robustly learn general invariants over large numbers of terms. To address the overfitting that arises from finite program sampling, we introduce fractional sampling, a sound relaxation of loop semantics to continuous functions that facilitates unbounded sampling on the real domain. We additionally design a new CLN activation function, the Piecewise Biased Quadratic Unit (PBQU), for naturally learning tight inequality bounds. We incorporate these methods into a nonlinear loop invariant inference system that can learn general nonlinear loop invariants. We evaluate our system on a benchmark of nonlinear loop invariants and show that it solves 26 of 27 problems, 3 more than prior work, with an average runtime of 53.3 seconds. We further demonstrate the generic learning ability of G-CLNs by solving all 124 problems in the linear Code2Inv benchmark. We also perform a quantitative stability evaluation and show that G-CLNs have a convergence rate of 97.5% on quadratic problems, a 39.2% improvement over CLN models.
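
    As a rough, hypothetical sketch of the gated-conjunction idea behind CLN-style models, the snippet below assigns each candidate linear constraint a soft truth value and lets learnable gates in [0, 1] decide which conjuncts participate; gated-out atoms contribute vacuously. The exact PBQU activation, dropout scheme and fractional sampling of the paper are not reproduced here.

    # Gated soft conjunction of candidate linear-inequality atoms (illustrative).
    import torch
    import torch.nn as nn

    class GatedConjunction(nn.Module):
        def __init__(self, n_terms, n_atoms):
            super().__init__()
            self.w = nn.Parameter(torch.randn(n_atoms, n_terms))   # one linear form per atom
            self.b = nn.Parameter(torch.zeros(n_atoms))
            self.gate_logits = nn.Parameter(torch.zeros(n_atoms))  # gate near 1: atom is kept

        def forward(self, x):
            # Soft truth value of "w.x + b <= 0" for each atom, per sample.
            atoms = torch.sigmoid(-(x @ self.w.t() + self.b))       # (batch, n_atoms)
            g = torch.sigmoid(self.gate_logits)                     # (n_atoms,)
            # Gated product t-norm: a gated-out atom contributes 1 (vacuously true).
            return torch.prod(1.0 - g * (1.0 - atoms), dim=-1)      # (batch,)

    # Training would push the conjunction toward 1 on sampled program states that
    # satisfy the (unknown) loop invariant; high-gate atoms are then read back as
    # candidate SMT constraints.
    model = GatedConjunction(n_terms=3, n_atoms=4)
    states = torch.randn(16, 3)                                      # sampled program states
    loss = (1.0 - model(states)).mean()
    loss.backward()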