1,382 research outputs found

    Collisional interaction limits between dark matters and baryons in `cooling flow' clusters

    Full text link
    Presuming weak collisional interactions to exchange the kinetic energy between dark matter and baryonic matter in a galaxy cluster, we re-examine the effectiveness of this process in several `cooling flow' galaxy clusters using available X-ray observations and infer an upper limit on the heavy dark matter particle (DMP)−-proton cross section σxp\sigma_{\rm xp}. With a relative collisional velocity V−V-dependent power-law form of σxp=σ0(V/103kms−1)a\sigma_{\rm xp}=\sigma_0(V/10^3 {\rm km s^{-1}})^a where a≤0a\leq 0, our inferred upper limit is \sigma_0/m_{\rm x}\lsim 2\times10^{-25} {\rm cm}^2 {\rm GeV}^{-1} with mxm_{\rm x} being the DMP mass. Based on a simple stability analysis of the thermal energy balance equation, we argue that the mechanism of DMP−-baryon collisional interactions is unlikely to be a stable nongravitational heating source of intracluster medium (ICM) in inner core regions of `cooling flow' galaxy clusters.Comment: 8 pages, 2 figures, MNRAS accepte

    Memanti­nium chloride 0.1-hydrate

    Get PDF
    The crystal structure of the title compound, C12H22N+·Cl−·0.1H2O, consists of (3,5-dimethyl-1-adamantyl)ammonium chloride (memanti­nium chloride) and uncoordinated water mol­ecules. The four six-membered rings of the memanti­nium cation assume typical chair conformations. The Cl− counter-anion links with the memanti­nium cation via N—H⋯Cl hydrogen bonding, forming channels where the disordered crystal water molecules are located. The O atom of the water mol­ecule is located on a threefold rotation axis, its two H atoms symmetrically distributed over six sites; the water mol­ecule links with the Cl− anions via O—H⋯Cl hydrogen bonding

    PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

    Full text link
    Prompts have significantly improved the performance of pretrained Large Language Models (LLMs) on various downstream tasks recently, making them increasingly indispensable for a diverse range of LLM application scenarios. However, the backdoor vulnerability, a serious security threat that can maliciously alter the victim model's normal predictions, has not been sufficiently explored for prompt-based LLMs. In this paper, we present POISONPROMPT, a novel backdoor attack capable of successfully compromising both hard and soft prompt-based LLMs. We evaluate the effectiveness, fidelity, and robustness of POISONPROMPT through extensive experiments on three popular prompt methods, using six datasets and three widely used LLMs. Our findings highlight the potential security threats posed by backdoor attacks on prompt-based LLMs and emphasize the need for further research in this area.Comment: To Appear in IEEE ICASSP 2024, code is available at: https://github.com/grasses/PoisonPromp

    Does Differential Privacy Prevent Backdoor Attacks in Practice?

    Full text link
    Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been utilized to secure machine learning (ML) models from poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is required to assess the effectiveness of different DP techniques in preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time in literature, examine PATE in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks and will show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyperparameters and the number of backdoors in the training dataset impact the success of DP algorithms. Additionally, we propose Label-DP as a faster and more accurate alternative to DP-SGD and PATE. We conclude that while Label-DP algorithms generally offer weaker privacy protection, accurate hyper-parameter tuning can make them more effective than DP methods in defending against backdoor attacks while maintaining model accuracy
    • …
    corecore