Leakage Analysis and Solution of the RFID Analog Front-End
The identification and modeling of different leakage components are important for estimating and reducing leakage power, especially in low-power applications such as RFID chips. This paper proposes and validates a theory of the leakage mechanism in RFID chips. One contribution is the proposed leakage-mechanism theory itself. The other is a quantification of the differences between tape-out verification results and computer simulation results, and of the degree to which these differences occur for different circuits. When the source potential is much lower than the substrate potential, the measured and simulated results diverge substantially: the test results show that the actual leakage power is 26.3 times higher than the simulated value when the source potential is -750 mV.
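As a minimal illustration of why a source potential far below the substrate potential inflates leakage, the sketch below uses the textbook subthreshold-current model with a body-effect-shifted threshold voltage. All parameter values are illustrative assumptions, not the paper's extracted process parameters, and the resulting ratio is not meant to reproduce the 26.3x figure.

```python
# Hedged sketch: standard subthreshold-leakage model with body effect,
# showing that a source below the substrate (negative V_SB, forward body
# bias) lowers the threshold voltage and sharply increases leakage.
import math

V_T = 0.026      # thermal voltage at room temperature (V)
VTH0 = 0.45      # zero-bias threshold voltage (V), assumed
GAMMA = 0.4      # body-effect coefficient (sqrt(V)), assumed
PHI_F = 0.40     # Fermi potential (V), assumed
N = 1.5          # subthreshold slope factor, assumed
I0 = 1e-7        # process-dependent prefactor (A), assumed

def subthreshold_leakage(v_gs, v_ds, v_sb):
    """Subthreshold current with a body-effect-shifted threshold."""
    v_th = VTH0 + GAMMA * (math.sqrt(2 * PHI_F + v_sb) - math.sqrt(2 * PHI_F))
    return I0 * math.exp((v_gs - v_th) / (N * V_T)) * (1 - math.exp(-v_ds / V_T))

# Leakage with the source at substrate potential vs. 750 mV below it:
i_ref = subthreshold_leakage(v_gs=0.0, v_ds=1.0, v_sb=0.0)
i_fbb = subthreshold_leakage(v_gs=0.0, v_ds=1.0, v_sb=-0.75)
print(f"leakage ratio: {i_fbb / i_ref:.1f}x")  # grows steeply as V_SB goes negative
```

The exact ratio depends entirely on process parameters; the point of the sketch is only the exponential sensitivity of leakage to the source-substrate bias.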
Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery
While self-supervised graph pretraining techniques have shown promising
results in various domains, their application still faces challenges:
limited topology learning, dependency on human knowledge, and inadequate
multi-level interactions. To address these issues, we propose a novel solution,
Dual-level Graph self-supervised Pretraining with Motif discovery (DGPM), which
introduces a unique dual-level pretraining structure that orchestrates
node-level and subgraph-level pretext tasks. Unlike prior approaches, DGPM
autonomously uncovers significant graph motifs through an edge pooling module,
aligning learned motif similarities with graph kernel-based similarities. A
cross-matching task enables sophisticated node-motif interactions and novel
representation learning. Extensive experiments on 15 datasets validate DGPM's
effectiveness and generalizability, outperforming state-of-the-art methods in
unsupervised representation learning and transfer learning settings. The
autonomously discovered motifs demonstrate the potential of DGPM to enhance
robustness and interpretability.
Comment: 14 pages, 6 figures, accepted by AAAI'2
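One concrete reading of the subgraph-level pretext task described above is to align pairwise similarities of learned motif embeddings with precomputed graph-kernel similarities. The sketch below implements that idea under stated assumptions; the function name, the cosine-similarity choice, and the MSE alignment objective are illustrative, not the authors' exact formulation.

```python
# Hedged sketch: align learned motif similarities with graph-kernel
# similarities, as the DGPM abstract describes at a high level.
import torch
import torch.nn.functional as F

def motif_alignment_loss(motif_emb: torch.Tensor,
                         kernel_sim: torch.Tensor) -> torch.Tensor:
    """motif_emb: (M, d) embeddings of M discovered motifs.
    kernel_sim: (M, M) similarities from a graph kernel (e.g.,
    Weisfeiler-Lehman), precomputed on the motif subgraphs."""
    z = F.normalize(motif_emb, dim=-1)
    learned_sim = z @ z.t()                  # cosine similarities, (M, M)
    return F.mse_loss(learned_sim, kernel_sim)

# Toy usage: 8 motifs with 32-d embeddings against a symmetric stand-in
# matrix playing the role of the kernel similarities.
emb = torch.randn(8, 32, requires_grad=True)
k = torch.rand(8, 8)
k = (k + k.t()) / 2
loss = motif_alignment_loss(emb, k)
loss.backward()  # gradients flow to the motif embeddings
```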
Goal-Oriented Prompt Attack and Safety Evaluation for LLMs
Large Language Models (LLMs) show strong performance in text understanding
and generation. However, LLMs risk generating harmful content, especially
when deployed in applications. Several black-box attack methods, such as
prompt attacks, can change the behaviour of LLMs and induce them to generate
unexpected answers containing harmful content. Researchers are interested in
prompt attack and defense for LLMs, yet there is no publicly available
dataset with a high attack success rate for evaluating the ability to defend
against prompt attacks. In this paper, we introduce a pipeline for
constructing high-quality prompt attack samples, along with a Chinese prompt
attack dataset called CPAD. Our prompts aim to induce LLMs to generate
unexpected outputs using several carefully designed prompt attack templates
and attack content of wide concern. Unlike previous datasets for safety
evaluation, we construct the prompts along three dimensions: content,
attacking method, and goal. In particular, the attacking goal specifies the
behaviour expected after a successful attack, so the responses can be easily
evaluated and analysed. We run several popular Chinese LLMs on our dataset,
and the results show that our prompts are significantly harmful to LLMs,
with an attack success rate of around 70% against GPT-3.5. CPAD is publicly
available at
https://github.com/liuchengyuan123/CPAD
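The evaluation scheme the abstract describes follows directly from the goal dimension: because each sample names the behaviour a successful attack should elicit, a response can be scored by a per-goal predicate and aggregated into an attack success rate. The sketch below illustrates this under stated assumptions; the sample schema, goal names, and keyword predicate are hypothetical, not CPAD's actual format.

```python
# Hedged sketch: goal-oriented scoring of LLM responses, aggregated into
# an attack success rate (ASR), as the CPAD abstract suggests.
from typing import Callable, Dict, List

Sample = Dict[str, str]  # assumed schema: {"prompt": ..., "goal": ...}

def attack_success_rate(samples: List[Sample],
                        responses: List[str],
                        goal_checks: Dict[str, Callable[[str], bool]]) -> float:
    """Fraction of responses exhibiting the behaviour named by each goal."""
    hits = sum(goal_checks[s["goal"]](r) for s, r in zip(samples, responses))
    return hits / len(samples)

# Toy usage with a single hypothetical goal category and predicate.
checks = {"leak_instructions": lambda r: "step 1" in r.lower()}
data = [{"prompt": "...", "goal": "leak_instructions"}]
print(attack_success_rate(data, ["Step 1: ..."], checks))  # 1.0
```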
Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models
Neural text ranking models have witnessed significant advancement and are
increasingly being deployed in practice. Unfortunately, they also inherit the
adversarial vulnerabilities of general neural models, which have been detected
but remain underexplored in prior studies. Moreover, these inherited
vulnerabilities might be leveraged by blackhat SEO to defeat better-protected
search engines. In this study, we propose an imitation adversarial attack on
black-box neural passage ranking models. We first show that the target passage
ranking model can be made transparent and imitated by enumerating critical
queries/candidates and then training a ranking imitation model. Leveraging the
ranking imitation model, we can elaborately manipulate the ranking results and
transfer the manipulation attack to the target ranking model. For this purpose,
we propose an innovative gradient-based attack method, empowered by a pairwise
objective function, to generate adversarial triggers that cause premeditated
disorder with very few tokens. To camouflage the triggers, we add a
next-sentence-prediction loss and a language-model fluency constraint to the
objective function. Experimental results on passage ranking demonstrate the
effectiveness of the ranking imitation model and adversarial triggers against
various state-of-the-art neural ranking models. Furthermore, mitigation
analyses and human evaluation show that the camouflage remains effective
against potential mitigation approaches. To encourage further investigation of
this novel and important problem, we make the experiment data and code
publicly available.
Comment: 15 pages, 4 figures, accepted by ACM CCS 2022, Best Paper Nomination
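One plausible shape for the objective the abstract describes is a pairwise margin term that pushes the trigger-bearing passage above passages currently ranked over it, regularized by fluency and next-sentence-prediction losses. The sketch below is an assumption about that combination; the margin, the weights, and the function signature are illustrative, not the paper's hyperparameters or exact loss.

```python
# Hedged sketch: pairwise-margin trigger objective with fluency/NSP
# regularizers, in the spirit of the attack the abstract outlines.
import torch

def trigger_objective(score_adv: torch.Tensor,     # ranker score of passage+trigger
                      scores_above: torch.Tensor,  # scores of passages ranked above it
                      lm_nll: torch.Tensor,        # language-model NLL of the trigger
                      nsp_loss: torch.Tensor,      # next-sentence-prediction loss
                      margin: float = 1.0,
                      alpha: float = 0.1,
                      beta: float = 0.1) -> torch.Tensor:
    # Pairwise hinge: penalize every passage still scored above the target.
    pairwise = torch.clamp(margin + scores_above - score_adv, min=0).mean()
    return pairwise + alpha * lm_nll + beta * nsp_loss

# Toy usage: gradients w.r.t. the adversarial score (and, in a real attack,
# the trigger token embeddings) could drive a HotFlip-style discrete token
# search -- an assumption about the search step, not a documented detail.
score_adv = torch.tensor(0.2, requires_grad=True)
scores_above = torch.tensor([0.5, 0.9])
loss = trigger_objective(score_adv, scores_above,
                         torch.tensor(2.3), torch.tensor(0.7))
loss.backward()
```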