55 research outputs found

    Correction to: Mining a stroke knowledge graph from literature

    Get PDF
    From Springer Nature via Jisc Publications Router. History: registration 2021-11-30, collection 2021-12, pub-electronic 2021-12-08, online 2021-12-08. Publication status: Published

    Mining a stroke knowledge graph from literature

    Get PDF
    From Springer Nature via Jisc Publications Router. History: collection 2021-05, received 2021-06-13, accepted 2021-07-06, registration 2021-07-09, pub-electronic 2021-07-29, online 2021-07-29. Publication status: Published. Funders: National High-level Personnel for Defense Technology Program (Grant 2017-JCJQ-ZQ-013) and NSF 61902405; the National Key R&D Project by the Ministry of Science and Technology of China (Grant 2018YFB1003203); the Open Fund from the State Key Laboratory of High Performance Computing (Grant No. 201901-11); National Science Foundation of China (Grant U1811462). Abstract: Background: Stroke has an acute onset and a high mortality rate, making it one of the most fatal diseases worldwide. Its underlying biology and treatments have been widely studied both in "Western" biomedicine and in Traditional Chinese Medicine (TCM). However, these two approaches are often studied and reported in isolation, both in the literature and in associated databases. Results: To aid research in finding effective prevention methods and treatments, we integrated knowledge from the literature and a number of databases (e.g. CID, TCMID, ETCM). We employed a suite of biomedical text mining (i.e. named-entity recognition) approaches to identify mentions of genes, diseases, drugs, chemicals, symptoms, Chinese herbs and patent medicines, etc. in a large set of stroke papers from both the biomedical and TCM domains. Then, using a combination of a rule-based approach and a pre-trained BioBERT model, we extracted and classified links and relationships among stroke-related entities as expressed in the literature. We constructed StrokeKG, a knowledge graph that includes almost 46k nodes of nine types and 157k links of 30 types, connecting diseases, genes, symptoms, drugs, pathways, herbs, chemicals, ingredients and patent medicines. 
Conclusions: Our StrokeKG can provide practical and reliable stroke-related knowledge to support stroke-related research, such as exploring new directions for stroke research and generating ideas for drug repurposing and discovery. We make StrokeKG freely available at http://114.115.208.144:7474/browser/ (please click "Connect" directly) and the source structured data for stroke at https://github.com/yangxi1016/Strok
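The pipeline the abstract describes (entity recognition, then relation extraction, with the resulting triples assembled into a typed graph) can be sketched with a toy example. The triples, entity names and relation types below are hypothetical stand-ins, not data from StrokeKG, and plain dicts stand in for the Neo4j store the project actually serves:

```python
# Minimal sketch: assembling extracted (head, relation, tail) triples into a
# knowledge graph, using plain dicts as the graph store.
# The triples below are hypothetical illustrations, not real StrokeKG data.

triples = [
    ("stroke", "has_symptom", "hemiparesis"),
    ("aspirin", "treats", "stroke"),
    ("stroke", "associated_gene", "NOTCH3"),
    ("danshen", "treats", "stroke"),        # a TCM herb linked to the same disease node
]

def build_graph(triples):
    nodes = set()
    edges = {}                              # relation type -> list of (head, tail) pairs
    for head, rel, tail in triples:
        nodes.update([head, tail])
        edges.setdefault(rel, []).append((head, tail))
    return nodes, edges

nodes, edges = build_graph(triples)
print(len(nodes), sorted(edges))
```

The point of the shared "stroke" node is visible even in this toy: biomedical drugs and TCM herbs end up connected to the same disease vertex, which is what enables cross-domain queries.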

    LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors

    Full text link
    Prompt-tuning has emerged as an attractive paradigm for deploying large-scale language models due to its strong downstream task performance and efficient multitask serving ability. Despite its wide adoption, we empirically show that prompt-tuning is vulnerable to downstream task-agnostic backdoors, which reside in the pretrained models and can affect arbitrary downstream tasks. State-of-the-art backdoor detection approaches cannot defend against task-agnostic backdoors since they hardly converge in reversing the backdoor triggers. To address this issue, we propose LMSanitator, a novel approach for detecting and removing task-agnostic backdoors on Transformer models. Instead of directly inverting the triggers, LMSanitator aims to invert the predefined attack vectors (the pretrained model's output when the input is embedded with triggers) of the task-agnostic backdoors, which achieves much better convergence performance and backdoor detection accuracy. LMSanitator further leverages prompt-tuning's property of freezing the pretrained model to perform accurate and fast output monitoring and input purging during the inference phase. Extensive experiments on multiple language models and NLP tasks illustrate the effectiveness of LMSanitator. For instance, LMSanitator achieves 92.8% backdoor detection accuracy on 960 models and decreases the attack success rate to less than 1% in most scenarios. Comment: To appear in the Network and Distributed System Security (NDSS) Symposium 2024, 26 February - 1 March 2024, San Diego, CA, USA; typos corrected
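The output-monitoring idea (compare a frozen model's output embedding against inverted attack vectors and flag inputs that land too close to one) can be illustrated with a minimal sketch. The vectors, threshold, and function names are hypothetical illustrations, not LMSanitator's actual implementation:

```python
# Minimal sketch of output monitoring: an input is flagged if the frozen model's
# output embedding is nearly parallel to any inverted attack vector.
# Vectors and the threshold below are hypothetical illustrations.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def is_suspicious(output_vec, attack_vecs, threshold=0.9):
    """Flag an input whose output embedding matches any inverted attack vector."""
    return any(cosine(output_vec, av) >= threshold for av in attack_vecs)

attack_vecs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # inverted attack vectors (toy)
clean_out = [0.1, 0.2, 0.97]                        # far from both attack vectors
triggered_out = [0.99, 0.05, 0.0]                   # nearly parallel to the first
print(is_suspicious(clean_out, attack_vecs), is_suspicious(triggered_out, attack_vecs))
```

Because prompt-tuning freezes the pretrained model, the attack vectors stay valid across downstream tasks, which is what makes this per-inference check cheap.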

    High-performance and Scalable Software-based NVMe Virtualization Mechanism with I/O Queues Passthrough

    Full text link
    NVMe (Non-Volatile Memory Express) is an industry standard for solid-state drives (SSDs) that has been widely adopted in data centers. NVMe virtualization is crucial in cloud computing as it allows virtualized NVMe devices to be used by virtual machines (VMs), thereby improving the utilization of storage resources. However, traditional software-based solutions offer flexibility but often at the cost of performance degradation or high CPU overhead. On the other hand, hardware-assisted solutions offer high performance and low CPU usage, but their adoption is often limited by the need for special hardware support or the requirement for new hardware development. In this paper, we propose LightIOV, a novel software-based NVMe virtualization mechanism that achieves high performance and scalability without consuming valuable CPU resources and without requiring special hardware support. LightIOV can support thousands of VMs on each server. The key idea behind LightIOV is NVMe hardware I/O queue passthrough, which enables VMs to directly access the I/O queues of NVMe devices, thus eliminating virtualization overhead and providing near-native performance. Results from our experiments show that LightIOV provides performance comparable to VFIO, with IOPS reaching 97.6%-100.2% of VFIO's. Furthermore, in high-density VM environments, LightIOV achieves 31.4% lower latency than SPDK-Vhost when running 200 VMs, and an improvement of 27.1% in OPS performance in real-world applications.
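The key idea (handing each VM exclusive ownership of hardware I/O queues so the guest drives them directly, with the host only doing bookkeeping) can be sketched as a toy allocator. Queue counts, names and the class below are hypothetical illustrations, not LightIOV's actual interface:

```python
# Minimal sketch of the host-side bookkeeping behind I/O-queue passthrough:
# each VM gets exclusive hardware queue IDs; the guest then accesses those
# queues directly, with no host software on the I/O path.
# Counts and names are hypothetical, not LightIOV's API.

class QueueAllocator:
    def __init__(self, num_hw_queues):
        self.free = list(range(num_hw_queues))   # unassigned hardware queue IDs
        self.owner = {}                          # queue ID -> VM name

    def attach(self, vm, count=1):
        if len(self.free) < count:
            raise RuntimeError("no free hardware queues")
        qids = [self.free.pop() for _ in range(count)]
        for q in qids:
            self.owner[q] = vm                   # exclusive ownership per VM
        return qids

    def detach(self, vm):
        released = [q for q, o in self.owner.items() if o == vm]
        for q in released:
            del self.owner[q]
            self.free.append(q)
        return released

alloc = QueueAllocator(num_hw_queues=8)
q1 = alloc.attach("vm1", count=2)
q2 = alloc.attach("vm2", count=2)
```

Exclusivity is the point: because no two VMs share a queue, the host never has to multiplex submissions, which is where software-based solutions usually pay their CPU cost.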

    cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs

    Get PDF
    Zero-knowledge proof is a critical cryptographic primitive. Its most practical type, called zero-knowledge Succinct Non-interactive ARgument of Knowledge (zkSNARK), has been deployed in various privacy-preserving applications such as cryptocurrencies and verifiable machine learning. Unfortunately, zkSNARKs such as Groth16 have a high overhead in their proof generation step, which consists of several time-consuming operations, including large-scale matrix-vector multiplication (MUL), number-theoretic transform (NTT), and multi-scalar multiplication (MSM). Therefore, this paper presents cuZK, an efficient GPU implementation of zkSNARK with the following three techniques to achieve high performance. First, we propose a new parallel MSM algorithm. This MSM algorithm achieves nearly perfect linear speedup over the Pippenger algorithm, a well-known serial MSM algorithm. Second, we parallelize the MUL operation. Along with our self-designed MSM scheme and well-studied NTT scheme, cuZK achieves the parallelization of all operations in the proof generation step. Third, cuZK reduces the latency overhead caused by CPU-GPU data transfer by 1) reducing redundant data transfer and 2) overlapping data transfer and device computation. The evaluation results show that our MSM module provides over 2.08× (up to 2.94×) speedup versus the state-of-the-art GPU implementation. cuZK achieves over 2.65× (up to 4.86×) speedup on standard benchmarks and 2.18× speedup on a GPU-accelerated cryptocurrency application, Filecoin.
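The serial Pippenger algorithm that cuZK's parallel MSM is measured against splits each scalar into c-bit windows, sums points into per-digit buckets, and combines buckets with a running-sum trick. A minimal sketch, using the additive group of integers mod M as a stand-in for elliptic-curve points (the modulus and inputs are toy values, not a real curve):

```python
# Windowed (Pippenger-style) multi-scalar multiplication, serial sketch.
# Stand-in group: integers under addition mod M; real MSM adds EC points.
M = 2**31 - 1  # toy modulus (assumption, not a curve order)

def naive_msm(scalars, points):
    return sum(s * p for s, p in zip(scalars, points)) % M

def pippenger_msm(scalars, points, c=4):
    nbits = max(s.bit_length() for s in scalars)
    nwin = (nbits + c - 1) // c                 # number of c-bit windows
    total = 0
    for w in reversed(range(nwin)):             # most significant window first
        for _ in range(c):                      # shift accumulator left by c bits
            total = (total + total) % M
        buckets = [0] * (1 << c)                # bucket j: sum of points with digit j
        for s, p in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] = (buckets[digit] + p) % M
        running = 0                             # running-sum trick computes
        acc = 0                                 # sum_j j * buckets[j] with O(2^c) adds
        for j in range((1 << c) - 1, 0, -1):
            running = (running + buckets[j]) % M
            acc = (acc + running) % M
        total = (total + acc) % M
    return total
```

The bucket step is what the paper parallelizes: in the serial form above, each window's buckets are filled one scalar at a time, which is the dominant cost for large instance sizes.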

    Emittance Measurements of Trapped Electrons from a Plasma Wakefield Accelerator

    Get PDF
    Recent electron beam driven plasma wakefield accelerator experiments carried out at SLAC showed trapping of plasma electrons. These trapped electrons appeared on an energy spectrometer with smaller transverse size than the beam driving the wake. A connection is made between transverse size and emittance; due to the spectrometer's resolution, this connection allows for placing an upper limit on the trapped electron emittance. The upper limit for the lowest normalized emittance measured in the experiment is 1 mm·mrad.
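The size-to-emittance connection is the standard beam-optics relation σ = √(εβ), so a resolution-limited spot size bounds the geometric emittance from above, and multiplying by the Lorentz factor gives the normalized emittance. A sketch with purely hypothetical numbers (not the experiment's actual parameters):

```python
# Upper bound on emittance from a measured (resolution-limited) spot size:
#   sigma = sqrt(eps * beta)  =>  eps <= sigma_max**2 / beta
# Normalized emittance multiplies the geometric emittance by gamma.
# All numeric values below are hypothetical illustrations.

def emittance_upper_limit(sigma_max_m, beta_m, gamma):
    eps_geo = sigma_max_m ** 2 / beta_m     # geometric emittance [m*rad]
    return gamma * eps_geo                  # normalized emittance [m*rad]

eps_n = emittance_upper_limit(sigma_max_m=10e-6, beta_m=0.05, gamma=1000)
print(eps_n * 1e6, "mm*mrad")               # 1 m*rad = 1e6 mm*mrad
```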