319 research outputs found
Burns caused by a monochloroacetic acid leak in a chemical plant – a case report
The patient, a 45-year-old male chemical factory worker, was burned by monochloroacetic acid discharged from a ruptured pipe. He merely flushed the wound with water and did not leave the workplace immediately. As a result, he suffered local burn symptoms, which gradually worsened. Two and a half hours after the accident, he developed symptoms of systemic poisoning, such as lethargy and dyspnoea. After thorough debridement of the wound surface and subsequent skin grafting, combined with early glucocorticoid therapy and haemofiltration, a satisfactory result was achieved, and the patient eventually recovered. With the widespread use of monochloroacetic acid in China, incidents of poisoning with this chemical are becoming increasingly common, with more than 100 cases reported in the past ten years in China alone.
AE-GPT: Using Large Language Models to Extract Adverse Events from Surveillance Reports-A Use Case with Influenza Vaccine Adverse Events
Though vaccines are instrumental in global health, mitigating infectious
diseases and pandemic outbreaks, they can occasionally lead to adverse events
(AEs). Recently, Large Language Models (LLMs) have shown promise in effectively
identifying and cataloging AEs within clinical reports. Utilizing data from the
Vaccine Adverse Event Reporting System (VAERS) from 1990 to 2016, this study
particularly focuses on AEs to evaluate LLMs' capability for AE extraction. A
variety of prevalent LLMs, including GPT-2, GPT-3 variants, GPT-4, and Llama 2,
were evaluated using the influenza vaccine as a use case. The fine-tuned GPT-3.5
model (AE-GPT) stood out with a 0.704 averaged micro F1 score for strict match
and 0.816 for relaxed match. The encouraging performance of the AE-GPT
underscores LLMs' potential in processing medical data, indicating a
significant stride towards advanced AE detection, thus presumably generalizable
to other AE extraction tasks.
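The strict- and relaxed-match micro F1 scores reported above imply span-level scoring against gold AE annotations. As a minimal sketch (my assumptions, not the paper's evaluation code): AE mentions are (start, end) character spans, a "relaxed" match only requires overlap with a not-yet-matched gold span, and the toy spans at the bottom are purely illustrative.

```python
# Hedged sketch of strict vs. relaxed micro-F1 for span extraction.
# Assumptions (not from the paper): spans are (start, end) character offsets,
# and a "relaxed" hit is any overlap with a not-yet-matched gold span.

def overlaps(a, b):
    """True if two (start, end) spans share at least one character."""
    return a[0] < b[1] and b[0] < a[1]

def micro_f1(gold_docs, pred_docs, relaxed=False):
    tp = fp = fn = 0
    for gold, pred in zip(gold_docs, pred_docs):
        matched = set()
        for p in pred:
            hit = next((i for i, g in enumerate(gold)
                        if i not in matched
                        and (overlaps(p, g) if relaxed else p == g)), None)
            if hit is None:
                fp += 1
            else:
                matched.add(hit)
                tp += 1
        fn += len(gold) - len(matched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One report, two gold AE spans, two predictions (one exact, one partial overlap).
gold = [[(10, 18), (30, 41)]]
pred = [[(10, 18), (32, 41)]]
print(micro_f1(gold, pred, relaxed=False))  # strict match:  0.5
print(micro_f1(gold, pred, relaxed=True))   # relaxed match: 1.0
```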
Black-box Dataset Ownership Verification via Backdoor Watermarking
Deep learning, especially deep neural networks (DNNs), has been widely and
successfully adopted in many critical applications for its high effectiveness
and efficiency. The rapid development of DNNs has benefited from the existence
of some high-quality datasets (e.g., ImageNet), which allow researchers and
developers to easily verify the performance of their methods. Currently, almost
all existing released datasets require that they can only be adopted for
academic or educational purposes rather than commercial purposes without
permission. However, there is still no good way to ensure that. In this paper,
we formulate the protection of released datasets as verifying whether they are
adopted for training a (suspicious) third-party model, where defenders can only
query the model while having no information about its parameters and training
details. Based on this formulation, we propose to embed external patterns via
backdoor watermarking for the ownership verification to protect them. Our
method contains two main parts, including dataset watermarking and dataset
verification. Specifically, we exploit poison-only backdoor attacks (e.g.,
BadNets) for dataset watermarking and design a hypothesis-test-guided method
for dataset verification. We also provide some theoretical analyses of our
methods. Experiments on multiple benchmark datasets of different tasks are
conducted, which verify the effectiveness of our method. The code for
reproducing the main experiments is available at
\url{https://github.com/THUYimingLi/DVBW}.
Comment: This paper is accepted by IEEE TIFS. 15 pages. The preliminary short
version of this paper was posted on arXiv (arXiv:2010.05821) and presented in
a non-archival NeurIPS Workshop (2020).
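As a rough, hedged illustration of the hypothesis-test-guided verification step described above (the general black-box idea, not necessarily the authors' exact statistic), the defender can compare the suspicious model's predicted probability for the watermark target class on benign samples against the same samples stamped with the trigger, and reject the "not trained on our dataset" hypothesis when the difference is significant. All names and numbers below are placeholders.

```python
# Hedged sketch: black-box ownership verification via a one-sided paired t-test
# on target-class probabilities. Illustrative only; the concrete test used in
# the paper may differ.
import numpy as np
from scipy import stats

def verify_ownership(p_benign, p_triggered, alpha=0.01):
    """p_benign / p_triggered: target-class probabilities returned by the
    suspicious model on clean samples and on the same samples with the
    backdoor trigger stamped on."""
    _, p_value = stats.ttest_rel(p_triggered, p_benign, alternative="greater")
    return p_value < alpha, p_value

# Toy example with synthetic probabilities (hypothetical numbers).
rng = np.random.default_rng(0)
p_benign = rng.uniform(0.0, 0.2, size=100)      # clean inputs rarely hit the target class
p_triggered = rng.uniform(0.6, 1.0, size=100)   # trigger-stamped inputs usually do
flagged, p = verify_ownership(p_benign, p_triggered)
print(flagged, p)  # expected: True with a very small p-value
```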
Towards Robust Model Watermark via Reducing Parametric Vulnerability
Deep neural networks are valuable assets considering their commercial
benefits and huge demands for costly annotation and computation resources. To
protect the copyright of DNNs, backdoor-based ownership verification has recently
become popular, in which the model owner can watermark the model by embedding
a specific backdoor behavior before releasing it. The defenders (usually the
model owners) can identify whether a suspicious third-party model is ``stolen''
from them based on the presence of the behavior. Unfortunately, these
watermarks have been shown to be vulnerable to removal attacks, even simple ones
such as fine-tuning. To further explore this vulnerability, we investigate the
parameter space and find there exist many watermark-removed models in the
vicinity of the watermarked one, which may be easily used by removal attacks.
Inspired by this finding, we propose a mini-max formulation to find these
watermark-removed models and recover their watermark behavior. Extensive
experiments demonstrate that our method improves the robustness of the model
watermarking against parametric changes and numerous watermark-removal attacks.
The code for reproducing our main experiments is available at
\url{https://github.com/GuanhaoGan/robust-model-watermarking}.
Comment: This paper is accepted by ICCV 2023.
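Read literally, the mini-max formulation above has an inner step that perturbs the weights within a small neighbourhood to erase the watermark, and an outer step that restores watermark behaviour at that perturbed point. The PyTorch snippet below is a simplified, hypothetical rendering of that idea rather than the authors' released training code; the model, batches, epsilon, and learning rates are placeholder assumptions.

```python
# Hedged sketch of one mini-max watermarking step (illustrative, simplified).
import torch

def minimax_watermark_step(model, optimizer, wm_batch, clean_batch,
                           loss_fn, epsilon=0.01, inner_lr=0.1, inner_steps=1):
    x_wm, y_wm = wm_batch          # trigger-stamped inputs with the target label
    x_clean, y_clean = clean_batch # ordinary training data

    # Inner maximization: move to a nearby "watermark-removed" model.
    backup = [p.detach().clone() for p in model.parameters()]
    for _ in range(inner_steps):
        loss_wm = loss_fn(model(x_wm), y_wm)
        grads = torch.autograd.grad(loss_wm, list(model.parameters()))
        with torch.no_grad():
            for p, g, p0 in zip(model.parameters(), grads, backup):
                p.add_(inner_lr * g)  # ascend the watermark loss
                # project back into an epsilon-ball around the original weights
                p.copy_(torch.max(torch.min(p, p0 + epsilon), p0 - epsilon))

    # Outer minimization: recover watermark (and clean) behaviour at that point.
    loss = loss_fn(model(x_wm), y_wm) + loss_fn(model(x_clean), y_clean)
    optimizer.zero_grad()
    loss.backward()

    # Restore the original weights before applying the outer update, so the
    # gradients computed at the perturbed point update the unperturbed model.
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), backup):
            p.copy_(p0)
    optimizer.step()
    return loss.item()
```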
- …