31 research outputs found
A comprehensive study of vector leptoquark with $U(1)_{B_3-L_2}$ on the $B$-meson and Muon g-2 anomalies
Recently reported anomalies in various $B$-meson decays and also in the
anomalous magnetic moment of the muon motivate us to consider a
particular extension of the standard model incorporating new interactions in
the lepton and quark sectors simultaneously. Our minimal choice would be a
leptoquark. In particular, we take the vector leptoquark ($U_1$) and
comprehensively study all related observables, including $(g-2)_{\mu}$,
$R_{K^{(*)}}$, $R_{D^{(*)}}$, and lepton-flavor-violating
$B \to K^{(*)} \ell \ell'$ decays with $\mu\tau$ final states. We find that the
vector leptoquark together with the $U(1)_{B_3-L_2}$ gauge boson provides a
common explanation of all these anomalies.
Comment: 16 pages, 3 figures
KoSBi: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Application
Large language models (LLMs) learn not only natural text generation abilities
but also social biases against different demographic groups from real-world
data. This poses a critical risk when deploying LLM-based applications.
Existing research and resources are not readily applicable in South Korea due
to the differences in language and culture, both of which significantly affect
the biases and targeted demographic groups. This limitation requires localized
social bias datasets to ensure the safe and effective deployment of LLMs. To
this end, we present KoSBi, a new social bias dataset of 34k pairs of
contexts and sentences in Korean covering 72 demographic groups in 15
categories. We find that through filtering-based moderation, social biases in
generated content can be reduced by 16.47%p on average for HyperCLOVA (30B and
82B), and GPT-3.
Comment: 17 pages, 8 figures, 12 tables, ACL 2023
SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration
The potential social harms that large language models pose, such as
generating offensive content and reinforcing biases, are steeply rising.
Existing works focus on coping with this concern while interacting with
ill-intentioned users, such as those who explicitly make hate speech or elicit
harmful responses. However, discussions on sensitive issues can become toxic
even if the users are well-intentioned. For safer models in such scenarios, we
present the Sensitive Questions and Acceptable Response (SQuARe) dataset, a
large-scale Korean dataset of 49k sensitive questions with 42k acceptable and
46k non-acceptable responses. The dataset was constructed leveraging HyperCLOVA
in a human-in-the-loop manner based on real news headlines. Experiments show
that acceptable response generation significantly improves for HyperCLOVA and
GPT-3, demonstrating the efficacy of this dataset.
Comment: 19 pages, 10 figures, ACL 2023
Enzyme-Triggered Depolymerization of Polymeric Micelles for Targeted Anticancer Therapy
Enzyme-responsive Polymeric Micelles by Controlled Depolymerization for Anticancer Drug Delivery
Disassembly of Polymeric Micelles by Enzyme-Triggered Depolymerization for Targeted Tumor Therapy