
    A comprehensive study of vector leptoquark with $U(1)_{B_3-L_2}$ on the $B$-meson and Muon g-2 anomalies

    Recently reported anomalies in various $B$ meson decays and also in the anomalous magnetic moment of the muon $(g-2)_\mu$ motivate us to consider a particular extension of the standard model incorporating new interactions in the lepton and quark sectors simultaneously. Our minimal choice would be a leptoquark. In particular, we take the vector leptoquark ($U_1$) and comprehensively study all related observables including $(g-2)_{\mu}$, $R_{K^{(*)}}$, $R_{D^{(*)}}$, and $B \to K^{(*)} \ell \ell'$, where $\ell\ell'$ are various combinations of $\mu$ and $\tau$, and also lepton flavor violation in $\tau$ decays. We find that a hybrid scenario with an additional $U(1)_{B_3-L_2}$ gauge boson provides a common explanation of all these anomalies. Comment: 16 pages, 3 figures

    KoSBi: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Application

    Large language models (LLMs) learn not only natural text generation abilities but also social biases against different demographic groups from real-world data. This poses a critical risk when deploying LLM-based applications. Existing research and resources are not readily applicable in South Korea due to the differences in language and culture, both of which significantly affect the biases and targeted demographic groups. This limitation requires localized social bias datasets to ensure the safe and effective deployment of LLMs. To this end, we present KoSBi, a new social bias dataset of 34k pairs of contexts and sentences in Korean covering 72 demographic groups in 15 categories. We find that through filtering-based moderation, social biases in generated content can be reduced by 16.47%p on average for HyperCLOVA (30B and 82B) and GPT-3. Comment: 17 pages, 8 figures, 12 tables, ACL 202

    SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration

    The potential social harms that large language models pose, such as generating offensive content and reinforcing biases, are steeply rising. Existing works focus on coping with this concern while interacting with ill-intentioned users, such as those who explicitly make hate speech or elicit harmful responses. However, discussions on sensitive issues can become toxic even if the users are well-intentioned. For safer models in such scenarios, we present the Sensitive Questions and Acceptable Response (SQuARe) dataset, a large-scale Korean dataset of 49k sensitive questions with 42k acceptable and 46k non-acceptable responses. The dataset was constructed leveraging HyperCLOVA in a human-in-the-loop manner based on real news headlines. Experiments show that acceptable response generation significantly improves for HyperCLOVA and GPT-3, demonstrating the efficacy of this dataset. Comment: 19 pages, 10 figures, ACL 202

    Upgrade of Online Storage and Express-Reconstruction System for the Belle II experiment

    The backend of the Belle II data acquisition system consists of a high-level trigger system, online storage, and an express-reconstruction system for online data processing. The high-level trigger system was updated from the old ring-buffer and TCP/IP-socket transport to the ZeroMQ networking library, and the new system is operating successfully. However, the online storage and express-reconstruction system still use the old data transport. For future maintainability, we extend the same ZeroMQ-based system to the online storage and express-reconstruction system. At the same time, we introduce two further updates in the backend system. First, the online-side raw data output is changed to the compressed ROOT format, the official format of Belle II data. This update reduces the bandwidth of the online-to-offline data transfer and the offline-side computing resources spent on data format conversion and compression. Second, event selection based on the high-level trigger output is added to the online storage. This selection provides higher statistics for data quality monitoring in the express-reconstruction system. In this presentation, we describe the upgrade and show test results obtained before applying it to beam operation and data taking.
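    The two backend updates described above (trigger-based event selection and compressed output) can be sketched in outline. The event structure and field names (`hlt_accept`, `payload`) below are hypothetical, and Python's `zlib` stands in for ROOT's internal compression; this is a minimal illustration, not the actual Belle II storage code:

    ```python
    import zlib

    def select_and_compress(events):
        """Keep only events accepted by the high-level trigger and
        compress their raw payloads before writing to online storage.
        Field names are hypothetical placeholders."""
        stored = []
        for event in events:
            if not event["hlt_accept"]:        # HLT-output-based event selection
                continue
            raw = event["payload"]
            stored.append(zlib.compress(raw))  # stand-in for compressed ROOT output
        return stored

    # Usage: detector payloads are often highly repetitive, so compressing
    # on the online side cuts the online-to-offline transfer bandwidth.
    events = [
        {"hlt_accept": True,  "payload": b"ADC" * 1000},
        {"hlt_accept": False, "payload": b"NOISE" * 1000},
        {"hlt_accept": True,  "payload": b"TDC" * 1000},
    ]
    stored = select_and_compress(events)
    print(len(stored))                           # 2 events pass the selection
    print(len(stored[0]) < len(b"ADC" * 1000))   # True: compressed output is smaller
    ```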

    Improved HLT Framework for Belle II Experiment

    The original Belle II HLT framework was previously upgraded by replacing the old IPC-based ring buffer with ZeroMQ data transport to overcome an unexpected IPC locking problem. The new framework has been working stably in beam runs so far, but it lacks the capability to recover from a processing fault without stopping the ongoing data taking. In addition, compatibility with the offline framework (basf2), which the original maintained, was lost. To solve these issues, an improved core processing framework is developed based on the original basf2, while keeping the existing ZeroMQ data transport between the servers unchanged. The new core framework, zmq-basf2, implements lock-free 1-to-N and N-to-1 data transport using ZeroMQ IPC sockets, so that it keeps 100% compatibility with the original ring-buffer-based framework. When a processing fault occurs, the affected event is salvaged from the input buffer and sent directly to the output using a ZeroMQ broadcast. The terminated process is automatically restarted without stopping data taking. This contribution describes the details of the improved Belle II HLT framework together with the results of a performance test in the real Belle II DAQ data flow.
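    The salvage-and-restart behavior described above can be illustrated with a single-process sketch. The reconstruction step and event fields here are hypothetical stand-ins, and an in-process loop replaces the ZeroMQ IPC sockets and worker processes; the point is only the control flow, where a faulty event bypasses processing and reaches the output while the run continues:

    ```python
    def process_event(event):
        """Hypothetical reconstruction step; raises on a corrupt event."""
        if event.get("corrupt"):
            raise ValueError("unpackable event")
        return {**event, "reconstructed": True}

    def run_with_salvage(events):
        """Sketch of the zmq-basf2 fault handling: a faulty event is
        forwarded to the output unprocessed (the salvage path) and the
        worker is restarted, so data taking never stops."""
        output, restarts = [], 0
        for event in events:
            try:
                output.append(process_event(event))
            except Exception:
                output.append({**event, "salvaged": True})  # raw event straight to output
                restarts += 1  # stands in for respawning the crashed worker process
        return output, restarts

    events = [{"id": 1}, {"id": 2, "corrupt": True}, {"id": 3}]
    out, restarts = run_with_salvage(events)
    print(len(out), restarts)  # 3 1 -- every event reaches the output; one worker restart
    ```

    In the real framework the same decision is made by the transport layer rather than a try/except around one function call, but the invariant is identical: no event is lost and no run stop is triggered by a single processing fault.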