
    Concatenation of the Gottesman-Kitaev-Preskill code with the XZZX surface code

    Bosonic codes provide an alternative route to quantum error correction. One important class, the Gottesman-Kitaev-Preskill (GKP) code, has attracted much recent interest. On its own, the error-correcting power of the GKP code is limited, since it can only correct small shift errors in the position and momentum quadratures. A natural way to extend GKP error correction to large-scale, fault-tolerant quantum computation is to concatenate encoded GKP states with a stabilizer code. This paper investigates the performance of the XZZX surface-GKP code, i.e., the single-mode GKP code concatenated with the XZZX surface code, under two different noise models. First, in the code-capacity noise model, the asymmetric rectangular GKP code with parameter $\lambda$ is introduced. Using a minimum-weight perfect-matching decoder combined with the continuous-variable GKP information, the optimal threshold of the XZZX surface-GKP code reaches $\sigma \approx 0.67$ at $\lambda = 2.1$, compared with the threshold $\sigma \approx 0.60$ of the standard surface-GKP code. Second, we analyze the shift errors of two-qubit gates in a realistic implementation and build the full circuit-level noise model. By choosing appropriate bias parameters, the logical error rate is reduced severalfold in some cases. These results indicate that XZZX surface-GKP codes are better suited to asymmetric concatenation under general noise models. We also estimate the overhead of the XZZX surface-GKP code: at a noise parameter of 18.5 dB ($\kappa/g \approx 0.71\%$), it uses about 291 GKP states to encode a logical qubit with an error rate of $2.53\times10^{-7}$, whereas a qubit-based surface code needs 3041 qubits to achieve almost the same logical error rate. Comment: 17 pages, 10 figures
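    The thresholds quoted above trace back to one simple quantity: the probability that a Gaussian shift of width $\sigma$ is decoded to the wrong lattice point. Below is a minimal sketch of that calculation, assuming the standard GKP binning (a shift causes a logical error when it lands nearest an odd multiple of the lattice spacing) and one common rectangular parametrization in which the $q$ spacing is stretched by $\lambda$ and the $p$ spacing shrunk by $1/\lambda$; the function names are illustrative, not the paper's.

        import numpy as np
        from scipy.stats import norm

        def gkp_quadrature_error(sigma, spacing=np.sqrt(np.pi), n_terms=20):
            """Probability that a Gaussian shift of std `sigma` rounds to an
            odd multiple of `spacing`, i.e. causes a logical error after the
            standard GKP modular measurement."""
            p = 0.0
            for n in range(-n_terms, n_terms + 1):
                lo = (2 * n + 0.5) * spacing   # interval that rounds to the
                hi = (2 * n + 1.5) * spacing   # odd multiple 2n + 1
                p += norm.cdf(hi, scale=sigma) - norm.cdf(lo, scale=sigma)
            return p

        def rectangular_gkp_error_rates(sigma, lam):
            """Per-quadrature error rates for a rectangular GKP code with
            asymmetry `lam` (assumed normalization: q spacing scaled by lam,
            p spacing by 1/lam), trading one error type against the other."""
            p_q = gkp_quadrature_error(sigma, spacing=lam * np.sqrt(np.pi))
            p_p = gkp_quadrature_error(sigma, spacing=np.sqrt(np.pi) / lam)
            return p_q, p_p

        print(rectangular_gkp_error_rates(0.5, 2.1))  # strongly biased pair

    Feeding such biased rates, edge by edge, into a matching decoder that reweights with the continuous GKP syndrome is what lifts the threshold in the asymmetric case.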

    Balanced Coarsening for Multilevel Hypergraph Partitioning via Wasserstein Discrepancy

    We propose a balanced coarsening scheme for multilevel hypergraph partitioning, together with an initial partitioning algorithm designed to improve the quality of k-way hypergraph partitioning. By assigning vertex weights through the LPT algorithm, we generate a prior hypergraph under a relaxed balance constraint. With this prior hypergraph, we define a Wasserstein discrepancy to coordinate the optimal transport of the coarsening process, and we solve for the optimal transport matrix with the Sinkhorn algorithm. Our coarsening scheme fully accounts for minimization of the connectivity metric (the objective function). For the initial partitioning stage, we define a normalized cut function induced by the Fiedler vector, which we prove to be concave. A three-point algorithm is then designed to find the best cut under the balance constraint
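    The transport solver the abstract names is standard: entropy-regularized optimal transport via Sinkhorn iterations. A minimal, self-contained sketch follows (not the paper's implementation; here the marginals r and c would encode the relaxed vertex-weight balance between fine vertices and coarse clusters).

        import numpy as np

        def sinkhorn(cost, r, c, reg=0.1, n_iter=200):
            """Entropy-regularized optimal transport (Sinkhorn-Knopp).
            cost: (m, n) cost matrix; r, c: marginals, each summing to 1.
            Returns a transport plan with row sums r and column sums c."""
            K = np.exp(-cost / reg)        # Gibbs kernel
            u = np.ones_like(r)
            for _ in range(n_iter):
                v = c / (K.T @ u)          # scale columns toward c
                u = r / (K @ v)            # scale rows toward r
            return u[:, None] * K * v[None, :]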

    DFL: High-Performance Blockchain-Based Federated Learning

    Many researchers are trying to replace the aggregation server in federated learning with a blockchain system to achieve better privacy, robustness, and scalability. In this setting, clients upload their updated models to the blockchain ledger and use a smart contract on the blockchain system to perform model averaging. However, running machine learning applications on a conventional blockchain is almost impossible: a blockchain system that takes over half a minute to generate a block is far too slow to support machine learning workloads. This paper proposes a completely new public blockchain architecture called DFL, specially optimized for distributed federated machine learning. The architecture inherits most traditional blockchain merits and achieves extremely high performance with low resource consumption by waiving global consensus. To characterize the performance and robustness of our architecture, we implement it as a prototype and test it on a physical four-node network. To test more nodes and more complex situations, we build a simulator of the network. The LeNet results indicate our system can reach over 90% accuracy on non-i.i.d. datasets even while facing model-poisoning attacks, with the blockchain consuming less than 5% of hardware resources. Comment: 11 pages, 17 figures
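    For concreteness, the model-averaging step that the on-chain smart contract performs is ordinary federated averaging. Here is a minimal sketch under the usual FedAvg weighting (per-client sample counts); the abstract does not specify DFL's actual contract logic, so this is illustrative only.

        import numpy as np

        def federated_average(updates, weights=None):
            """Weighted average of client model updates.
            `updates`: list of dicts mapping layer names to numpy arrays,
            as uploaded to the ledger; `weights`: per-client sample counts."""
            if weights is None:
                weights = [1.0] * len(updates)
            total = float(sum(weights))
            return {name: sum(w * u[name] for w, u in zip(weights, updates)) / total
                    for name in updates[0]}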

    Evaluating Large Language Models: A Comprehensive Survey

    Large language models (LLMs) have demonstrated remarkable capabilities across a broad spectrum of tasks. They have attracted significant attention and been deployed in numerous downstream applications. Nevertheless, akin to a double-edged sword, LLMs also present potential risks: they can leak private data or produce inappropriate, harmful, or misleading content. Additionally, the rapid progress of LLMs raises concerns about the potential emergence of superintelligent systems without adequate safeguards. To effectively capitalize on LLM capacities and to ensure their safe and beneficial development, it is critical to conduct rigorous and comprehensive evaluation. This survey endeavors to offer a panoramic perspective on the evaluation of LLMs. We categorize LLM evaluation into three major groups: knowledge and capability evaluation, alignment evaluation, and safety evaluation. In addition to a comprehensive review of the evaluation methodologies and benchmarks for these three aspects, we collate a compendium of evaluations of LLM performance in specialized domains, and we discuss the construction of comprehensive evaluation platforms that cover capabilities, alignment, safety, and applicability. We hope that this overview will stimulate further research interest in the evaluation of LLMs, with the ultimate goal of making evaluation a cornerstone in guiding the responsible development of LLMs, channeling their evolution in a direction that maximizes societal benefit while minimizing potential risks. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLMs-Evaluation-Papers. Comment: 111 pages

    MXene nanomaterials in biomedicine: A bibliometric perspective

    Purpose: MXenes are two-dimensional (2D) nanomaterials comprising transition metal carbides, nitrides, and carbonitrides. Their unique nanostructure gives them a special role in medical applications. However, no bibliometric studies have been conducted in this field. The aim of the present study was therefore to conduct a bibliometric analysis of the global scientific output of MXene in biomedical research, survey the current state of the field, and predict its research hotspots.
    Methods: We used the visual analysis tools CiteSpace and Bibliometrix to analyze all relevant documents published in the period 2011–2022. The bibliometric records were obtained from the Web of Science Core Collection.
    Results: A total of 1,489 publications were analyzed. China is the country with the largest number of publications, and Sichuan University is the institution with the most publications in this field. Most publications on MXene medical research appeared in journals on chemistry, materials, and physics, with ACS Applied Materials and Interfaces the most productive journal. Co-cited reference and keyword cluster analysis revealed "antibacterial" and "photothermal therapy" as the focal research keywords, and burst detection suggested that wearable electronics are a newly emergent research hotspot.
    Conclusion: Our bibliometric analysis indicates that research on MXene medical applications remains an active field of study. The current focus is on antibacterial applications of MXene that exploit its photothermal properties; wearable electronics are an emerging direction for MXene in medicine

    Detection of the Diffuse Supernova Neutrino Background with JUNO

    As an underground multi-purpose neutrino detector with 20 kton of liquid scintillator, the Jiangmen Underground Neutrino Observatory (JUNO) is competitive with and complementary to water-Cherenkov detectors in the search for the diffuse supernova neutrino background (DSNB). Typical supernova models predict 2-4 events per year within the optimal observation window in the JUNO detector. The dominant background is the neutral-current (NC) interaction of atmospheric neutrinos with 12C nuclei, which exceeds the DSNB by more than one order of magnitude. We evaluated the systematic uncertainty of the NC background from the spread of a variety of data-driven models and further developed a method to determine the NC background to within 15% with in situ measurements after ten years of running. In addition, NC-like backgrounds can be effectively suppressed by the intrinsic pulse-shape discrimination (PSD) capabilities of liquid scintillators. In this talk, I will present in detail the improvements in NC background uncertainty evaluation and PSD discriminator development, and finally the potential DSNB sensitivity of JUNO
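    The abstract does not spell out JUNO's PSD discriminator, but the classic liquid-scintillator discriminant is a tail-to-total charge ratio: heavier recoils from atmospheric-neutrino NC interactions excite more of the slow scintillation component, so a larger fraction of their charge arrives late. A generic sketch, with the window boundaries left as free parameters, is given below.

        import numpy as np

        def tail_to_total(charge, t, t_onset, t_tail):
            """Tail-to-total PSD discriminant for a digitized pulse.
            charge, t: numpy arrays of sampled charge and sample times.
            Events with slow-decaying pulses (e.g. NC-induced recoils)
            yield larger ratios than positron-like signals."""
            total = charge[t >= t_onset].sum()
            tail = charge[t >= t_tail].sum()
            return tail / total if total > 0 else 0.0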

    Potential of Core-Collapse Supernova Neutrino Detection at JUNO

    JUNO is an underground neutrino observatory under construction in Jiangmen, China. It uses 20 kton of liquid scintillator as target, which enables it to detect supernova burst neutrinos with large statistics from the next galactic core-collapse supernova (CCSN), as well as pre-supernova neutrinos from nearby CCSN progenitors. All flavors of supernova burst neutrinos can be detected by JUNO via several interaction channels, including inverse beta decay, elastic scattering on electrons and protons, and interactions on 12C nuclei. This gives JUNO the possibility of reconstructing the energy spectra of supernova burst neutrinos of all flavors. Real-time monitoring systems based on FPGA and the DAQ are under development in JUNO, allowing prompt alerts and trigger-less data acquisition of CCSN events. The alert performance of both monitoring systems has been thoroughly studied using simulations. Moreover, once a CCSN is tagged, the system can provide fast characterizations such as directionality and the light curve

    Real-time Monitoring for the Next Core-Collapse Supernova in JUNO

    A core-collapse supernova (CCSN) is one of the most energetic astrophysical events in the Universe. The early and prompt detection of neutrinos before (pre-SN) and during the SN burst is a unique opportunity for multi-messenger observation of CCSN events. In this work, we describe the monitoring concept and present the sensitivity of the system to pre-SN and SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector under construction in South China. The real-time monitoring system is designed with both prompt monitors on the electronic boards and online monitors at the data acquisition stage, in order to ensure both the alert speed and the alert coverage of progenitor stars. Assuming a false alert rate of 1 per year, this monitoring system is sensitive to pre-SN neutrinos up to a distance of about 1.6 (0.9) kpc and to SN neutrinos up to about 370 (360) kpc for a progenitor mass of $30\,M_{\odot}$ in the case of normal (inverted) mass ordering. The pointing ability for the CCSN is evaluated using the accumulated event anisotropy of inverse beta decay interactions from pre-SN or SN neutrinos, which, along with the early alert, can play an important role in the follow-up multi-messenger observations of the next Galactic or nearby extragalactic CCSN. Comment: 24 pages, 9 figures
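    The pointing method rests on a simple kinematic fact: in inverse beta decay the neutron keeps a slight forward boost along the neutrino direction, so the unit vectors from prompt (positron) to delayed (neutron-capture) vertices are, on average, anisotropic. A minimal sketch of accumulating that anisotropy (detector-dependent bias-calibration factors omitted):

        import numpy as np

        def ibd_pointing(prompt_pos, delayed_pos):
            """Estimate the source direction from (N, 3) arrays of prompt
            and delayed reconstructed vertices of IBD candidates."""
            d = delayed_pos - prompt_pos
            unit = d / np.linalg.norm(d, axis=1, keepdims=True)
            mean = unit.mean(axis=0)            # accumulated event anisotropy
            return mean / np.linalg.norm(mean)  # unit direction estimate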