
    Two-dimensional representations of the genus two surface group

    Let $\Pi$ denote the fundamental group of the closed surface of genus 2. For any quadratically closed ring $R$ with 2 invertible, we classify irreducible representations $\Pi\to{\rm SL}(2,R)$ up to conjugacy by giving explicit formulas. Comment: 6 pages
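    For orientation, here is a minimal sketch of the setup using only standard facts (the explicit classifying formulas are the paper's contribution and are not reproduced here): the genus-2 surface group is a one-relator group, and a representation into ${\rm SL}(2,R)$ amounts to four matrices satisfying that relator.

```latex
% Standard one-relator presentation of the genus-2 surface group (textbook fact).
\[
  \Pi \;=\; \bigl\langle\, a_1, b_1, a_2, b_2 \;\bigm|\; [a_1,b_1]\,[a_2,b_2] = 1 \,\bigr\rangle,
  \qquad [x,y] := x y x^{-1} y^{-1}.
\]
% A representation \rho : \Pi \to SL(2,R) is therefore the data of matrices
% A_1, B_1, A_2, B_2 in SL(2,R) subject to the single relation
\[
  [A_1, B_1]\,[A_2, B_2] \;=\; I_2 ,
\]
% and the paper classifies the irreducible such tuples up to simultaneous conjugation.
```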

    High-speed photon correlation monitoring of amplified quantum noise by chaos using deep-learning balanced homodyne detection

    Precision experimental determination of photon correlation requires massive amounts of data and extensive measurement time. We present a technique to monitor the second-order photon correlation $g^{(2)}(0)$ of amplified quantum noise based on wideband balanced homodyne detection and deep-learning acceleration. The quantum noise is effectively amplified by injection of a weak chaotic laser, and the $g^{(2)}(0)$ of the amplified quantum noise is measured at a real-time sample rate of 1.4 GHz. We also exploit a photon-correlation convolutional neural network that accelerates the correlation estimate from a few quadrature fluctuations, processing $g^{(2)}(0)$ in parallel for various chaos injection intensities and effective bandwidths. The deep-learning method accelerates the experimental acquisition of $g^{(2)}(0)$ with high accuracy, estimating 6107 sets of photon correlation data with a mean square error of 0.002 in 22 seconds and achieving an acceleration of three orders of magnitude in data acquisition time. This technique contributes to high-speed and precise coherence evaluation of entropy sources in secure communication and quantum imaging. Comment: 6 pages, 6 figures
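    As a point of reference, the quantity being monitored has a simple direct estimator. The sketch below is not the paper's CNN-based method (which infers $g^{(2)}(0)$ from a few quadrature fluctuations); it is the standard definition such a network would be trained to reproduce, computed from intensity or photon-number samples.

```python
# Illustrative baseline only: direct estimators of g^(2)(0) from samples, using
#     g^(2)(0) = <I^2> / <I>^2        (classical intensities)
#     g^(2)(0) = <n(n-1)> / <n>^2     (photon numbers)
# The paper's deep-learning approach replaces this brute-force averaging.
import numpy as np

def g2_from_intensity(I: np.ndarray) -> float:
    """Estimate g^(2)(0) from an array of intensity samples."""
    I = np.asarray(I, dtype=float)
    return float(np.mean(I**2) / np.mean(I)**2)

def g2_from_photon_counts(n: np.ndarray) -> float:
    """Estimate g^(2)(0) from photon-number samples."""
    n = np.asarray(n, dtype=float)
    return float(np.mean(n * (n - 1)) / np.mean(n)**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Thermal (chaotic) light has g^(2)(0) = 2; exponential intensities model it.
    thermal_I = rng.exponential(scale=1.0, size=1_000_000)
    print(f"thermal light:  g2 ~ {g2_from_intensity(thermal_I):.3f}  (expected 2)")
    # Coherent light has g^(2)(0) = 1; Poissonian counts model it.
    coherent_n = rng.poisson(lam=5.0, size=1_000_000)
    print(f"coherent light: g2 ~ {g2_from_photon_counts(coherent_n):.3f} (expected 1)")
```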

    Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations

    Existing research predominantly focuses on developing powerful large language models (LLMs) for mathematical reasoning in monolingual settings, with few explorations of how to preserve efficacy in a multilingual context. To bridge this gap, this paper pioneers exploring and training powerful Multilingual Math Reasoning (xMR) LLMs. First, by utilizing translation, we construct the first multilingual math reasoning instruction dataset, MGSM8KInstruct, encompassing ten distinct languages, thus addressing the issue of training data scarcity in xMR tasks. Based on the collected dataset, we propose different training strategies to build powerful xMR LLMs, named MathOctopus, which notably outperform conventional open-source LLMs and exhibit superiority over ChatGPT in few-shot scenarios. Notably, MathOctopus-13B reaches 47.6% accuracy, exceeding ChatGPT's 46.3% on the MGSM test set. Beyond these results, we unearth several pivotal observations and insights from extensive experiments: (1) Extending the rejection sampling strategy to the multilingual context proves effective for model performance, albeit to a limited extent. (2) Employing parallel corpora for math supervised fine-tuning (SFT) across multiple languages not only significantly enhances the models' multilingual performance but also elevates their monolingual performance. This indicates that crafting multilingual corpora is a vital strategy for enhancing model performance in a specific language, especially for mathematical reasoning tasks; for instance, MathOctopus-7B improves over its English-only counterpart from 42.2% to 50.8% on the GSM8K test set. Comment: Work in Progress
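    The parallel-corpus SFT idea can be pictured as pairing one source problem with its translations so that every language shares the same reasoning target. The sketch below is a hypothetical data-assembly step: the field names, language codes, and JSON layout are illustrative assumptions, not the actual MGSM8KInstruct schema or the paper's training pipeline.

```python
# Hypothetical sketch of assembling parallel multilingual math-SFT records from one
# GSM8K-style problem and its translations. Schema and language set are illustrative.
import json

LANGUAGES = ["en", "es", "fr", "de", "ru", "zh", "ja", "th", "sw", "bn"]  # example set

def build_parallel_examples(problem_id: str,
                            translations: dict[str, str],
                            solution: str) -> list[dict]:
    """One SFT record per available language, all sharing the same solution target."""
    records = []
    for lang in LANGUAGES:
        question = translations.get(lang)
        if question is None:           # skip languages without a translation
            continue
        records.append({
            "id": f"{problem_id}-{lang}",
            "lang": lang,
            "instruction": question,   # translated problem statement
            "output": solution,        # solution reused across languages, purely for illustration
        })
    return records

if __name__ == "__main__":
    demo = build_parallel_examples(
        "gsm8k-0001",
        {"en": "Natalia sold clips to 48 friends...", "es": "Natalia vendió clips a 48 amigos..."},
        "48 / 2 = 24 ... #### 72",
    )
    print(json.dumps(demo, ensure_ascii=False, indent=2))
```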

    Efficacy and safety of consolidation durvalumab after chemoradiation therapy for stage III non-small-cell lung cancer: a systematic review, meta-analysis, and meta-regression of real-world studies

    Background: This review aimed to pool real-world evidence on the efficacy and toxicity of consolidation durvalumab for stage III unresectable non-small cell lung cancer (NSCLC) after curative chemoradiotherapy. Methods: PubMed, CENTRAL, ScienceDirect, Embase, and Google Scholar were searched for observational studies reporting the use of durvalumab for NSCLC up to 12 April 2022. Twenty-three studies with 4,400 patients were included. Results: The pooled 1-year overall survival (OS) and progression-free survival (PFS) rates were 85% (95% CI: 81%–89%) and 60% (95% CI: 56%–64%), respectively. The pooled incidence of all-grade pneumonitis, grade ≥3 pneumonitis, and discontinuation of durvalumab due to pneumonitis was 27% (95% CI: 19%–36%), 8% (95% CI: 6%–10%), and 17% (95% CI: 12%–23%), respectively. The pooled proportion of patients experiencing endocrine, cutaneous, musculoskeletal, and gastrointestinal adverse events was 11% (95% CI: 7%–18%), 8% (95% CI: 3%–17%), 5% (95% CI: 3%–6%), and 6% (95% CI: 3%–12%), respectively. Conclusion: Meta-regression indicated that performance status significantly influenced PFS, while age, time to durvalumab, and programmed death-ligand 1 status significantly affected pneumonitis rates. Real-world evidence suggests that the short-term efficacy and safety of durvalumab are consistent with those of the PACIFIC trial. The congruence of results lends support to durvalumab use in improving outcomes of unresectable stage III NSCLC. Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022324663, identifier CRD42022324663
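    Pooling proportions such as 1-year OS rates is typically done with a random-effects model on a transformed scale. The sketch below shows one generic recipe (DerSimonian–Laird on the logit scale) with made-up per-study counts; it is an assumption for illustration, not necessarily the exact model or software the authors used.

```python
# Illustrative DerSimonian-Laird random-effects pooling of proportions on the logit
# scale. Study counts below are hypothetical, not data from the review.
import numpy as np

def pool_proportions(events: np.ndarray, totals: np.ndarray):
    """Return the pooled proportion and its 95% CI (logit scale, DL random effects)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                   # logit-transformed study proportions
    v = 1 / events + 1 / (totals - events)    # approximate variance of a logit proportion
    w = 1 / v                                 # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL tau^2
    w_re = 1 / (v + tau2)                     # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
    inv_logit = lambda x: 1 / (1 + np.exp(-x))
    return inv_logit(y_re), (inv_logit(lo), inv_logit(hi))

if __name__ == "__main__":
    # Hypothetical per-study counts: patients alive at 1 year / total patients.
    alive = np.array([85, 120, 64, 210])
    total = np.array([100, 140, 80, 250])
    pooled, ci = pool_proportions(alive, total)
    print(f"pooled 1-year OS: {pooled:.2%} (95% CI {ci[0]:.2%}-{ci[1]:.2%})")
```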

    Selective Pre-training for Private Fine-tuning

    Suppose we want to train text prediction models in email clients or word processors. The models must preserve the privacy of user data and adhere to a specific fixed size to meet memory and inference-time requirements. We introduce a generic framework to solve this problem. Specifically, we are given a public dataset $D_\text{pub}$ and a private dataset $D_\text{priv}$ corresponding to a downstream task $T$. How should we pre-train a fixed-size model $M$ on $D_\text{pub}$ and fine-tune it on $D_\text{priv}$ such that the performance of $M$ with respect to $T$ is maximized and $M$ satisfies differential privacy with respect to $D_\text{priv}$? We show that pre-training on a subset of $D_\text{pub}$ that brings the public distribution closer to the private distribution is a crucial ingredient for maximizing the transfer-learning abilities of $M$ after pre-training, especially in regimes where model sizes are relatively small. Beyond performance improvements, our framework also shows that, with careful pre-training and private fine-tuning, smaller models can match the performance of much larger models, highlighting the promise of differentially private training as a tool for model compression and efficiency.
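    The "select public data close to the private distribution" step can be sketched very simply: score each public example by similarity to an aggregate of the private data and keep the top fraction for pre-training. The embedding inputs and the centroid-based selection rule below are illustrative assumptions, not the selection criterion from the paper, and in a real pipeline this step itself would need to be privatized.

```python
# Minimal, hypothetical sketch of distribution-matching subset selection of public
# pre-training data. Embeddings are stand-ins; the private data is touched only
# through its centroid (which a real DP system would compute with added noise).
import numpy as np

def select_public_subset(pub_emb: np.ndarray,
                         priv_emb: np.ndarray,
                         keep_fraction: float = 0.2) -> np.ndarray:
    """Return indices of the public examples most similar to the private centroid."""
    centroid = priv_emb.mean(axis=0)
    centroid /= np.linalg.norm(centroid) + 1e-12
    pub_norm = pub_emb / (np.linalg.norm(pub_emb, axis=1, keepdims=True) + 1e-12)
    scores = pub_norm @ centroid                       # cosine similarity to centroid
    k = max(1, int(keep_fraction * len(pub_emb)))
    return np.argsort(scores)[::-1][:k]                # indices of top-k public examples

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pub = rng.normal(size=(10_000, 64))                # stand-in public embeddings
    priv = rng.normal(loc=0.5, size=(200, 64))         # stand-in private embeddings
    idx = select_public_subset(pub, priv, keep_fraction=0.1)
    print(f"selected {len(idx)} of {len(pub)} public examples for pre-training")
```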